From 40e44dfcd66a78f898ba96198d41d42e4ac7547b Mon Sep 17 00:00:00 2001
From: Ying Hu
Date: Tue, 6 May 2025 13:21:31 +0800
Subject: [PATCH] Update README.md of ChatQnA for broken URL (#1907)

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Neo Zhang Jianyu
---
 ChatQnA/README.md | 29 +++++++++++++++--------------
 1 file changed, 15 insertions(+), 14 deletions(-)

diff --git a/ChatQnA/README.md b/ChatQnA/README.md
index cedfd8637..75de89d1b 100644
--- a/ChatQnA/README.md
+++ b/ChatQnA/README.md
@@ -96,20 +96,21 @@ flowchart LR

 The table below lists currently available deployment options. They outline in detail the implementation of this example on selected hardware.

-| Category | Deployment Option | Description |
-| ----------------------- | ---------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| On-premise Deployments | Docker compose | [ChatQnA deployment on Xeon](./docker_compose/intel/cpu/xeon) |
-| | | [ChatQnA deployment on AI PC](./docker_compose/intel/cpu/aipc) |
-| | | [ChatQnA deployment on Gaudi](./docker_compose/intel/hpu/gaudi) |
-| | | [ChatQnA deployment on Nvidia GPU](./docker_compose/nvidia/gpu) |
-| | | [ChatQnA deployment on AMD ROCm](./docker_compose/amd/gpu/rocm) |
-| | Kubernetes | [Helm Charts](./kubernetes/helm) |
-| Cloud Service Providers | AWS | [Terraform deployment on 4th Gen Intel Xeon with Intel AMX using meta-llama/Meta-Llama-3-8B-Instruct ](https://github.com/intel/terraform-intel-aws-vm/tree/main/examples/gen-ai-xeon-opea-chatqna) |
-| | | [Terraform deployment on 4th Gen Intel Xeon with Intel AMX using TII Falcon2-11B](https://github.com/intel/terraform-intel-aws-vm/tree/main/examples/gen-ai-xeon-opea-chatqna-falcon11B) |
-| | GCP | [Terraform deployment on 5th Gen Intel Xeon with Intel AMX(support Confidential AI by using Intel® TDX](https://github.com/intel/terraform-intel-gcp-vm/tree/main/examples/gen-ai-xeon-opea-chatqna) |
-| | Azure | [Terraform deployment on 4th/5th Gen Intel Xeon with Intel AMX & Intel TDX](https://github.com/intel/terraform-intel-azure-linux-vm/tree/main/examples/azure-gen-ai-xeon-opea-chatqna-tdx) |
-| | Intel Tiber AI Cloud | Coming Soon |
-| | Any Xeon based Ubuntu system | [ChatQnA Ansible Module for Ubuntu 20.04](https://github.com/intel/optimized-cloud-recipes/tree/main/recipes/ai-opea-chatqna-xeon) .Use this if you are not using Terraform and have provisioned your system either manually or with another tool, including directly on bare metal. |
+| Category | Deployment Option | Description |
+| ------------------------------------------------------------------------------------------------------------------------------ | ---------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| On-premise Deployments | Docker Compose | [ChatQnA deployment on Xeon](./docker_compose/intel/cpu/xeon/README.md) |
+| | | [ChatQnA deployment on AI PC](./docker_compose/intel/cpu/aipc/README.md) |
+| | | [ChatQnA deployment on Gaudi](./docker_compose/intel/hpu/gaudi/README.md) |
+| | | [ChatQnA deployment on Nvidia GPU](./docker_compose/nvidia/gpu/README.md) |
+| | | [ChatQnA deployment on AMD ROCm](./docker_compose/amd/gpu/rocm/README.md) |
+| Cloud Platform Deployments on AWS, GCP, Azure, IBM Cloud, Oracle Cloud, [Intel® Tiber™ AI Cloud](https://ai.cloud.intel.com/) | Docker Compose | [Getting Started Guide: Deploy the ChatQnA application across multiple cloud platforms](https://github.com/opea-project/docs/tree/main/getting-started/README.md) |
+| | Kubernetes | [Helm Charts](./kubernetes/helm/README.md) |
+| Automated Terraform Deployment on Cloud Service Providers | AWS | [Terraform deployment on 4th Gen Intel Xeon with Intel AMX using meta-llama/Meta-Llama-3-8B-Instruct](https://github.com/intel/terraform-intel-aws-vm/tree/main/examples/gen-ai-xeon-opea-chatqna) |
+| | | [Terraform deployment on 4th Gen Intel Xeon with Intel AMX using TII Falcon2-11B](https://github.com/intel/terraform-intel-aws-vm/tree/main/examples/gen-ai-xeon-opea-chatqna-falcon11B) |
+| | GCP | [Terraform deployment on 5th Gen Intel Xeon with Intel AMX (supports Confidential AI using Intel® TDX)](https://github.com/intel/terraform-intel-gcp-vm/tree/main/examples/gen-ai-xeon-opea-chatqna) |
+| | Azure | [Terraform deployment on 4th/5th Gen Intel Xeon with Intel AMX & Intel TDX](https://github.com/intel/terraform-intel-azure-linux-vm/tree/main/examples/azure-gen-ai-xeon-opea-chatqna-tdx) |
+| | Intel Tiber AI Cloud | Coming Soon |
+| | Any Xeon based Ubuntu system | [ChatQnA Ansible Module for Ubuntu 20.04](https://github.com/intel/optimized-cloud-recipes/tree/main/recipes/ai-opea-chatqna-xeon). Use this if you are not using Terraform and have provisioned your system either manually or with another tool, including directly on bare metal. |

 ## Monitor and Tracing

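For orientation, the Docker Compose entries the corrected links point to are per-hardware quick-start guides. A typical bring-up for the Xeon option looks roughly like the sketch below; the `set_env.sh` helper and the `HUGGINGFACEHUB_API_TOKEN`/`host_ip` variable names are assumptions based on common OPEA conventions rather than taken from this patch, so treat the linked README as the authoritative reference.

```bash
# Minimal sketch of the "ChatQnA deployment on Xeon" Docker Compose option.
# File and variable names below are assumptions; the linked per-hardware
# README is the source of truth.
git clone https://github.com/opea-project/GenAIExamples.git
cd GenAIExamples/ChatQnA/docker_compose/intel/cpu/xeon

# Model-access token and host IP typically expected by the compose services
# (names assumed; see the guide's set_env.sh for the exact list).
export HUGGINGFACEHUB_API_TOKEN="<your Hugging Face token>"
export host_ip=$(hostname -I | awk '{print $1}')
source set_env.sh

# Start the ChatQnA pipeline in detached mode and confirm the services are up.
docker compose up -d
docker compose ps
```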