From 3e796ba73d88184624c6316313304bcd05c84978 Mon Sep 17 00:00:00 2001 From: David Kinder Date: Tue, 24 Sep 2024 09:40:42 -0400 Subject: [PATCH] doc: fix missing references to README.md (#860) Signed-off-by: David B. Kinder --- AgentQnA/README.md | 2 +- AudioQnA/benchmark/accuracy/README.md | 2 +- AudioQnA/kubernetes/intel/README_gmc.md | 2 +- ChatQnA/README.md | 4 ++-- ChatQnA/benchmark/performance/README.md | 2 +- ChatQnA/kubernetes/intel/README_gmc.md | 4 ++-- CodeGen/README.md | 4 ++-- CodeGen/benchmark/accuracy/README.md | 2 +- CodeGen/kubernetes/intel/README_gmc.md | 2 +- CodeTrans/README.md | 4 ++-- CodeTrans/kubernetes/intel/README_gmc.md | 2 +- DocIndexRetriever/README.md | 4 ++-- DocSum/README.md | 6 +++--- DocSum/kubernetes/intel/README_gmc.md | 2 +- FaqGen/benchmark/accuracy/README.md | 2 +- LEGAL_INFORMATION.md | 4 ++-- README.md | 6 +++--- SearchQnA/README.md | 4 ++-- SearchQnA/kubernetes/intel/README_gmc.md | 2 +- Translation/kubernetes/intel/README_gmc.md | 2 +- VisualQnA/kubernetes/intel/README_gmc.md | 2 +- 21 files changed, 32 insertions(+), 32 deletions(-) diff --git a/AgentQnA/README.md b/AgentQnA/README.md index fcb96c2a5..596a0fdb2 100644 --- a/AgentQnA/README.md +++ b/AgentQnA/README.md @@ -103,4 +103,4 @@ curl http://${ip_address}:9090/v1/chat/completions -X POST -H "Content-Type: app ## How to register your own tools with agent -You can take a look at the tools yaml and python files in this example. For more details, please refer to the "Provide your own tools" section in the instructions [here](https://github.com/opea-project/GenAIComps/tree/main/comps/agent/langchain#5-customize-agent-strategy). +You can take a look at the tools yaml and python files in this example. For more details, please refer to the "Provide your own tools" section in the instructions [here](https://github.com/opea-project/GenAIComps/tree/main/comps/agent/langchain/README.md#5-customize-agent-strategy). diff --git a/AudioQnA/benchmark/accuracy/README.md b/AudioQnA/benchmark/accuracy/README.md index 67119121a..557dd0562 100644 --- a/AudioQnA/benchmark/accuracy/README.md +++ b/AudioQnA/benchmark/accuracy/README.md @@ -14,7 +14,7 @@ We evaluate the WER (Word Error Rate) metric of the ASR microservice. ### Launch ASR microservice -Launch the ASR microserice with the following commands. For more details please refer to [doc](https://github.com/opea-project/GenAIComps/tree/main/comps/asr). +Launch the ASR microserice with the following commands. For more details please refer to [doc](https://github.com/opea-project/GenAIComps/tree/main/comps/asr/whisper/README.md). ```bash git clone https://github.com/opea-project/GenAIComps diff --git a/AudioQnA/kubernetes/intel/README_gmc.md b/AudioQnA/kubernetes/intel/README_gmc.md index 432282259..30d879e19 100644 --- a/AudioQnA/kubernetes/intel/README_gmc.md +++ b/AudioQnA/kubernetes/intel/README_gmc.md @@ -4,7 +4,7 @@ This document outlines the deployment process for a AudioQnA application utilizi The AudioQnA Service leverages a Kubernetes operator called genai-microservices-connector(GMC). GMC supports connecting microservices to create pipelines based on the specification in the pipeline yaml file in addition to allowing the user to dynamically control which model is used in a service such as an LLM or embedder. The underlying pipeline language also supports using external services that may be running in public or private cloud elsewhere. 
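A rough way to confirm that the operator described above is actually available in a cluster, before applying any pipeline Custom Resource, is a pair of kubectl checks; the grep patterns below are approximations rather than names taken from this repository:

```bash
# Illustrative check only: look for GMC's custom resource definitions and controller pods.
# Exact resource and namespace names come from the GMC install guide linked below.
kubectl get crd | grep -i gmc || echo "No GMC CRD found - install GMC first"
kubectl get pods --all-namespaces | grep -i gmc
```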
-Install GMC in your Kubernetes cluster, if you have not already done so, by following the steps in Section "Getting Started" at [GMC Install](https://github.com/opea-project/GenAIInfra/tree/main/microservices-connector). Soon as we publish images to Docker Hub, at which point no builds will be required, simplifying install. +Install GMC in your Kubernetes cluster, if you have not already done so, by following the steps in Section "Getting Started" at [GMC Install](https://github.com/opea-project/GenAIInfra/tree/main/microservices-connector/README.md). Soon as we publish images to Docker Hub, at which point no builds will be required, simplifying install. The AudioQnA application is defined as a Custom Resource (CR) file that the above GMC operator acts upon. It first checks if the microservices listed in the CR yaml file are running, if not starts them and then proceeds to connect them. When the AudioQnA pipeline is ready, the service endpoint details are returned, letting you use the application. Should you use "kubectl get pods" commands you will see all the component microservices, in particular `asr`, `tts`, and `llm`. diff --git a/ChatQnA/README.md b/ChatQnA/README.md index 875f44f5d..06c3452c1 100644 --- a/ChatQnA/README.md +++ b/ChatQnA/README.md @@ -240,7 +240,7 @@ Refer to the [Kubernetes Guide](./kubernetes/intel/README.md) for instructions o Install Helm (version >= 3.15) first. Refer to the [Helm Installation Guide](https://helm.sh/docs/intro/install/) for more information. -Refer to the [ChatQnA helm chart](https://github.com/opea-project/GenAIInfra/tree/main/helm-charts/chatqna) for instructions on deploying ChatQnA into Kubernetes on Xeon & Gaudi. +Refer to the [ChatQnA helm chart](https://github.com/opea-project/GenAIInfra/tree/main/helm-charts/chatqna/README.md) for instructions on deploying ChatQnA into Kubernetes on Xeon & Gaudi. ### Deploy ChatQnA on AI PC @@ -306,7 +306,7 @@ Two ways of consuming ChatQnA Service: ## Troubleshooting -1. If you get errors like "Access Denied", [validate micro service](https://github.com/opea-project/GenAIExamples/tree/main/ChatQnA/docker_compose/intel/cpu/xeon#validate-microservices) first. A simple example: +1. If you get errors like "Access Denied", [validate micro service](https://github.com/opea-project/GenAIExamples/tree/main/ChatQnA/docker_compose/intel/cpu/xeon/README.md#validate-microservices) first. A simple example: ```bash http_proxy="" curl ${host_ip}:6006/embed -X POST -d '{"inputs":"What is Deep Learning?"}' -H 'Content-Type: application/json' diff --git a/ChatQnA/benchmark/performance/README.md b/ChatQnA/benchmark/performance/README.md index 9c5d75af0..f5a9e062f 100644 --- a/ChatQnA/benchmark/performance/README.md +++ b/ChatQnA/benchmark/performance/README.md @@ -90,7 +90,7 @@ find . -name '*.yaml' -type f -exec sed -i "s#\$(RERANK_MODEL_ID)#${RERANK_MODEL ### Benchmark tool preparation -The test uses the [benchmark tool](https://github.com/opea-project/GenAIEval/tree/main/evals/benchmark) to do performance test. We need to set up benchmark tool at the master node of Kubernetes which is k8s-master. +The test uses the [benchmark tool](https://github.com/opea-project/GenAIEval/tree/main/evals/benchmark/README.md) to do performance test. We need to set up benchmark tool at the master node of Kubernetes which is k8s-master. 
```bash # on k8s-master node diff --git a/ChatQnA/kubernetes/intel/README_gmc.md b/ChatQnA/kubernetes/intel/README_gmc.md index 99799391b..08dc38516 100644 --- a/ChatQnA/kubernetes/intel/README_gmc.md +++ b/ChatQnA/kubernetes/intel/README_gmc.md @@ -2,9 +2,9 @@ This document outlines the deployment process for a ChatQnA application utilizing the [GenAIComps](https://github.com/opea-project/GenAIComps.git) microservice pipeline components on Intel Xeon server and Gaudi machines. -The ChatQnA Service leverages a Kubernetes operator called genai-microservices-connector(GMC). GMC supports connecting microservices to create pipelines based on the specification in the pipeline yaml file in addition to allowing the user to dynamically control which model is used in a service such as an LLM or embedder. The underlying pipeline language also supports using external services that may be running in public or private cloud elsewhere. +The ChatQnA Service leverages a Kubernetes operator called genai-microservices-connector (GMC). GMC supports connecting microservices to create pipelines based on the specification in the pipeline yaml file in addition to allowing the user to dynamically control which model is used in a service such as an LLM or embedder. The underlying pipeline language also supports using external services that may be running in public or private cloud elsewhere. -Install GMC in your Kubernetes cluster, if you have not already done so, by following the steps in Section "Getting Started" at [GMC Install](https://github.com/opea-project/GenAIInfra/tree/main/microservices-connector). Soon as we publish images to Docker Hub, at which point no builds will be required, simplifying install. +Install GMC in your Kubernetes cluster, if you have not already done so, by following the steps in Section "Getting Started" at [GMC Install](https://github.com/opea-project/GenAIInfra/tree/main/microservices-connector/README.md). Soon as we publish images to Docker Hub, at which point no builds will be required, simplifying install. The ChatQnA application is defined as a Custom Resource (CR) file that the above GMC operator acts upon. It first checks if the microservices listed in the CR yaml file are running, if not starts them and then proceeds to connect them. When the ChatQnA RAG pipeline is ready, the service endpoint details are returned, letting you use the application. Should you use "kubectl get pods" commands you will see all the component microservices, in particular `embedding`, `retriever`, `rerank`, and `llm`. diff --git a/CodeGen/README.md b/CodeGen/README.md index fcf0f3e33..c415d4d2c 100644 --- a/CodeGen/README.md +++ b/CodeGen/README.md @@ -106,7 +106,7 @@ Refer to the [Kubernetes Guide](./kubernetes/intel/README.md) for instructions o Install Helm (version >= 3.15) first. Refer to the [Helm Installation Guide](https://helm.sh/docs/intro/install/) for more information. -Refer to the [CodeGen helm chart](https://github.com/opea-project/GenAIInfra/tree/main/helm-charts/codegen) for instructions on deploying CodeGen into Kubernetes on Xeon & Gaudi. +Refer to the [CodeGen helm chart](https://github.com/opea-project/GenAIInfra/tree/main/helm-charts/codegen/README.md) for instructions on deploying CodeGen into Kubernetes on Xeon & Gaudi. ## Consume CodeGen Service @@ -128,7 +128,7 @@ Two ways of consuming CodeGen Service: ## Troubleshooting -1. 
If you get errors like "Access Denied", [validate micro service](https://github.com/opea-project/GenAIExamples/tree/main/CodeGen/docker_compose/intel/cpu/xeon#validate-microservices) first. A simple example: +1. If you get errors like "Access Denied", [validate micro service](https://github.com/opea-project/GenAIExamples/tree/main/CodeGen/docker_compose/intel/cpu/xeon/README.md#validate-microservices) first. A simple example: ```bash http_proxy="" diff --git a/CodeGen/benchmark/accuracy/README.md b/CodeGen/benchmark/accuracy/README.md index 16d21e1a3..4e52a93e0 100644 --- a/CodeGen/benchmark/accuracy/README.md +++ b/CodeGen/benchmark/accuracy/README.md @@ -8,7 +8,7 @@ We evaluate accuracy by [bigcode-evaluation-harness](https://github.com/bigcode- ### Launch CodeGen microservice -Please refer to [CodeGen Examples](https://github.com/opea-project/GenAIExamples/tree/main/CodeGen), follow the guide to deploy CodeGen megeservice. +Please refer to [CodeGen Examples](https://github.com/opea-project/GenAIExamples/tree/main/CodeGen/README.md), follow the guide to deploy CodeGen megeservice. Use `curl` command to test codegen service and ensure that it has started properly diff --git a/CodeGen/kubernetes/intel/README_gmc.md b/CodeGen/kubernetes/intel/README_gmc.md index 893e4fccc..4dd04174b 100644 --- a/CodeGen/kubernetes/intel/README_gmc.md +++ b/CodeGen/kubernetes/intel/README_gmc.md @@ -2,7 +2,7 @@ This document outlines the deployment process for a Code Generation (CodeGen) application that utilizes the [GenAIComps](https://github.com/opea-project/GenAIComps.git) microservice components on Intel Xeon servers and Gaudi machines. -Install GMC in your Kubernetes cluster, if you have not already done so, by following the steps in Section "Getting Started" at [GMC Install](https://github.com/opea-project/GenAIInfra/tree/main/microservices-connector#readme). We will soon publish images to Docker Hub, at which point no builds will be required, further simplifying install. +Install GMC in your Kubernetes cluster, if you have not already done so, by following the steps in Section "Getting Started" at [GMC Install](https://github.com/opea-project/GenAIInfra/tree/main/microservices-connector/README.md). We will soon publish images to Docker Hub, at which point no builds will be required, further simplifying install. If you have only Intel Xeon machines you could use the codegen_xeon.yaml file or if you have a Gaudi cluster you could use codegen_gaudi.yaml In the below example we illustrate on Xeon. diff --git a/CodeTrans/README.md b/CodeTrans/README.md index a1b95b154..33d27439c 100644 --- a/CodeTrans/README.md +++ b/CodeTrans/README.md @@ -99,7 +99,7 @@ Refer to the [Code Translation Kubernetes Guide](./kubernetes/intel/README.md) Install Helm (version >= 3.15) first. Refer to the [Helm Installation Guide](https://helm.sh/docs/intro/install/) for more information. -Refer to the [CodeTrans helm chart](https://github.com/opea-project/GenAIInfra/tree/main/helm-charts/codetrans) for instructions on deploying CodeTrans into Kubernetes on Xeon & Gaudi. +Refer to the [CodeTrans helm chart](https://github.com/opea-project/GenAIInfra/tree/main/helm-charts/codetrans/README.md) for instructions on deploying CodeTrans into Kubernetes on Xeon & Gaudi. ## Consume Code Translation Service @@ -121,7 +121,7 @@ By default, the UI runs on port 5173 internally. ## Troubleshooting -1. 
If you get errors like "Access Denied", [validate micro service](https://github.com/opea-project/GenAIExamples/tree/main/CodeTrans/docker_compose/intel/cpu/xeon#validate-microservices) first. A simple example: +1. If you get errors like "Access Denied", [validate micro service](https://github.com/opea-project/GenAIExamples/tree/main/CodeTrans/docker_compose/intel/cpu/xeon/README.md#validate-microservices) first. A simple example: ```bash http_proxy="" diff --git a/CodeTrans/kubernetes/intel/README_gmc.md b/CodeTrans/kubernetes/intel/README_gmc.md index ed2b146ac..1b932f4ea 100644 --- a/CodeTrans/kubernetes/intel/README_gmc.md +++ b/CodeTrans/kubernetes/intel/README_gmc.md @@ -2,7 +2,7 @@ This document outlines the deployment process for a Code Translation (CodeTran) application that utilizes the [GenAIComps](https://github.com/opea-project/GenAIComps.git) microservice components on Intel Xeon servers and Gaudi machines. -Please install GMC in your Kubernetes cluster, if you have not already done so, by following the steps in Section "Getting Started" at [GMC Install](https://github.com/opea-project/GenAIInfra/tree/main/microservices-connector#readme). We will soon publish images to Docker Hub, at which point no builds will be required, further simplifying install. +Please install GMC in your Kubernetes cluster, if you have not already done so, by following the steps in Section "Getting Started" at [GMC Install](https://github.com/opea-project/GenAIInfra/tree/main/microservices-connector/README.md). We will soon publish images to Docker Hub, at which point no builds will be required, further simplifying install. If you have only Intel Xeon machines you could use the codetrans_xeon.yaml file or if you have a Gaudi cluster you could use codetrans_gaudi.yaml In the below example we illustrate on Xeon. diff --git a/DocIndexRetriever/README.md b/DocIndexRetriever/README.md index d14b52de5..bfd09a830 100644 --- a/DocIndexRetriever/README.md +++ b/DocIndexRetriever/README.md @@ -4,5 +4,5 @@ DocRetriever are the most widely adopted use case for leveraging the different m ## We provided DocRetriever with different deployment infra -- [docker xeon version](docker_compose/intel/cpu/xeon/) => minimum endpoints, easy to setup -- [docker gaudi version](docker_compose/intel/hpu/gaudi/) => with extra tei_gaudi endpoint, faster +- [docker xeon version](docker_compose/intel/cpu/xeon/README.md) => minimum endpoints, easy to setup +- [docker gaudi version](docker_compose/intel/hpu/gaudi/README.md) => with extra tei_gaudi endpoint, faster diff --git a/DocSum/README.md b/DocSum/README.md index ca1ebfeba..d87dd55f6 100644 --- a/DocSum/README.md +++ b/DocSum/README.md @@ -21,7 +21,7 @@ Currently we support two ways of deploying Document Summarization services with docker pull opea/docsum:latest ``` -2. Start services using the docker images `built from source`: [Guide](./docker_compose) +2. Start services using the docker images `built from source`: [Guide](https://github.com/opea-project/GenAIExamples/tree/main/DocSum/docker_compose) ### Required Models @@ -98,7 +98,7 @@ Refer to [Kubernetes deployment](./kubernetes/intel/README.md) Install Helm (version >= 3.15) first. Refer to the [Helm Installation Guide](https://helm.sh/docs/intro/install/) for more information. -Refer to the [DocSum helm chart](https://github.com/opea-project/GenAIInfra/tree/main/helm-charts/docsum) for instructions on deploying DocSum into Kubernetes on Xeon & Gaudi. 
+Refer to the [DocSum helm chart](https://github.com/opea-project/GenAIInfra/tree/main/helm-charts/docsum/README.md) for instructions on deploying DocSum into Kubernetes on Xeon & Gaudi. ### Workflow of the deployed Document Summarization Service @@ -143,7 +143,7 @@ Two ways of consuming Document Summarization Service: ## Troubleshooting -1. If you get errors like "Access Denied", [validate micro service](https://github.com/opea-project/GenAIExamples/tree/main/DocSum/docker_compose/intel/cpu/xeon#validate-microservices) first. A simple example: +1. If you get errors like "Access Denied", [validate micro service](https://github.com/opea-project/GenAIExamples/tree/main/DocSum/docker_compose/intel/cpu/xeon/README.md#validate-microservices) first. A simple example: ```bash http_proxy="" diff --git a/DocSum/kubernetes/intel/README_gmc.md b/DocSum/kubernetes/intel/README_gmc.md index bc55df156..b33229211 100644 --- a/DocSum/kubernetes/intel/README_gmc.md +++ b/DocSum/kubernetes/intel/README_gmc.md @@ -3,7 +3,7 @@ This document outlines the deployment process for a Document Summary (DocSum) application that utilizes the [GenAIComps](https://github.com/opea-project/GenAIComps.git) microservice components on Intel Xeon servers and Gaudi machines. The DocSum Service leverages a Kubernetes operator called genai-microservices-connector(GMC). GMC supports connecting microservices to create pipelines based on the specification in the pipeline yaml file, in addition it allows the user to dynamically control which model is used in a service such as an LLM or embedder. The underlying pipeline language also supports using external services that may be running in public or private clouds elsewhere. -Install GMC in your Kubernetes cluster, if you have not already done so, by following the steps in Section "Getting Started" at [GMC Install](https://github.com/opea-project/GenAIInfra/tree/main/microservices-connector#readme). We will soon publish images to Docker Hub, at which point no builds will be required, further simplifying install. +Install GMC in your Kubernetes cluster, if you have not already done so, by following the steps in Section "Getting Started" at [GMC Install](https://github.com/opea-project/GenAIInfra/tree/main/microservices-connector/README.md). We will soon publish images to Docker Hub, at which point no builds will be required, further simplifying install. The DocSum application is defined as a Custom Resource (CR) file that the above GMC operator acts upon. It first checks if the microservices listed in the CR yaml file are running, if not it starts them and then proceeds to connect them. When the DocSum RAG pipeline is ready, the service endpoint details are returned, letting you use the application. Should you use "kubectl get pods" commands you will see all the component microservices, in particular embedding, retriever, rerank, and llm. diff --git a/FaqGen/benchmark/accuracy/README.md b/FaqGen/benchmark/accuracy/README.md index 1c180c395..1ff2ce1f1 100644 --- a/FaqGen/benchmark/accuracy/README.md +++ b/FaqGen/benchmark/accuracy/README.md @@ -16,7 +16,7 @@ python get_context.py ### Launch FaQGen microservice -Please refer to [FaQGen microservice](https://github.com/opea-project/GenAIComps/tree/main/comps/llms/faq-generation/tgi), set up an microservice endpoint. +Please refer to [FaQGen microservice](https://github.com/opea-project/GenAIComps/tree/main/comps/llms/faq-generation/tgi/langchain/README.md), set up an microservice endpoint. 
``` export FAQ_ENDPOINT = "http://${your_ip}:9000/v1/faqgen" diff --git a/LEGAL_INFORMATION.md b/LEGAL_INFORMATION.md index fcd51a7a5..f0cfa8c5a 100644 --- a/LEGAL_INFORMATION.md +++ b/LEGAL_INFORMATION.md @@ -9,9 +9,9 @@ Generative AI Examples is licensed under [Apache License Version 2.0](http://www This software includes components that have separate copyright notices and licensing terms. Your use of the source code for these components is subject to the terms and conditions of the following licenses. -- [Third Party Programs](/third-party-programs.txt) +- [Third Party Programs](third-party-programs.txt) -See the accompanying [license](/LICENSE) file for full license text and copyright notices. +See the accompanying [license](LICENSE) file for full license text and copyright notices. ## Citation diff --git a/README.md b/README.md index 40aa39e1a..87581d3dd 100644 --- a/README.md +++ b/README.md @@ -30,8 +30,8 @@ Deployment are based on released docker images by default, check [docker image l #### Prerequisite - For Docker Compose based deployment, you should have docker compose installed. Refer to [docker compose install](https://docs.docker.com/compose/install/). -- For Kubernetes based deployment, we provide 3 ways from the easiest manifests to powerful [GMC](https://github.com/opea-project/GenAIInfra/tree/main/microservices-connector) based deployment. - - You should have a kubernetes cluster ready for use. If not, you can refer to [k8s install](https://github.com/opea-project/docs/tree/main/guide/installation/k8s_install) to deploy one. +- For Kubernetes based deployment, we provide 3 ways from the easiest manifests to powerful [GMC](https://github.com/opea-project/GenAIInfra/tree/main/microservices-connector/README.md) based deployment. + - You should have a kubernetes cluster ready for use. If not, you can refer to [k8s install](https://github.com/opea-project/docs/tree/main/guide/installation/k8s_install/README.md) to deploy one. - (Optional) You should have GMC installed to your kubernetes cluster if you want to try with GMC. Refer to [GMC install](https://github.com/opea-project/docs/blob/main/guide/installation/gmc_install/gmc_install.md) for more information. - (Optional) You should have Helm (version >= 3.15) installed if you want to deploy with Helm Charts. Refer to the [Helm Installation Guide](https://helm.sh/docs/intro/install/) for more information. @@ -68,4 +68,4 @@ Thank you for being a part of this journey. We can't wait to see what we can ach - [Code of Conduct](https://github.com/opea-project/docs/tree/main/community/CODE_OF_CONDUCT.md) - [Security Policy](https://github.com/opea-project/docs/tree/main/community/SECURITY.md) -- [Legal Information](/LEGAL_INFORMATION.md) +- [Legal Information](LEGAL_INFORMATION.md) diff --git a/SearchQnA/README.md b/SearchQnA/README.md index 988283be6..ec8362703 100644 --- a/SearchQnA/README.md +++ b/SearchQnA/README.md @@ -32,7 +32,7 @@ Currently we support two ways of deploying SearchQnA services with docker compos docker pull opea/searchqna:latest ``` -2. Start services using the docker images `built from source`: [Guide](./docker_compose) +2. Start services using the docker images `built from source`: [Guide](https://github.com/opea-project/GenAIExamples/tree/main/SearchQnA/docker_compose/) ### Setup Environment Variable @@ -110,7 +110,7 @@ Two ways of consuming SearchQnA Service: ## Troubleshooting -1. 
If you get errors like "Access Denied", [validate micro service](https://github.com/opea-project/GenAIExamples/tree/main/ChatQnA/docker_compose/intel/cpu/xeon#validate-microservices) first. A simple example: +1. If you get errors like "Access Denied", [validate micro service](https://github.com/opea-project/GenAIExamples/tree/main/ChatQnA/docker_compose/intel/cpu/xeon/README.md#validate-microservices) first. A simple example: ```bash http_proxy="" diff --git a/SearchQnA/kubernetes/intel/README_gmc.md b/SearchQnA/kubernetes/intel/README_gmc.md index ec721b8ab..1307b6bbd 100644 --- a/SearchQnA/kubernetes/intel/README_gmc.md +++ b/SearchQnA/kubernetes/intel/README_gmc.md @@ -2,7 +2,7 @@ This document outlines the deployment process for a Code Generation (SearchQnA) application that utilizes the [GenAIComps](https://github.com/opea-project/GenAIComps.git) microservice components on Intel Xeon servers and Gaudi machines. -Install GMC in your Kubernetes cluster, if you have not already done so, by following the steps in Section "Getting Started" at [GMC Install](https://github.com/opea-project/GenAIInfra/tree/main/microservices-connector#readme). We will soon publish images to Docker Hub, at which point no builds will be required, further simplifying install. +Install GMC in your Kubernetes cluster, if you have not already done so, by following the steps in Section "Getting Started" at [GMC Install](https://github.com/opea-project/GenAIInfra/tree/main/microservices-connector/README.md). We will soon publish images to Docker Hub, at which point no builds will be required, further simplifying install. If you have only Intel Xeon machines you could use the searchQnA_xeon.yaml file or if you have a Gaudi cluster you could use searchQnA_gaudi.yaml In the below example we illustrate on Xeon. diff --git a/Translation/kubernetes/intel/README_gmc.md b/Translation/kubernetes/intel/README_gmc.md index 47f5229b8..911aaa0ea 100644 --- a/Translation/kubernetes/intel/README_gmc.md +++ b/Translation/kubernetes/intel/README_gmc.md @@ -2,7 +2,7 @@ This document outlines the deployment process for a Code Generation (Translation) application that utilizes the [GenAIComps](https://github.com/opea-project/GenAIComps.git) microservice components on Intel Xeon servers and Gaudi machines. -Please install GMC in your Kubernetes cluster, if you have not already done so, by following the steps in Section "Getting Started" at [GMC Install](https://github.com/opea-project/GenAIInfra/tree/main/microservices-connector#readme). We will soon publish images to Docker Hub, at which point no builds will be required, further simplifying install. +Please install GMC in your Kubernetes cluster, if you have not already done so, by following the steps in Section "Getting Started" at [GMC Install](https://github.com/opea-project/GenAIInfra/tree/main/microservices-connector/README.md). We will soon publish images to Docker Hub, at which point no builds will be required, further simplifying install. If you have only Intel Xeon machines you could use the translation_xeon.yaml file or if you have a Gaudi cluster you could use translation_gaudi.yaml In the below example we illustrate on Xeon. 
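For the Xeon case referred to above, a minimal sketch of exercising the named CR file could look like the following; the namespace is illustrative, and the linked READMEs remain the authoritative instructions:

```bash
# Sketch only: translation_xeon.yaml is the CR file named above; the namespace is a placeholder.
kubectl create namespace translation
kubectl apply -f translation_xeon.yaml -n translation
kubectl get pods -n translation   # wait for the component microservices to reach Running
kubectl get svc -n translation    # the exposed service endpoints appear here once ready
```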
diff --git a/VisualQnA/kubernetes/intel/README_gmc.md b/VisualQnA/kubernetes/intel/README_gmc.md index df19d9bac..75669d4e3 100644 --- a/VisualQnA/kubernetes/intel/README_gmc.md +++ b/VisualQnA/kubernetes/intel/README_gmc.md @@ -2,7 +2,7 @@ This document outlines the deployment process for a Visual Question Answering (VisualQnA) application that utilizes the [GenAIComps](https://github.com/opea-project/GenAIComps.git) microservice components on Intel Xeon servers and Gaudi machines. -Please install GMC in your Kubernetes cluster, if you have not already done so, by following the steps in Section "Getting Started" at [GMC Install](https://github.com/opea-project/GenAIInfra/tree/main/microservices-connector#readme). We will soon publish images to Docker Hub, at which point no builds will be required, further simplifying install. +Please install GMC in your Kubernetes cluster, if you have not already done so, by following the steps in Section "Getting Started" at [GMC Install](https://github.com/opea-project/GenAIInfra/tree/main/microservices-connector/README.md). We will soon publish images to Docker Hub, at which point no builds will be required, further simplifying install. If you have only Intel Xeon machines you could use the visualqna_xeon.yaml file or if you have a Gaudi cluster you could use visualqna_gaudi.yaml In the below example we illustrate on Xeon.
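Since most hunks in this patch repoint a bare directory or #readme link at an explicit README.md, a repo-wide grep is a quick way to audit for similar links that may remain; the pattern below only approximates the cases touched here:

```bash
# Rough sketch: print markdown links to opea-project "tree" URLs, then drop the ones that
# already end in an explicit .md file; whatever remains may still need a README.md reference.
grep -rno --include='*.md' -E '\(https://github\.com/opea-project/[^)]*/tree/[^)]*\)' . \
  | grep -vE '\.md\)$'
```

Any remaining hit that ends in a directory path or a #readme fragment is a candidate for the same treatment applied by this change.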