Fix Xeon reference per its trademark (#803)

Signed-off-by: Malini Bhandaru <malini.bhandaru@intel.com>
Author: Malini Bhandaru
Date: 2024-09-12 21:08:55 -07:00
Committed by: GitHub
Parent: 558ea3bb7f
Commit: e1b8ce053b
8 changed files with 11 additions and 11 deletions


@@ -4,7 +4,7 @@ AudioQnA is an example that demonstrates the integration of Generative AI (GenAI
 ## Deploy AudioQnA Service
-The AudioQnA service can be deployed on either Intel Gaudi2 or Intel XEON Scalable Processor.
+The AudioQnA service can be deployed on either Intel Gaudi2 or Intel Xeon Scalable Processor.
 ### Deploy AudioQnA on Gaudi


@@ -95,7 +95,7 @@ flowchart LR
 ```
-This ChatQnA use case performs RAG using LangChain, Redis VectorDB and Text Generation Inference on Intel Gaudi2 or Intel XEON Scalable Processors. The Intel Gaudi2 accelerator supports both training and inference for deep learning models in particular for LLMs. Visit [Habana AI products](https://habana.ai/products) for more details.
+This ChatQnA use case performs RAG using LangChain, Redis VectorDB and Text Generation Inference on Intel Gaudi2 or Intel Xeon Scalable Processors. The Intel Gaudi2 accelerator supports both training and inference for deep learning models in particular for LLMs. Visit [Habana AI products](https://habana.ai/products) for more details.
 In the below, we provide a table that describes for each microservice component in the ChatQnA architecture, the default configuration of the open source project, hardware, port, and endpoint.
@@ -114,7 +114,7 @@ In the below, we provide a table that describes for each microservice component
 ## Deploy ChatQnA Service
-The ChatQnA service can be effortlessly deployed on either Intel Gaudi2 or Intel XEON Scalable Processors.
+The ChatQnA service can be effortlessly deployed on either Intel Gaudi2 or Intel Xeon Scalable Processors.
 Two types of ChatQnA pipeline are supported now: `ChatQnA with/without Rerank`. And the `ChatQnA without Rerank` pipeline (including Embedding, Retrieval, and LLM) is offered for Xeon customers who can not run rerank service on HPU yet require high performance and accuracy.
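For context, a minimal sketch of the LangChain + Redis + TGI RAG flow this hunk describes: retrieve matching chunks from a Redis vector store, then generate with a Text Generation Inference endpoint. The model name, URLs, and index name below are illustrative assumptions, not the project's actual defaults.

```python
# Hypothetical sketch of the documented LangChain + Redis + TGI RAG flow.
# Model name, URLs, and index name are assumptions for illustration only.
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.llms import HuggingFaceTextGenInference
from langchain_community.vectorstores import Redis

embeddings = HuggingFaceEmbeddings(model_name="BAAI/bge-base-en-v1.5")

# Connect to a Redis index that already holds ingested document chunks.
vectorstore = Redis(
    redis_url="redis://localhost:6379",
    index_name="rag-redis",
    embedding=embeddings,
)

# Text Generation Inference server running on Gaudi2 or Xeon.
llm = HuggingFaceTextGenInference(
    inference_server_url="http://localhost:8080",
    max_new_tokens=256,
)

query = "What hardware does ChatQnA support?"
docs = vectorstore.similarity_search(query, k=3)
context = "\n".join(doc.page_content for doc in docs)
print(llm.invoke(f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"))
```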


@@ -10,7 +10,7 @@ The architecture for document summarization will be illustrated/described below:
 ## Deploy Document Summarization Service
-The Document Summarization service can be effortlessly deployed on either Intel Gaudi2 or Intel XEON Scalable Processors.
+The Document Summarization service can be effortlessly deployed on either Intel Gaudi2 or Intel Xeon Scalable Processors.
 Based on whether you want to use Docker or Kubernetes, follow the instructions below.
 Currently we support two ways of deploying Document Summarization services with docker compose:


@@ -6,7 +6,7 @@ Our FAQ Generation Application leverages the power of large language models (LLM
 ## Deploy FAQ Generation Service
-The FAQ Generation service can be deployed on either Intel Gaudi2 or Intel XEON Scalable Processors.
+The FAQ Generation service can be deployed on either Intel Gaudi2 or Intel Xeon Scalable Processors.
 ### Deploy FAQ Generation on Gaudi


@@ -22,7 +22,7 @@ The workflow falls into the following architecture:
 ## Deploy SearchQnA Service
-The SearchQnA service can be effortlessly deployed on either Intel Gaudi2 or Intel XEON Scalable Processors.
+The SearchQnA service can be effortlessly deployed on either Intel Gaudi2 or Intel Xeon Scalable Processors.
 Currently we support two ways of deploying SearchQnA services with docker compose:


@@ -6,11 +6,11 @@ Translation architecture shows below:
 ![architecture](./assets/img/translation_architecture.png)
-This Translation use case performs Language Translation Inference on Intel Gaudi2 or Intel XEON Scalable Processors. The Intel Gaudi2 accelerator supports both training and inference for deep learning models in particular for LLMs. Visit [Habana AI products](https://habana.ai/products) for more details.
+This Translation use case performs Language Translation Inference on Intel Gaudi2 or Intel Xeon Scalable Processors. The Intel Gaudi2 accelerator supports both training and inference for deep learning models in particular for LLMs. Visit [Habana AI products](https://habana.ai/products) for more details.
 ## Deploy Translation Service
-The Translation service can be effortlessly deployed on either Intel Gaudi2 or Intel XEON Scalable Processors.
+The Translation service can be effortlessly deployed on either Intel Gaudi2 or Intel Xeon Scalable Processors.
 ### Deploy Translation on Gaudi


@@ -85,14 +85,14 @@ flowchart LR
     DP <-.->|d|VDB
 ```
-- This project implements a Retrieval-Augmented Generation (RAG) workflow using LangChain, Intel VDMS VectorDB, and Text Generation Inference, optimized for Intel XEON Scalable Processors.
+- This project implements a Retrieval-Augmented Generation (RAG) workflow using LangChain, Intel VDMS VectorDB, and Text Generation Inference, optimized for Intel Xeon Scalable Processors.
 - Video Processing: Videos are converted into feature vectors using mean aggregation and stored in the VDMS vector store.
 - Query Handling: When a user submits a query, the system performs a similarity search in the vector store to retrieve the best-matching videos.
 - Contextual Inference: The retrieved videos are then sent to the Large Vision Model (LVM) for inference, providing supplemental context for the query.
 ## Deploy VideoQnA Service
-The VideoQnA service can be effortlessly deployed on Intel XEON Scalable Processors.
+The VideoQnA service can be effortlessly deployed on Intel Xeon Scalable Processors.
 ### Required Models
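The mean-aggregation step in the bullets above reduces per-frame features to a single vector per video before indexing in VDMS. A hypothetical sketch, assuming 32 sampled frames and 512-dimensional frame embeddings (both illustrative, not the project's documented values):

```python
# Hypothetical mean aggregation of per-frame features into one video vector.
# Frame count and embedding dimension are illustrative assumptions.
import numpy as np

def aggregate_video_features(frame_embeddings: np.ndarray) -> np.ndarray:
    """Collapse a (num_frames, dim) array into a single (dim,) vector."""
    return frame_embeddings.mean(axis=0)

frames = np.random.rand(32, 512).astype(np.float32)  # 32 frames, 512-dim each
video_vector = aggregate_video_features(frames)
print(video_vector.shape)  # (512,) -- one entry per video in the VDMS store
# At query time, a query embedding is compared against these stored vectors
# with a similarity search to retrieve the best-matching videos.
```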


@@ -30,7 +30,7 @@ You can choose other llava-next models, such as `llava-hf/llava-v1.6-vicuna-13b-
 ## Deploy VisualQnA Service
-The VisualQnA service can be effortlessly deployed on either Intel Gaudi2 or Intel XEON Scalable Processors.
+The VisualQnA service can be effortlessly deployed on either Intel Gaudi2 or Intel Xeon Scalable Processors.
 Currently we support deploying VisualQnA services with docker compose.