Update img storage location (#265)

Signed-off-by: Yue, Wenjiao <wenjiao.yue@intel.com>
Signed-off-by: Spycsh <sihan.chen@intel.com>
Signed-off-by: letonghan <letong.han@intel.com>
Signed-off-by: zehao-intel <zehao.huang@intel.com>
WenjiaoYue
2024-06-11 16:26:48 +08:00
committed by GitHub
parent 02c7baae2b
commit 4d36def840
57 changed files with 59 additions and 59 deletions

@@ -1,6 +1,6 @@
# AudioQnA
-![audioqna](https://i.imgur.com/2hit8HL.jpeg)
+![audioqna](./assets/img/audioqna.jpg)
In this example, we will show you how to build an Audio Question and Answering application (AudioQnA). AudioQnA serves as a talking bot, enabling LLMs to converse with users. It accepts a user's audio input, converts it to text, feeds the text to an LLM, gets the text answer, and converts it back to audio output.

@@ -2,8 +2,8 @@
### 📸 Project Screenshots
-![project-screenshot](https://imgur.com/qrt8Lce.png)
-![project-screenshot](https://imgur.com/L12DP8Y.png)
+![project-screenshot](../../assets/img/audio_ui.png)
+![project-screenshot](../../assets/img/audio_ui_record.png)
<h2>🧐 Features</h2>

@@ -6,7 +6,7 @@ RAG bridges the knowledge gap by dynamically fetching relevant information from
The ChatQnA architecture is shown below:
-![architecture](https://i.imgur.com/lLOnQio.png)
+![architecture](./assets/img/chatqna_architecture.png)
This ChatQnA use case performs RAG using LangChain, Redis VectorDB, and Text Generation Inference on Intel Gaudi2 or Intel Xeon Scalable processors. The Intel Gaudi2 accelerator supports both training and inference for deep learning models, in particular LLMs. Please visit [Habana AI products](https://habana.ai/products) for more details.

@@ -261,4 +261,4 @@ To access the frontend, open the following URL in your browser: http://{host_ip}
- "80:5173"
```
-![project-screenshot](https://i.imgur.com/26zMnEr.png)
+![project-screenshot](../../assets/img/chat_ui_init.png)

@@ -2,9 +2,9 @@
### 📸 Project Screenshots
-![project-screenshot](https://i.imgur.com/26zMnEr.png)
-![project-screenshot](https://i.imgur.com/fZbOiTk.png)
-![project-screenshot](https://i.imgur.com/FnY3MuU.png)
+![project-screenshot](../../../assets/img/chat_ui_init.png)
+![project-screenshot](../../../assets/img/chat_ui_response.png)
+![project-screenshot](../../../assets/img/chat_ui_upload.png)
<h2>🧐 Features</h2>

@@ -319,4 +319,4 @@ To access the frontend, open the following URL in your browser: http://{host_ip}
- "80:5173"
```
-![project-screenshot](https://i.imgur.com/26zMnEr.png)
+![project-screenshot](../../assets/img/chat_ui_init.png)

@@ -16,7 +16,7 @@ In this example, we present a Code Copilot application to showcase how code gene
The workflow falls into the following architecture:
-![architecture](https://i.imgur.com/G9ozwFX.png)
+![architecture](./assets/img/codegen_architecture.png)
# Deploy CodeGen Service

@@ -130,7 +130,7 @@ To access the frontend, open the following URL in your browser: `http://{host_ip
- "80:5173"
```
-![project-screenshot](https://imgur.com/d1SmaRb.png)
+![project-screenshot](../../assets/img/codeGen_ui_init.jpg)
## Install Copilot VSCode extension from Plugin Marketplace as the frontend
@@ -138,7 +138,7 @@ In addition to the Svelte UI, users can also install the Copilot VSCode extensio
Install `Neural Copilot` in VSCode as below.
-![Install-screenshot](https://i.imgur.com/cnHRAdD.png)
+![Install-screenshot](../../assets/img/codegen_copilot.png)
### How to Use
@@ -146,46 +146,46 @@ Install `Neural Copilot` in VSCode as below.
Please adjust the service URL in the extension settings based on the endpoint of the CodeGen backend service.
-![Setting-screenshot](https://i.imgur.com/4hjvKPu.png)
-![Setting-screenshot](https://i.imgur.com/AQZuzqd.png)
+![Setting-screenshot](../../assets/img/codegen_settings.png)
+![Setting-screenshot](../../assets/img/codegen_endpoint.png)
#### Customize
The Copilot lets users enter their own sensitive information and tokens in the user settings as needed. This customization improves the accuracy of the output to better meet individual requirements.
-![Customize](https://i.imgur.com/PkObak9.png)
+![Customize](../../assets/img/codegen_customize.png)
#### Code Suggestion
To trigger inline completion, type `# {your keyword}` (start with your programming language's comment keyword, such as `//` in C++ or `#` in Python). Make sure `Inline Suggest` is enabled in the VS Code settings.
For example:
-![code suggestion](https://i.imgur.com/sH5UoTO.png)
+![code suggestion](../../assets/img/codegen_suggestion.png)
To provide programmers with a smooth experience, the Copilot supports multiple ways to trigger inline code suggestions. If you are interested in the details, they are summarized as follows:
- Generate code from single-line comments: The simplest way introduced before.
- Generate code from consecutive single-line comments:
-![codegen from single-line comments](https://i.imgur.com/GZsQywX.png)
+![codegen from single-line comments](../../assets/img/codegen_single_line.png)
- Generate code from multi-line comments (this will not be triggered until there is at least one `space` outside the multi-line comment):
-![codegen from multi-line comments](https://i.imgur.com/PzhiWrG.png)
+![codegen from multi-line comments](../../assets/img/codegen_multi_line.png)
- Automatically complete multi-line comments:
-![auto complete](https://i.imgur.com/cJO3PQ0.jpg)
+![auto complete](../../assets/img/codegen_auto_complete.jpg)
### Chat with AI assistant
You can start a conversation with the AI programming assistant by clicking on the robot icon in the plugin bar on the left:
-![icon](https://i.imgur.com/f7rzfCQ.png)
+![icon](../../assets/img/codegen_icon.png)
Then you can see the conversation window on the left, where you can chat with the AI assistant:
-![dialog](https://i.imgur.com/aiYzU60.png)
+![dialog](../../assets/img/codegen_dialog.png)
There are 4 areas worth noting as shown in the screenshot above:
@@ -199,8 +199,8 @@ For example:
- Select code
-![select code](https://i.imgur.com/grvrtY6.png)
+![select code](../../assets/img/codegen_select_code.png)
- Ask question and get answer
-![qna](https://i.imgur.com/8Kdpld7.png)
+![qna](../../assets/img/codegen_qna.png)

@@ -2,7 +2,7 @@
### 📸 Project Screenshots
-![project-screenshot](https://imgur.com/d1SmaRb.png)
+![project-screenshot](../../../assets/img/codeGen_ui_init.jpg)
<h2>🧐 Features</h2>

@@ -137,7 +137,7 @@ To access the frontend, open the following URL in your browser: `http://{host_ip
- "80:5173"
```
-![project-screenshot](https://imgur.com/d1SmaRb.png)
+![project-screenshot](../../assets/img/codeGen_ui_init.jpg)
## Install Copilot VSCode extension from Plugin Marketplace as the frontend
@@ -145,7 +145,7 @@ In addition to the Svelte UI, users can also install the Copilot VSCode extensio
Install `Neural Copilot` in VSCode as below.
-![Install-screenshot](https://i.imgur.com/cnHRAdD.png)
+![Install-screenshot](../../assets/img/codegen_copilot.png)
### How to Use
@@ -153,46 +153,46 @@ Install `Neural Copilot` in VSCode as below.
Please adjust the service URL in the extension settings based on the endpoint of the code generation backend service.
-![Setting-screenshot](https://i.imgur.com/4hjvKPu.png)
-![Setting-screenshot](https://i.imgur.com/AQZuzqd.png)
+![Setting-screenshot](../../assets/img/codegen_settings.png)
+![Setting-screenshot](../../assets/img/codegen_endpoint.png)
#### Customize
The Copilot lets users enter their own sensitive information and tokens in the user settings as needed. This customization improves the accuracy of the output to better meet individual requirements.
-![Customize](https://i.imgur.com/PkObak9.png)
+![Customize](../../assets/img/codegen_customize.png)
#### Code Suggestion
To trigger inline completion, type `# {your keyword}` (start with your programming language's comment keyword, such as `//` in C++ or `#` in Python). Make sure `Inline Suggest` is enabled in the VS Code settings.
For example:
-![code suggestion](https://i.imgur.com/sH5UoTO.png)
+![code suggestion](../../assets/img/codegen_suggestion.png)
To provide programmers with a smooth experience, the Copilot supports multiple ways to trigger inline code suggestions. If you are interested in the details, they are summarized as follows:
- Generate code from single-line comments: The simplest way introduced before.
- Generate code from consecutive single-line comments:
-![codegen from single-line comments](https://i.imgur.com/GZsQywX.png)
+![codegen from single-line comments](../../assets/img/codegen_single_line.png)
- Generate code from multi-line comments (this will not be triggered until there is at least one `space` outside the multi-line comment):
-![codegen from multi-line comments](https://i.imgur.com/PzhiWrG.png)
+![codegen from multi-line comments](../../assets/img/codegen_multi_line.png)
- Automatically complete multi-line comments:
-![auto complete](https://i.imgur.com/cJO3PQ0.jpg)
+![auto complete](../../assets/img/codegen_auto_complete.jpg)
### Chat with AI assistant
You can start a conversation with the AI programming assistant by clicking on the robot icon in the plugin bar on the left:
-![icon](https://i.imgur.com/f7rzfCQ.png)
+![icon](../../assets/img/codegen_icon.png)
Then you can see the conversation window on the left, where you can chat with the AI assistant:
-![dialog](https://i.imgur.com/aiYzU60.png)
+![dialog](../../assets/img/codegen_dialog.png)
There are 4 areas worth noting as shown in the screenshot above:
@@ -206,8 +206,8 @@ For example:
- Select code
-![select code](https://i.imgur.com/grvrtY6.png)
+![select code](../../assets/img/codegen_select_code.png)
- Ask question and get answer
-![qna](https://i.imgur.com/8Kdpld7.png)
+![qna](../../assets/img/codegen_qna.png)

@@ -4,7 +4,7 @@ Code translation is the process of converting code written in one programming la
The workflow falls into the following architecture:
-![architecture](https://i.imgur.com/ums0brC.png)
+![architecture](./assets/img/code_trans_architecture.png)
This Code Translation use case uses Text Generation Inference on Intel Gaudi2 or Intel Xeon Scalable processors. The Intel Gaudi2 accelerator supports both training and inference for deep learning models, in particular LLMs. Please visit [Habana AI products](https://habana.ai/products) for more details.

@@ -2,9 +2,9 @@
### 📸 Project Screenshots
-![project-screenshot](https://imgur.com/1M4xjok.png)
-![project-screenshot](https://imgur.com/IIbG4HN.png)
-![project-screenshot](https://imgur.com/FbThcUY.png)
+![project-screenshot](../../../assets/img/codeTrans_ui_init.png)
+![project-screenshot](../../../assets/img/codeTrans_ui_select.png)
+![project-screenshot](../../../assets/img/codeTrans_ui_response.png)
<h2>🧐 Features</h2>

@@ -6,9 +6,9 @@ Large Language Models (LLMs) have revolutionized the way we interact with text.
The architecture for document summarization is illustrated below:
-![Architecture](https://i.imgur.com/XT0YUhu.png)
+![Architecture](./assets/img/docsum_architecture.png)
-![Workflow](https://i.imgur.com/m9Ac9wy.png)
+![Workflow](./assets/img/docsum_workflow.png)
# Deploy Document Summarization Service

@@ -7,9 +7,9 @@ Large Language Models (LLMs) have revolutionized the way we interact with text,
The document summarization architecture is shown below:
-![Architecture](https://i.imgur.com/XT0YUhu.png)
+![Architecture](../assets/img/docsum_architecture.png)
-![Workflow](https://i.imgur.com/m9Ac9wy.png)
+![Workflow](../assets/img/docsum_workflow.png)
# Environment Setup

@@ -128,4 +128,4 @@ export LANGCHAIN_API_KEY=ls_...
Open this URL `http://{host_ip}:5173` in your browser to access the frontend.
-![project-screenshot](https://i.imgur.com/26zMnEr.png)
+![project-screenshot](../../assets/img/docSum_ui_text.png)

@@ -2,10 +2,10 @@
### 📸 Project Screenshots
-![project-screenshot](https://imgur.com/oRuDrGX.png)
-![project-screenshot](https://imgur.com/j6vo4gl.png)
-![project-screenshot](https://imgur.com/LPBvBmM.png)
-![project-screenshot](https://imgur.com/yHryOQS.png)
+![project-screenshot](../../../assets/img/docSum_ui_upload.png)
+![project-screenshot](../../../assets/img/docSum_ui_exchange.png)
+![project-screenshot](../../../assets/img/docSum_ui_response.png)
+![project-screenshot](../../../assets/img/docSum_ui_text.png)
<h2>🧐 Features</h2>

@@ -131,4 +131,4 @@ export LANGCHAIN_API_KEY=ls_...
Open this URL `http://{host_ip}:5173` in your browser to access the frontend.
-![project-screenshot](https://i.imgur.com/26zMnEr.png)
+![project-screenshot](../../assets/img/docSum_ui_text.png)

@@ -18,7 +18,7 @@ By integrating search capabilities with LLMs within the LangChain framework, thi
The workflow falls into the following architecture:
-![architecture](https://i.imgur.com/Caer3DT.png)
+![architecture](./assets/img/searchqna.png)
# Start Backend Service

@@ -3,7 +3,7 @@ Neural Chat</h1>
### 📸 Project Screenshots
-![project-screenshot](https://imgur.com/YFakQ7J.png)
+![project-screenshot](../../assets/img/search_ui_init.png)
<h2>🧐 Features</h2>

@@ -4,7 +4,7 @@ Language Translation is the communication of the meaning of a source-language te
The workflow falls into the following architecture:
-![architecture](https://i.imgur.com/5f9hoAW.png)
+![architecture](./assets/img/translation_architecture.png)
# Start Backend Service

@@ -2,8 +2,8 @@
### 📸 Project Screenshots
-![project-screenshot](https://imgur.com/yT2VDBX.png)
-![project-screenshot](https://imgur.com/8ajC7lE.png)
+![project-screenshot](../../assets/img/trans_ui_init.png)
+![project-screenshot](../../assets/img/trans_ui_select.png)
<h2>🧐 Features</h2>

@@ -11,12 +11,12 @@ Some noteworthy use case examples for VQA include:
The general architecture of VQA is shown below:
-![VQA](https://i.imgur.com/BUntlSn.png)
+![VQA](./assets/img/vqa.png)
This example guides you through deploying a [LLaVA](https://llava-vl.github.io/) (Large Language and Vision Assistant) model on Intel Gaudi2 to perform visual question answering tasks. The Intel Gaudi2 accelerator supports both training and inference for deep learning models, in particular LLMs. Please visit [Habana AI products](https://habana.ai/products/) for more details.
-![llava screenshot](https://i.imgur.com/Sqmoql8.png)
-![llava-screenshot](https://i.imgur.com/4wETEe7.png)
+![llava screenshot](./assets/img/llava_screenshot1.png)
+![llava-screenshot](./assets/img/llava_screenshot2.png)
## Start the LLaVA service
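
The 59 link rewrites in this commit all follow one mechanical pattern: each Markdown image link pointing at an imgur URL is replaced by a relative path under an `assets/img` directory. As a sketch only (the `relocate_images` helper and the URL-to-path mapping below are hypothetical; the actual commit chose local filenames by hand), such a migration could be scripted like this:

```python
import re

def relocate_images(markdown: str, mapping: dict) -> str:
    """Rewrite Markdown image links whose URL appears in `mapping`.

    mapping: {remote_url: local_relative_path}. Links whose URL is not
    in the mapping are left untouched.
    """
    def swap(match):
        alt, url = match.group(1), match.group(2)
        return "![{}]({})".format(alt, mapping.get(url, url))

    # Match ![alt](destination) with no spaces inside the destination.
    return re.sub(r"!\[([^\]]*)\]\(([^)\s]+)\)", swap, markdown)

doc = "![audioqna](https://i.imgur.com/2hit8HL.jpeg)"
print(relocate_images(doc, {"https://i.imgur.com/2hit8HL.jpeg": "./assets/img/audioqna.jpg"}))
# -> ![audioqna](./assets/img/audioqna.jpg)
```

Run over each changed README with its own mapping, this reproduces the one-for-one deletion/addition pairs seen in the hunks above.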