Update img storage location (#265)
Signed-off-by: Yue, Wenjiao <wenjiao.yue@intel.com>
Signed-off-by: Spycsh <sihan.chen@intel.com>
Signed-off-by: letonghan <letong.han@intel.com>
Signed-off-by: zehao-intel <zehao.huang@intel.com>
@@ -1,6 +1,6 @@
# AudioQnA
In this example we will show you how to build an Audio Question and Answering (AudioQnA) application. AudioQnA serves as a talking bot: it accepts a user's audio input, converts it to text, feeds the text to an LLM, and then converts the LLM's text answer back into audio output.
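The pipeline described above can be sketched as follows. All three stage functions are placeholders standing in for the real ASR, LLM, and TTS services; the actual service names and APIs in this repository may differ:

```python
# Minimal sketch of the AudioQnA flow: audio -> text -> LLM answer -> audio.
# The three stage functions are placeholders, not the project's real APIs.

def speech_to_text(audio: bytes) -> str:
    # Placeholder ASR stage: a real deployment calls an ASR model/service.
    return audio.decode("utf-8")  # pretend the "audio" is already text

def ask_llm(question: str) -> str:
    # Placeholder LLM stage: a real deployment queries a served LLM.
    return f"Answer to: {question}"

def text_to_speech(text: str) -> bytes:
    # Placeholder TTS stage: a real deployment synthesizes audio.
    return text.encode("utf-8")

def audio_qna(audio_in: bytes) -> bytes:
    question = speech_to_text(audio_in)
    answer = ask_llm(question)
    return text_to_speech(answer)
```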
BIN AudioQnA/assets/img/audio_ui.png (new file, 30 KiB)
BIN AudioQnA/assets/img/audio_ui_record.png (new file, 50 KiB)
BIN AudioQnA/assets/img/audioqna.jpg (new file, 48 KiB)
@@ -2,8 +2,8 @@
### 📸 Project Screenshots

<h2>🧐 Features</h2>
@@ -6,7 +6,7 @@ RAG bridges the knowledge gap by dynamically fetching relevant information from
The ChatQnA architecture is shown below:
This ChatQnA use case performs RAG using LangChain, Redis VectorDB, and Text Generation Inference on Intel Gaudi2 or Intel Xeon Scalable processors. The Intel Gaudi2 accelerator supports both training and inference for deep learning models, in particular LLMs. Please visit [Habana AI products](https://habana.ai/products) for more details.
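As a rough illustration of the RAG step, the sketch below retrieves the best-matching document for a query and builds an augmented prompt. A word-overlap score stands in for the embedding similarity a real Redis vector store would compute; nothing here is the repository's actual LangChain code:

```python
# Toy RAG sketch: retrieve the most relevant document for a query, then
# build an augmented prompt for the LLM. Real deployments use embeddings
# and a vector database (e.g. Redis); the word-overlap score below is a
# stand-in for vector similarity, used only for illustration.

def score(query: str, doc: str) -> int:
    # Crude relevance proxy: count shared words.
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = ["Gaudi2 supports LLM training and inference.",
        "Redis can serve as a vector database."]
prompt = build_prompt("Which accelerator supports LLM inference?", docs)
```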
BIN ChatQnA/assets/img/chat_ui_init.png (new file, 15 KiB)
BIN ChatQnA/assets/img/chat_ui_response.png (new file, 74 KiB)
BIN ChatQnA/assets/img/chat_ui_upload.png (new file, 86 KiB)
BIN ChatQnA/assets/img/chatqna_architecture.png (new file, 501 KiB)
@@ -261,4 +261,4 @@ To access the frontend, open the following URL in your browser: http://{host_ip}
- "80:5173"
```
@@ -2,9 +2,9 @@
### 📸 Project Screenshots

<h2>🧐 Features</h2>
@@ -319,4 +319,4 @@ To access the frontend, open the following URL in your browser: http://{host_ip}
- "80:5173"
```
@@ -16,7 +16,7 @@ In this example, we present a Code Copilot application to showcase how code gene
The workflow follows the architecture shown below:
# Deploy CodeGen Service
BIN CodeGen/assets/img/codeGen_ui_init.jpg (new file, 9.5 KiB)
BIN CodeGen/assets/img/codegen_architecture.png (new file, 147 KiB)
BIN CodeGen/assets/img/codegen_auto_complete.jpg (new file, 11 KiB)
BIN CodeGen/assets/img/codegen_copilot.png (new file, 289 KiB)
BIN CodeGen/assets/img/codegen_customize.png (new file, 44 KiB)
BIN CodeGen/assets/img/codegen_dialog.png (new file, 38 KiB)
BIN CodeGen/assets/img/codegen_endpoint.png (new file, 28 KiB)
BIN CodeGen/assets/img/codegen_icon.png (new file, 2.0 KiB)
BIN CodeGen/assets/img/codegen_multi_line.png (new file, 51 KiB)
BIN CodeGen/assets/img/codegen_qna.png (new file, 74 KiB)
BIN CodeGen/assets/img/codegen_select_code.png (new file, 36 KiB)
BIN CodeGen/assets/img/codegen_settings.png (new file, 64 KiB)
BIN CodeGen/assets/img/codegen_single_line.png (new file, 44 KiB)
BIN CodeGen/assets/img/codegen_suggestion.png (new file, 31 KiB)
@@ -130,7 +130,7 @@ To access the frontend, open the following URL in your browser: `http://{host_ip
- "80:5173"
```
## Install Copilot VSCode extension from Plugin Marketplace as the frontend
@@ -138,7 +138,7 @@ In addition to the Svelte UI, users can also install the Copilot VSCode extensio
Install `Neural Copilot` in VSCode as shown below.
### How to Use
@@ -146,46 +146,46 @@ Install `Neural Copilot` in VSCode as below.
Please adjust the service URL in the extension settings based on the endpoint of the CodeGen backend service.
#### Customize
The Copilot lets users enter their own tokens and other sensitive information in the user settings as needed. This customization improves the accuracy and relevance of the output for individual requirements.
#### Code Suggestion
To trigger inline completion, type `# {your keyword}` (start with your programming language's comment keyword, e.g. `//` in C++ or `#` in Python). Make sure `Inline Suggest` is enabled in the VS Code settings.
For example:
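Typing a single-line comment such as the one below can trigger an inline suggestion. The completed function body shown here is illustrative, not a recorded suggestion; actual suggestions will vary:

```python
# python function to return the sum of squares of a list
def sum_of_squares(nums):
    # An inline suggestion similar to this body may be proposed by the
    # copilot; this particular implementation is only illustrative.
    return sum(n * n for n in nums)
```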
To provide programmers with a smooth experience, the Copilot supports multiple ways to trigger inline code suggestions, summarized as follows:
- Generate code from a single-line comment: the simplest way, introduced above.
- Generate code from consecutive single-line comments:
- Generate code from multi-line comments; this will not trigger until there is at least one `space` outside the multi-line comment:
- Automatically complete multi-line comments:
### Chat with AI assistant
You can start a conversation with the AI programming assistant by clicking on the robot icon in the plugin bar on the left:
Then you can see the conversation window on the left, where you can chat with the AI assistant:
There are 4 areas worth noting as shown in the screenshot above:
@@ -199,8 +199,8 @@ For example:
- Select code
- Ask a question and get an answer
@@ -2,7 +2,7 @@
### 📸 Project Screenshots

<h2>🧐 Features</h2>
@@ -137,7 +137,7 @@ To access the frontend, open the following URL in your browser: `http://{host_ip
- "80:5173"
```
## Install Copilot VSCode extension from Plugin Marketplace as the frontend
@@ -145,7 +145,7 @@ In addition to the Svelte UI, users can also install the Copilot VSCode extensio
Install `Neural Copilot` in VSCode as shown below.
### How to Use
@@ -153,46 +153,46 @@ Install `Neural Copilot` in VSCode as below.
Please adjust the service URL in the extension settings based on the endpoint of the code generation backend service.
#### Customize
The Copilot lets users enter their own tokens and other sensitive information in the user settings as needed. This customization improves the accuracy and relevance of the output for individual requirements.
#### Code Suggestion
To trigger inline completion, type `# {your keyword}` (start with your programming language's comment keyword, e.g. `//` in C++ or `#` in Python). Make sure `Inline Suggest` is enabled in the VS Code settings.
For example:
To provide programmers with a smooth experience, the Copilot supports multiple ways to trigger inline code suggestions, summarized as follows:
- Generate code from a single-line comment: the simplest way, introduced above.
- Generate code from consecutive single-line comments:
- Generate code from multi-line comments; this will not trigger until there is at least one `space` outside the multi-line comment:
- Automatically complete multi-line comments:
### Chat with AI assistant
You can start a conversation with the AI programming assistant by clicking on the robot icon in the plugin bar on the left:
Then you can see the conversation window on the left, where you can chat with the AI assistant:
There are 4 areas worth noting as shown in the screenshot above:
@@ -206,8 +206,8 @@ For example:
- Select code
- Ask a question and get an answer
@@ -4,7 +4,7 @@ Code translation is the process of converting code written in one programming la
The workflow follows the architecture shown below:
This Code Translation use case uses Text Generation Inference on Intel Gaudi2 or Intel Xeon Scalable processors. The Intel Gaudi2 accelerator supports both training and inference for deep learning models, in particular LLMs. Please visit [Habana AI products](https://habana.ai/products) for more details.
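As a sketch of what a translation request to the backend might look like, the snippet below composes a code-translation prompt and a Text Generation Inference style `/generate` payload. The prompt wording, host, port, and parameter values are illustrative assumptions, not this repository's exact configuration:

```python
# Sketch: compose a code-translation prompt and a TGI-style request body.
# The payload shape follows Text Generation Inference's /generate API;
# prompt wording and the example URL below are illustrative.

import json

def translation_prompt(source_lang: str, target_lang: str, code: str) -> str:
    return (f"Translate the following {source_lang} code to {target_lang}.\n"
            f"Only output the translated code.\n\n{code}")

prompt = translation_prompt("Python", "Go", "print('hello')")
payload = json.dumps({
    "inputs": prompt,
    "parameters": {"max_new_tokens": 256},
})
# The payload would then be POSTed to the backend, e.g.:
#   curl http://{host_ip}:8080/generate -X POST -d "$payload" \
#        -H 'Content-Type: application/json'
```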
BIN CodeTrans/assets/img/codeTrans_ui_init.png (new file, 20 KiB)
BIN CodeTrans/assets/img/codeTrans_ui_response.png (new file, 76 KiB)
BIN CodeTrans/assets/img/codeTrans_ui_select.png (new file, 34 KiB)
BIN CodeTrans/assets/img/code_trans_architecture.png (new file, 120 KiB)
@@ -2,9 +2,9 @@
### 📸 Project Screenshots

<h2>🧐 Features</h2>
@@ -6,9 +6,9 @@ Large Language Models (LLMs) have revolutionized the way we interact with text.
The architecture for document summarization is illustrated below:
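As background, long-document summarization is often staged in a map-reduce fashion: summarize chunks independently, then summarize the partial summaries. The sketch below illustrates that staging with a placeholder `summarize`; it is not this repository's actual pipeline:

```python
# Illustrative "map-reduce" style summarization: chunk the document,
# summarize each chunk, then summarize the concatenated summaries.
# `summarize` is a placeholder for a call to the served LLM.

def summarize(text: str) -> str:
    # Placeholder: a real deployment sends `text` to the LLM with a
    # summarization prompt; here we just truncate.
    return text[:40]

def chunk(text: str, size: int = 200) -> list[str]:
    return [text[i:i + size] for i in range(0, len(text), size)]

def summarize_document(doc: str) -> str:
    partial = [summarize(c) for c in chunk(doc)]
    return summarize(" ".join(partial))
```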
# Deploy Document Summarization Service
BIN DocSum/assets/img/docSum_ui_exchange.png (new file, 147 KiB)
BIN DocSum/assets/img/docSum_ui_response.png (new file, 168 KiB)
BIN DocSum/assets/img/docSum_ui_text.png (new file, 229 KiB)
BIN DocSum/assets/img/docSum_ui_upload.png (new file, 48 KiB)
BIN DocSum/assets/img/docsum_architecture.png (new file, 148 KiB)
BIN DocSum/assets/img/docsum_workflow.png (new file, 91 KiB)
@@ -7,9 +7,9 @@ Large Language Models (LLMs) have revolutionized the way we interact with text,
The document summarization architecture is shown below:
# Environment Setup
@@ -128,4 +128,4 @@ export LANGCHAIN_API_KEY=ls_...
Open this URL `http://{host_ip}:5173` in your browser to access the frontend.
@@ -2,10 +2,10 @@
### 📸 Project Screenshots

<h2>🧐 Features</h2>
@@ -131,4 +131,4 @@ export LANGCHAIN_API_KEY=ls_...
Open this URL `http://{host_ip}:5173` in your browser to access the frontend.
@@ -18,7 +18,7 @@ By integrating search capabilities with LLMs within the LangChain framework, thi
The workflow follows the architecture shown below:
# Start Backend Service
BIN SearchQnA/assets/img/search_ui_init.png (new file, 13 KiB)
@@ -3,7 +3,7 @@ Neural Chat</h1>
### 📸 Project Screenshots

<h2>🧐 Features</h2>
@@ -4,7 +4,7 @@ Language Translation is the communication of the meaning of a source-language te
The workflow follows the architecture shown below:
# Start Backend Service
BIN Translation/assets/img/trans_ui_init.png (new file, 18 KiB)
BIN Translation/assets/img/trans_ui_select.png (new file, 41 KiB)
BIN Translation/assets/img/translation_architecture.png (new file, 103 KiB)
@@ -2,8 +2,8 @@
### 📸 Project Screenshots

<h2>🧐 Features</h2>
@@ -11,12 +11,12 @@ Some noteworthy use case examples for VQA include:
The general VQA architecture is shown below:
This example guides you through deploying a [LLaVA](https://llava-vl.github.io/) (Large Language and Vision Assistant) model on Intel Gaudi2 to perform visual question answering tasks. The Intel Gaudi2 accelerator supports both training and inference for deep learning models, in particular LLMs. Please visit [Habana AI products](https://habana.ai/products/) for more details.
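A request to such a VQA service generally carries the image (base64-encoded) together with the question. The field names and endpoint in this sketch are assumptions for illustration; consult the service's own API for the actual schema:

```python
# Sketch of a visual-question-answering request body: the image is
# base64-encoded and sent with the question. Field names and the endpoint
# are illustrative, not a documented schema.

import base64
import json

def build_vqa_request(image_bytes: bytes, question: str) -> str:
    return json.dumps({
        "img_b64_str": base64.b64encode(image_bytes).decode("ascii"),
        "prompt": question,
    })

body = build_vqa_request(b"\x89PNG...", "What is in this picture?")
# POST `body` to the LLaVA service endpoint, e.g. http://{host_ip}:8085/...
```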
## Start the LLaVA service