merged InstructionTuning and RerankFinetuning into Finetuning.
Signed-off-by: Ye, Xinyu <xinyu.ye@intel.com>
Finetuning/README.md (new file, 91 lines)
@@ -0,0 +1,91 @@
# Finetuning

This example includes instruction tuning and rerank model finetuning. Instruction tuning is the process of further training an LLM on a dataset of (instruction, output) pairs in a supervised fashion, bridging the gap between the model's next-word-prediction objective and the user's objective of having the model adhere to human instructions. Rerank model finetuning is the process of further training a rerank model on a domain-specific dataset to improve its capability in that field. The implementation of this example deploys a Ray cluster for the task.
## Deploy Finetuning Service

### Deploy Finetuning Service on Xeon

Refer to the [Xeon Guide](./docker_compose/intel/cpu/xeon/README.md) for details.

### Deploy Finetuning Service on Gaudi

Refer to the [Gaudi Guide](./docker_compose/intel/hpu/gaudi/README.md) for details.
## Consume Finetuning Service

### 1. Upload a training file

#### Instruction tuning dataset example

Download the training file `alpaca_data.json` from [here](https://github.com/tatsu-lab/stanford_alpaca/blob/main/alpaca_data.json) and upload it to the server with the command below:
```bash
# upload a training file
curl http://${your_ip}:8015/v1/files -X POST -H "Content-Type: multipart/form-data" -F "file=@./alpaca_data.json" -F purpose="fine-tune"
```
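For reference, each record in this Alpaca-style dataset is a JSON object with `instruction`, `input`, and `output` fields. The record below is abbreviated for illustration; the actual file contains tens of thousands of such entries:

```json
[
  {
    "instruction": "Give three tips for staying healthy.",
    "input": "",
    "output": "1. Eat a balanced diet. 2. Exercise regularly. 3. Get enough sleep."
  }
]
```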

#### Rerank model finetuning dataset example

Download the toy example training file `toy_finetune_data.jsonl` from [here](https://github.com/FlagOpen/FlagEmbedding/blob/master/examples/finetune/toy_finetune_data.jsonl) and upload it to the server with the command below:
```bash
# upload a training file
curl http://${your_ip}:8015/v1/files -X POST -H "Content-Type: multipart/form-data" -F "file=@./toy_finetune_data.jsonl" -F purpose="fine-tune"
```
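Each line of this JSONL file is a JSON object with a `query` string, a list of positive passages `pos`, and a list of negative passages `neg`. The record below is an invented illustration of that shape, not a line from the actual file:

```json
{"query": "What is the capital of France?", "pos": ["Paris is the capital of France."], "neg": ["Berlin is the capital of Germany."]}
```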

### 2. Create fine-tuning job

#### Instruction tuning

After a training file like `alpaca_data.json` is uploaded, use the following command to launch a finetuning job with `meta-llama/Llama-2-7b-chat-hf` as the base model:
```bash
# create a finetuning job
curl http://${your_ip}:8015/v1/fine_tuning/jobs \
  -X POST \
  -H "Content-Type: application/json" \
  -d '{
    "training_file": "alpaca_data.json",
    "model": "meta-llama/Llama-2-7b-chat-hf"
  }'
```
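The exact response shape is not shown in this commit; since the endpoints mirror the OpenAI fine-tuning API, a successful request would typically return a job object along these lines (illustrative only; field names and values may differ):

```json
{
  "id": "ft-abc123",
  "object": "fine_tuning.job",
  "model": "meta-llama/Llama-2-7b-chat-hf",
  "training_file": "alpaca_data.json",
  "status": "running"
}
```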

The outputs of the finetuning job (`adapter_model.safetensors`, `adapter_config.json`, ...) are stored in `/home/user/comps/finetuning/src/output`, and other execution logs are stored in `/home/user/ray_results`.
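For example, on the Xeon deployment you can list these artifacts from the host. This is a hypothetical convenience check, assuming `finetuning-server` is the container name used in the Xeon guide:

```bash
# list the finetuning outputs produced inside the container
docker exec finetuning-server ls /home/user/comps/finetuning/src/output
```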

#### Rerank model finetuning

After the training file `toy_finetune_data.jsonl` is uploaded, use the following command to launch a finetuning job with `BAAI/bge-reranker-large` as the base model:
```bash
# create a finetuning job
curl http://${your_ip}:8015/v1/fine_tuning/jobs \
  -X POST \
  -H "Content-Type: application/json" \
  -d '{
    "training_file": "toy_finetune_data.jsonl",
    "model": "BAAI/bge-reranker-large",
    "General": {
      "task": "rerank",
      "lora_config": null
    }
  }'
```

### 3. Manage fine-tuning job

The commands below show how to list finetuning jobs, retrieve a finetuning job, cancel a finetuning job, and list the checkpoints of a finetuning job.
```bash
# list finetuning jobs
curl http://${your_ip}:8015/v1/fine_tuning/jobs -X GET

# retrieve one finetuning job
curl http://${your_ip}:8015/v1/fine_tuning/jobs/retrieve -X POST -H "Content-Type: application/json" -d "{\"fine_tuning_job_id\": \"${fine_tuning_job_id}\"}"

# cancel one finetuning job
curl http://${your_ip}:8015/v1/fine_tuning/jobs/cancel -X POST -H "Content-Type: application/json" -d "{\"fine_tuning_job_id\": \"${fine_tuning_job_id}\"}"

# list checkpoints of a finetuning job
curl http://${your_ip}:8015/v1/finetune/list_checkpoints -X POST -H "Content-Type: application/json" -d "{\"fine_tuning_job_id\": \"${fine_tuning_job_id}\"}"
```
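As a convenience, the job id can be captured from the list endpoint with `jq`. This is a sketch, assuming `jq` is installed and that the list response follows the OpenAI-style `{"data": [...]}` shape:

```bash
# grab the id of the first finetuning job, then reuse it in the calls above
fine_tuning_job_id=$(curl -s http://${your_ip}:8015/v1/fine_tuning/jobs | jq -r '.data[0].id')
echo $fine_tuning_job_id
```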

Finetuning/docker_compose/intel/cpu/xeon/README.md (new file, 26 lines)
@@ -0,0 +1,26 @@
# Deploy Finetuning Service on Xeon

This document outlines the deployment process for a finetuning service utilizing the [GenAIComps](https://github.com/opea-project/GenAIComps.git) microservice on an Intel Xeon server. The steps include Docker image creation and container deployment. We will publish the Docker images to Docker Hub, which will simplify the deployment process for this service.
## 🚀 Build Docker Images

First of all, you need to build the Docker images locally. This step can be skipped once the Docker images are published to Docker Hub.

### 1. Build Docker Image

Build the Docker image with the command below:
```bash
git clone https://github.com/opea-project/GenAIComps.git
cd GenAIComps
export HF_TOKEN=${your_huggingface_token}
docker build -t opea/finetuning:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy --build-arg HF_TOKEN=$HF_TOKEN -f comps/finetuning/src/Dockerfile .
```

### 2. Run Docker with CLI

Start the Docker container with the command below:
```bash
docker run -d --name="finetuning-server" -p 8015:8015 --runtime=runc --ipc=host -e http_proxy=$http_proxy -e https_proxy=$https_proxy opea/finetuning:latest
```
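Once the container is up, you can optionally verify that the service is reachable. This check is a suggestion, not part of the original steps, and assumes port 8015 is mapped to localhost:

```bash
# list finetuning jobs; an empty list indicates the service is up and reachable
curl http://localhost:8015/v1/fine_tuning/jobs -X GET
```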

Finetuning/docker_compose/intel/hpu/gaudi/README.md (new file, 26 lines)
@@ -0,0 +1,26 @@
# Deploy Finetuning Service on Gaudi

This document outlines the deployment process for a finetuning service utilizing the [GenAIComps](https://github.com/opea-project/GenAIComps.git) microservice on an Intel Gaudi server. The steps include Docker image creation and container deployment. We will publish the Docker images to Docker Hub, which will simplify the deployment process for this service.
## 🚀 Build Docker Images

First of all, you need to build the Docker images locally. This step can be skipped once the Docker images are published to Docker Hub.

### 1. Build Docker Image

Build the Docker image with the command below:
```bash
git clone https://github.com/opea-project/GenAIComps.git
cd GenAIComps
docker build -t opea/finetuning-gaudi:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/finetuning/src/Dockerfile.intel_hpu .
```

### 2. Run Docker with CLI

Start the Docker container with the command below:
```bash
export HF_TOKEN=${your_huggingface_token}
docker run --runtime=habana -e HABANA_VISIBLE_DEVICES=all -p 8015:8015 -e OMPI_MCA_btl_vader_single_copy_mechanism=none --cap-add=sys_nice --net=host --ipc=host -e https_proxy=$https_proxy -e http_proxy=$http_proxy -e no_proxy=$no_proxy -e HF_TOKEN=$HF_TOKEN opea/finetuning-gaudi:latest
```
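As on Xeon, you can optionally verify that the service is reachable once the container is up. Since this run command uses `--net=host`, the service listens directly on the host's port 8015 (a suggested check, not part of the original steps):

```bash
# list finetuning jobs; an empty list indicates the service is up and reachable
curl http://localhost:8015/v1/fine_tuning/jobs -X GET
```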

Finetuning/docker_image_build/build.yaml (new file, 22 lines)
@@ -0,0 +1,22 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

services:
  finetuning:
    build:
      args:
        http_proxy: ${http_proxy}
        https_proxy: ${https_proxy}
        no_proxy: ${no_proxy}
      context: GenAIComps
      dockerfile: comps/finetuning/src/Dockerfile
    image: ${REGISTRY:-opea}/finetuning:${TAG:-latest}
  finetuning-gaudi:
    build:
      args:
        http_proxy: ${http_proxy}
        https_proxy: ${https_proxy}
        no_proxy: ${no_proxy}
      context: GenAIComps
      dockerfile: comps/finetuning/src/Dockerfile.intel_hpu
    image: ${REGISTRY:-opea}/finetuning-gaudi:${TAG:-latest}
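A build file like this is typically consumed with `docker compose build`. The invocation below is a sketch, not part of this commit; it assumes the GenAIComps repository has been cloned into the build-context directory:

```bash
# hypothetical usage sketch: build both images from build.yaml
cd Finetuning/docker_image_build
git clone https://github.com/opea-project/GenAIComps.git  # provides the GenAIComps build context
export REGISTRY=opea TAG=latest                           # optional; these are the defaults
docker compose -f build.yaml build finetuning finetuning-gaudi
```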

Finetuning/tests/test_compose_on_gaudi.sh (new file, 200 lines)
File diff suppressed because one or more lines are too long

Finetuning/tests/test_compose_on_xeon.sh (new file, 201 lines)
File diff suppressed because one or more lines are too long