update README with format correction (#200)

Signed-off-by: Zheng, Zhaoqiong <zhaoqiong.zheng@intel.com>
Author: ZhaoqiongZ
Date: 2024-05-29 18:52:49 +08:00
Committed by: GitHub
Parent: 2d9a9da7b4
Commit: f55398379b


@@ -8,7 +8,7 @@ The workflow falls into the following architecture:
# Start Backend Service
-1. Start the TGI service to deploy your LLM
+1. Start the TGI Service to deploy your LLM
```sh
cd serving/tgi_gaudi
@@ -16,9 +16,9 @@ bash build_docker.sh
bash launch_tgi_service.sh
```
-`launch_tgi_service.sh` by default uses `8080` as the TGI service's port. Please replace it if there are any port conflicts.
+`launch_tgi_service.sh` the script uses `8080` as the TGI service's port by default. Please replace it if any port conflicts detected.
-2. Start the Language Translation service
+2. Start the Language Translation Service
```sh
cd langchain/docker
@@ -26,13 +26,13 @@ bash build_docker.sh
docker run -it --name translation_server --net=host --ipc=host -e TGI_ENDPOINT=${TGI_ENDPOINT} -e HUGGINGFACEHUB_API_TOKEN=${HUGGINGFACEHUB_API_TOKEN} -e SERVER_PORT=8000 -e http_proxy=${http_proxy} -e https_proxy=${https_proxy} translation:latest bash
```
-Here is the explanation of some of the above parameters:
+**Note**: Set the following parameters before running the above command
- `TGI_ENDPOINT`: The endpoint of your TGI service, usually equal to `<ip of your machine>:<port of your TGI service>`.
- `HUGGINGFACEHUB_API_TOKEN`: Your HuggingFace hub API token, usually generated [here](https://huggingface.co/settings/tokens).
- `SERVER_PORT`: The port of the Translation service on the host.
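The parameters above are plain environment variables, so they can be exported in the shell before the `docker run` command. A minimal sketch with placeholder values (the endpoint and token shown here are hypothetical; substitute your own):

```sh
# Hypothetical values for illustration only -- replace with your own.
export TGI_ENDPOINT="127.0.0.1:8080"       # <ip of your machine>:<port of your TGI service>
export HUGGINGFACEHUB_API_TOKEN="hf_xxxx"  # token generated at https://huggingface.co/settings/tokens
echo "Using TGI endpoint: ${TGI_ENDPOINT}"
```

With these exported, the `-e TGI_ENDPOINT=${TGI_ENDPOINT}` and `-e HUGGINGFACEHUB_API_TOKEN=${HUGGINGFACEHUB_API_TOKEN}` flags in the `docker run` command pick them up from the host shell.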
-3. Quick test
+3. Quick Test
```sh
curl http://localhost:8000/v1/translation \