From f55398379b04dfd1ec4383806690e33d508de65d Mon Sep 17 00:00:00 2001
From: ZhaoqiongZ <106125927+ZhaoqiongZ@users.noreply.github.com>
Date: Wed, 29 May 2024 18:52:49 +0800
Subject: [PATCH] update README with format correction (#200)

Signed-off-by: Zheng, Zhaoqiong
---
 Translation/README.md | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/Translation/README.md b/Translation/README.md
index cee9a0cc5..111cab5a6 100644
--- a/Translation/README.md
+++ b/Translation/README.md
@@ -8,7 +8,7 @@ The workflow falls into the following architecture:

 # Start Backend Service

-1. Start the TGI service to deploy your LLM
+1. Start the TGI Service to deploy your LLM

 ```sh
 cd serving/tgi_gaudi
@@ -16,9 +16,9 @@ bash build_docker.sh
 bash launch_tgi_service.sh
 ```

-`launch_tgi_service.sh` by default uses `8080` as the TGI service's port. Please replace it if there are any port conflicts.
+The script `launch_tgi_service.sh` uses `8080` as the TGI service's port by default. Please replace it if any port conflict is detected.

-2. Start the Language Translation service
+2. Start the Language Translation Service

 ```sh
 cd langchain/docker
@@ -26,13 +26,13 @@ bash build_docker.sh
 docker run -it --name translation_server --net=host --ipc=host -e TGI_ENDPOINT=${TGI_ENDPOINT} -e HUGGINGFACEHUB_API_TOKEN=${HUGGINGFACEHUB_API_TOKEN} -e SERVER_PORT=8000 -e http_proxy=${http_proxy} -e https_proxy=${https_proxy} translation:latest bash
 ```

-Here is the explanation of some of the above parameters:
+**Note**: Set the following parameters before running the above command:

 - `TGI_ENDPOINT`: The endpoint of your TGI service, usually equal to `:`.
 - `HUGGINGFACEHUB_API_TOKEN`: Your HuggingFace hub API token, usually generated [here](https://huggingface.co/settings/tokens).
 - `SERVER_PORT`: The port of the Translation service on the host.

-3. Quick test
+3. Quick Test

 ```sh
 curl http://localhost:8000/v1/translation \