Enable OpenTelemetry Tracing for ChatQnA on Xeon and Gaudi by docker compose merge feature (#1488)
Signed-off-by: Louie, Tsai <louie.tsai@intel.com>
Signed-off-by: Tsai, Louie <louie.tsai@intel.com>

@@ -44,6 +44,14 @@ To set up environment variables for deploying ChatQnA services, follow these steps:
docker compose up -d
```
To enable OpenTelemetry Tracing, the compose.telemetry.yaml file needs to be merged with the default compose.yaml file; when multiple -f files are given, docker compose layers later files on top of earlier ones (see the config preview after the example below).

CPU example with the OpenTelemetry feature:
```bash
cd GenAIExamples/ChatQnA/docker_compose/intel/cpu/xeon/
docker compose -f compose.yaml -f compose.telemetry.yaml up -d
```
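Since docker compose itself performs the merge, the effective configuration can be inspected before anything starts. A quick sanity check (standard docker compose behavior, not specific to this change):

```bash
# Print the merged configuration without starting any containers.
# Later -f files override and extend earlier ones, so settings from
# compose.telemetry.yaml appear layered on top of compose.yaml.
docker compose -f compose.yaml -f compose.telemetry.yaml config
```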
It will automatically download the Docker images from `docker hub`:
```bash
@@ -263,12 +271,16 @@ If using vLLM as the LLM serving backend:
docker compose -f compose.yaml up -d
# Start ChatQnA without Rerank Pipeline
docker compose -f compose_without_rerank.yaml up -d
# Start ChatQnA with Rerank Pipeline and OpenTelemetry Tracing
docker compose -f compose.yaml -f compose.telemetry.yaml up -d
```
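Note that the same -f list should be passed to every compose command, otherwise docker compose will not include the overlay's services in the project definition. For example, tearing down the telemetry-enabled stack (standard docker compose usage):

```bash
# Stop and remove the merged stack; reuse the exact -f list from startup
# so the telemetry services are part of the project being torn down.
docker compose -f compose.yaml -f compose.telemetry.yaml down
```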
If using TGI as the LLM serving backend:
```bash
docker compose -f compose_tgi.yaml up -d
# Start ChatQnA with OpenTelemetry Tracing
docker compose -f compose_tgi.yaml -f compose_tgi.telemetry.yaml up -d
```
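Once the telemetry-enabled stack is running, a quick smoke test is to poke the tracing backend. The sketch below assumes the telemetry overlay exposes a Jaeger UI on Jaeger's default port 16686; both the host and the port are assumptions, so check the port mappings in compose.telemetry.yaml for your deployment:

```bash
# Hypothetical check: ask the Jaeger query API which services have
# reported traces. 16686 is Jaeger's default query/UI port (assumed here).
curl -s "http://localhost:16686/api/services"
```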
### Validate Microservices