update containers

This commit is contained in:
tylertitsworth
2024-03-26 08:54:14 -07:00
parent 5ec21b94ee
commit 0f1659765a
14 changed files with 125 additions and 121 deletions

View File

@@ -3,16 +3,7 @@ This ChatQnA use case performs RAG using LangChain, Redis vectordb and Text Gene
# Environment Setup
To use [🤗 text-generation-inference](https://github.com/huggingface/text-generation-inference) on Habana Gaudi/Gaudi2, please follow these steps:
## Prepare Docker
Getting started is straightforward with the official Docker container. Simply pull the image using:
```bash
docker pull ghcr.io/huggingface/tgi-gaudi:1.2.1
```
Alternatively, you can build the Docker image yourself with:
## Build TGI Gaudi Docker Image
```bash
bash ./serving/tgi_gaudi/build_docker.sh
```
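To confirm the build succeeded (an optional quick check, not part of the original steps), list the local images and look for the `tgi_gaudi` tag assumed by the run commands later in this guide:
```bash
# Optional: verify the freshly built TGI Gaudi image is available locally.
docker images | grep tgi_gaudi
```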
@@ -77,7 +68,7 @@ docker cp 262e04bbe466:/usr/src/optimum-habana/examples/text-generation/quantiza
```bash
docker run -d -p 8080:80 -e QUANT_CONFIG=/data/maxabs_quant.json -e HUGGING_FACE_HUB_TOKEN=<your HuggingFace token> -v $volume:/data --runtime=habana -e HABANA_VISIBLE_DEVICES=all -e OMPI_MCA_btl_vader_single_copy_mechanism=none --cap-add=sys_nice --ipc=host tgi_gaudi --model-id meta-llama/Llama-2-7b-hf
docker run -d -p 8080:80 -e QUANT_CONFIG=/data/maxabs_quant.json -e HUGGING_FACE_HUB_TOKEN=<your HuggingFace token> -v $volume:/data --runtime=habana -e HABANA_VISIBLE_DEVICES="4,5,6" -e OMPI_MCA_btl_vader_single_copy_mechanism=none --cap-add=sys_nice --ipc=host tgi_gaudi --model-id meta-llama/Llama-2-7b-hf
```
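Once the container is up, one optional way to confirm the service responds is to call TGI's `/generate` endpoint on the published port (8080 in this example):
```bash
# Optional sanity check: send a minimal generation request to the TGI service.
curl http://localhost:8080/generate \
  -X POST \
  -d '{"inputs":"What is Deep Learning?","parameters":{"max_new_tokens":32}}' \
  -H 'Content-Type: application/json'
```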
@@ -86,21 +77,11 @@ Now the TGI Gaudi will launch the FP8 model by default. Please note that current
## Launch Redis
```bash
docker pull redis/redis-stack:latest
docker compose -f langchain/docker/docker-compose-redis.yml up -d
```
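If you want to verify the Redis stack container came up (an optional check, not part of the original steps), list the running containers created from that image:
```bash
# Optional: confirm a redis-stack container is running.
docker ps --filter ancestor=redis/redis-stack:latest
```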
## Launch LangChain Docker
### Build LangChain Docker Image
```bash
cd langchain/docker/
bash ./build_docker.sh
```
### Launch LangChain Docker
Update the `HUGGINGFACEHUB_API_TOKEN` environment variable with your Hugging Face token in `docker-compose-langchain.yml`.
```bash
@@ -108,6 +89,9 @@ docker compose -f docker-compose-langchain.yml up -d
cd ../../
```
> [!NOTE]
> If you modified any files and want those changes included in this step, add `--build` to the end of the command to build the container image locally instead of pulling it from Docker Hub.
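For example, the rebuild-and-launch variant described in the note would look like this (the same compose command as above with `--build` appended):
```bash
# Rebuild the LangChain image locally instead of pulling it, then start it.
docker compose -f docker-compose-langchain.yml up -d --build
```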
## Ingest data into redis
Each time the Redis container is launched, ingest the data into the container with the following steps:
@@ -122,17 +106,6 @@ Note: `ingest.py` will download the embedding model, please set the proxy if nec
# Start LangChain Server
## Enable GuardRails using Meta's Llama Guard model (Optional)
We offer content moderation support using Meta's [Llama Guard](https://huggingface.co/meta-llama/LlamaGuard-7b) model. To activate GuardRails, follow the instructions below to deploy the Llama Guard model on TGI Gaudi.
```bash
volume=$PWD/data
model_id="meta-llama/LlamaGuard-7b"
docker run -p 8088:80 -v $volume:/data --runtime=habana -e HABANA_VISIBLE_DEVICES=all -e OMPI_MCA_btl_vader_single_copy_mechanism=none --cap-add=sys_nice --ipc=host -e HUGGING_FACE_HUB_TOKEN=<your HuggingFace token> -e HTTPS_PROXY=$https_proxy -e HTTP_PROXY=$https_proxy tgi_gaudi --model-id $model_id
export SAFETY_GUARD_ENDPOINT="http://xxx.xxx.xxx.xxx:8088"
```
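Before wiring GuardRails in, an optional check is to probe the endpoint you just exported and make sure the Llama Guard TGI instance answers (port 8088 in this example):
```bash
# Optional: probe the Llama Guard TGI service with a short generation request.
curl ${SAFETY_GUARD_ENDPOINT}/generate \
  -X POST \
  -d '{"inputs":"Hello","parameters":{"max_new_tokens":16}}' \
  -H 'Content-Type: application/json'
```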
## Start the Backend Service
Make sure the TGI Gaudi service is running and that data has been ingested into Redis, then launch the backend service:

View File

@@ -1,36 +1,23 @@
FROM langchain/langchain
FROM langchain/langchain:latest
ARG http_proxy
ARG https_proxy
ENV http_proxy=$http_proxy
ENV https_proxy=$https_proxy
RUN apt-get update -y && apt-get install -y --no-install-recommends --fix-missing \
libgl1-mesa-glx \
libjemalloc-dev
RUN apt-get update && \
apt-get upgrade -y && \
apt-get install -y \
libgl1-mesa-glx \
libjemalloc-dev
RUN useradd -m -s /bin/bash user && \
mkdir -p /home/user && \
chown -R user /home/user/
RUN pip install --upgrade pip \
sentence-transformers \
redis \
unstructured \
unstructured[all-docs] \
langchain-cli \
pydantic==1.10.13 \
langchain==0.1.12 \
poetry \
pymupdf \
easyocr \
langchain_benchmarks \
pyarrow \
jupyter \
intel-extension-for-pytorch \
intel-openmp
USER user
ENV PYTHONPATH=/ws:/qna-app/app
COPY requirements.txt /tmp/requirements.txt
COPY qna-app /qna-app
WORKDIR /qna-app
RUN pip install --upgrade pip && \
pip install -r /tmp/requirements.txt
ENV PYTHONPATH=/home/user:/home/user/qna-app/app
WORKDIR /home/user/qna-app
COPY qna-app /home/user/qna-app
ENTRYPOINT ["/usr/bin/sleep", "infinity"]

View File

@@ -1,3 +0,0 @@
#!/bin/bash
docker build . -t qna-rag-redis:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy

View File

@@ -1,9 +1,16 @@
version: '3'
services:
qna-rag-redis-server:
image: qna-rag-redis:latest
build:
args:
http_proxy: ${http_proxy}
https_proxy: ${https_proxy}
dockerfile: Dockerfile
image: intel/gen-ai-examples:qna-rag-redis-server
container_name: qna-rag-redis-server
environment:
- http_proxy=${http_proxy}
- https_proxy=${https_proxy}
- "REDIS_PORT=6379"
- "EMBED_MODEL=BAAI/bge-base-en-v1.5"
- "REDIS_SCHEMA=schema_dim_768.yml"

View File

@@ -0,0 +1,15 @@
easyocr
intel-extension-for-pytorch
intel-openmp
jupyter
langchain_benchmarks
langchain-cli
langchain==0.1.12
poetry
pyarrow
pydantic==1.10.13
pymupdf
redis
sentence-transformers
unstructured
unstructured[pdf]

View File

@@ -4,17 +4,9 @@ Code generation is a noteworthy application of Large Language Model (LLM) techno
# Environment Setup
To use [🤗 text-generation-inference](https://github.com/huggingface/text-generation-inference) on Intel Gaudi2, please follow these steps:
## Prepare Gaudi Image
Getting started is straightforward with the official Docker container. Simply pull the image using:
## Build TGI Gaudi Docker Image
```bash
docker pull ghcr.io/huggingface/tgi-gaudi:1.2.1
```
Alternatively, you can build the Docker image yourself with:
```bash
bash ./serving/tgi_gaudi/build_docker.sh
bash ./tgi_gaudi/build_docker.sh
```
## Launch TGI Gaudi Service
@@ -43,7 +35,7 @@ export TGI_ENDPOINT="xxx.xxx.xxx.xxx:8080"
## Launch Copilot Docker
### Build Copilot Docker Image
### Build Copilot Docker Image (Optional)
```bash
cd codegen
@@ -54,10 +46,9 @@ cd ..
### Launch Copilot Docker
```bash
docker run -it --net=host --ipc=host -v /var/run/docker.sock:/var/run/docker.sock copilot:latest
docker run -it -e http_proxy=${http_proxy} -e https_proxy=${https_proxy} --net=host --ipc=host -v /var/run/docker.sock:/var/run/docker.sock intel/gen-ai-examples:copilot bash
```
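Inside the interactive session this command opens, one optional sanity check is to confirm the pinned LangChain release from `requirements.txt` is importable:
```bash
# Run inside the copilot container: verify the LangChain installation.
python -c "import langchain; print(langchain.__version__)"
```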
# Start Copilot Server
## Start the Backend Service

View File

@@ -12,6 +12,25 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#!/bin/bash
FROM langchain/langchain:latest
docker build . -t copilot:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy
RUN apt-get update -y && apt-get install -y --no-install-recommends --fix-missing \
libgl1-mesa-glx \
libjemalloc-dev
RUN useradd -m -s /bin/bash user && \
mkdir -p /home/user && \
chown -R user /home/user/
USER user
COPY requirements.txt /tmp/requirements.txt
RUN pip install -U -r /tmp/requirements.txt
ENV PYTHONPATH=/home/user:/home/user/codegen-app
WORKDIR /home/user/codegen-app
COPY codegen-app /home/user/codegen-app
SHELL ["/bin/bash", "-c"]

View File

@@ -0,0 +1,5 @@
huggingface_hub
langchain-cli
langchain==0.1.11
pydantic==1.10.13
shortuuid

View File

@@ -42,7 +42,7 @@ export TGI_ENDPOINT="http://xxx.xxx.xxx.xxx:8080"
## Launch Document Summary Docker
### Build Document Summary Docker Image
### Build Document Summary Docker Image (Optional)
```bash
cd langchain/docker/
@@ -53,7 +53,7 @@ cd ../../
### Launch Document Summary Docker
```bash
docker run -it --net=host --ipc=host -v /var/run/docker.sock:/var/run/docker.sock document-summarize:latest
docker run -it --net=host --ipc=host -e http_proxy=${http_proxy} -e https_proxy=${https_proxy} -v /var/run/docker.sock:/var/run/docker.sock intel/gen-ai-examples:document-summarize bash
```

View File

@@ -1,35 +1,21 @@
FROM langchain/langchain
FROM langchain/langchain:latest
ARG http_proxy
ARG https_proxy
ENV http_proxy=$http_proxy
ENV https_proxy=$https_proxy
RUN apt-get update -y && apt-get install -y --no-install-recommends --fix-missing \
libgl1-mesa-glx \
libjemalloc-dev
RUN apt-get update && \
apt-get upgrade -y && \
apt-get install -y \
libgl1-mesa-glx \
libjemalloc-dev
RUN useradd -m -s /bin/bash user && \
mkdir -p /home/user && \
chown -R user /home/user/
RUN pip install --upgrade pip \
sentence-transformers \
langchain-cli \
pydantic==1.10.13 \
langchain==0.1.12 \
poetry \
langchain_benchmarks \
pyarrow \
jupyter \
docx2txt \
pypdf \
beautifulsoup4 \
python-multipart \
intel-extension-for-pytorch \
intel-openmp
USER user
ENV PYTHONPATH=/ws:/summarize-app/app
COPY requirements.txt /tmp/requirements.txt
COPY summarize-app /summarize-app
WORKDIR /summarize-app
RUN pip install --upgrade pip && \
pip install -r /tmp/requirements.txt
CMD ["/bin/bash"]
ENV PYTHONPATH=/home/user:/home/user/summarize-app/app
WORKDIR /home/user/summarize-app
COPY summarize-app /home/user/summarize-app

View File

@@ -0,0 +1,14 @@
beautifulsoup4
docx2txt
intel-extension-for-pytorch
intel-openmp
jupyter
langchain_benchmarks
langchain-cli
langchain==0.1.12
poetry
pyarrow
pydantic==1.10.13
pypdf
python-multipart
sentence-transformers

View File

@@ -12,13 +12,13 @@ This example guides you through how to deploy a [LLaVA](https://llava-vl.github.
```
cd serving/
docker build . --build-arg http_proxy=${http_proxy} --build-arg https_proxy=${http_proxy} -t llava_gaudi:latest
docker build . --build-arg http_proxy=${http_proxy} --build-arg https_proxy=${http_proxy} -t intel/gen-ai-examples:llava-gaudi
```
2. Start the LLaVA service on Intel Gaudi2
```
docker run -d -p 8084:80 -p 8085:8000 -v ./data:/root/.cache/huggingface/hub/ -e http_proxy=$http_proxy -e https_proxy=$http_proxy -v $PWD/llava_server:/llava_server --runtime=habana -e HABANA_VISIBLE_DEVICES=all -e OMPI_MCA_btl_vader_single_copy_mechanism=none --cap-add=sys_nice --ipc=host llava_gaudi
docker run -d -p 8084:80 -p 8085:8000 -v ./data:/root/.cache/huggingface/hub/ -e http_proxy=$http_proxy -e https_proxy=$http_proxy -v $PWD/llava_server:/llava_server --runtime=habana -e HABANA_VISIBLE_DEVICES=all -e OMPI_MCA_btl_vader_single_copy_mechanism=none --cap-add=sys_nice --ipc=host intel/gen-ai-examples:llava-gaudi
```
Here is an explanation of the above parameters:

View File

@@ -13,22 +13,28 @@
# limitations under the License.
# HABANA environment
FROM vault.habana.ai/gaudi-docker/1.14.0/ubuntu22.04/habanalabs/pytorch-installer-2.1.1 as hpu
FROM vault.habana.ai/gaudi-docker/1.14.0/ubuntu22.04/habanalabs/pytorch-installer-2.1.1 AS hpu
# Set environment variables
ENV LANG=en_US.UTF-8
ENV PYTHONPATH=/root:/usr/lib/habanalabs/:/optimum-habana
ENV PYTHONPATH=/home/user:/usr/lib/habanalabs/:/optimum-habana
RUN useradd -m -s /bin/bash user && \
mkdir -p /home/user && \
chown -R user /home/user/
USER user
# Install required branch
RUN git clone https://github.com/lkk12014402/optimum-habana.git && \
cd optimum-habana && \
git checkout enable_llava_generation
RUN git clone https://github.com/lkk12014402/optimum-habana.git /optimum-habana -b enable_llava_generation
COPY requirements.txt /tmp/requirements.txt
# Install dependency
RUN pip install --upgrade-strategy eager optimum[habana] && \
pip install fastapi uvicorn
RUN pip install --no-cache-dir -U -r /tmp/requirements.txt
# work dir should contain the server
WORKDIR /llava_server
COPY llava_server /llava_server
ENTRYPOINT ["python", "llava_server.py"]
ENTRYPOINT ["python", "llava_server.py"]

View File

@@ -0,0 +1,4 @@
eager
fastapi
optimum[habana]
uvicorn