doc: Fix headings (#706)

Only one H1 heading, the document title, is allowed. The rest must be
H2 or deeper, so adjust them accordingly.

Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
Author: David Kinder
Date: 2024-09-18 20:51:55 -04:00
Committed by: GitHub
Parent: ef90fbb761
Commit: f6ae4fa7db

3 changed files with 20 additions and 20 deletions


@@ -6,9 +6,9 @@ This embedding microservice is designed to efficiently convert text into vectori
**Note** - The BridgeTower model implemented in Prediction Guard can actually embed text, images, or text + images (jointly). For now this service only embeds text, but a follow-on contribution will enable the multimodal functionality.
-# 🚀 Start Microservice with Docker
+## 🚀 Start Microservice with Docker
-## Setup Environment Variables
+### Setup Environment Variables
Set up the following environment variables first
@@ -16,20 +16,20 @@ Setup the following environment variables first
export PREDICTIONGUARD_API_KEY=${your_predictionguard_api_key}
```
-## Build Docker Images
+### Build Docker Images
```bash
cd ../../..
docker build -t opea/embedding-predictionguard:latest -f comps/embeddings/predictionguard/Dockerfile .
```
-## Start Service
+### Start Service
```bash
docker run -d --name="embedding-predictionguard" -p 6000:6000 -e PREDICTIONGUARD_API_KEY=$PREDICTIONGUARD_API_KEY opea/embedding-predictionguard:latest
```
-# 🚀 Consume Embeddings Service
+## 🚀 Consume Embeddings Service
```bash
curl localhost:6000/v1/embeddings \
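
# (Editor's illustrative sketch, not part of the diff: the command above is cut off by the
# hunk. A complete request might look roughly like this; the JSON field name "text" is an
# assumption about the payload shape, not something shown in this diff.)
curl localhost:6000/v1/embeddings \
  -X POST \
  -H 'Content-Type: application/json' \
  -d '{"text": "What is deep learning?"}'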


@@ -1,27 +1,27 @@
-# Introduction
+# Prediction Guard Introduction
[Prediction Guard](https://docs.predictionguard.com) allows you to utilize hosted open access LLMs, LVMs, and embedding functionality with seamlessly integrated safeguards. In addition to providing a scalable access to open models, Prediction Guard allows you to configure factual consistency checks, toxicity filters, PII filters, and prompt injection blocking. Join the [Prediction Guard Discord channel](https://discord.gg/TFHgnhAFKd) and request an API key to get started.
-# Get Started
+## Get Started
-## Build Docker Image
+### Build Docker Image
```bash
cd ../../..
docker build -t opea/llm-textgen-predictionguard:latest -f comps/llms/text-generation/predictionguard/Dockerfile .
```
-## Run the Predictionguard Microservice
+### Run the Predictionguard Microservice
```bash
docker run -d -p 9000:9000 -e PREDICTIONGUARD_API_KEY=$PREDICTIONGUARD_API_KEY --name llm-textgen-predictionguard opea/llm-textgen-predictionguard:latest
```
-# Consume the Prediction Guard Microservice
+## Consume the Prediction Guard Microservice
See the [Prediction Guard docs](https://docs.predictionguard.com/) for available model options.
-## Without streaming
+### Without streaming
```bash
curl -X POST http://localhost:9000/v1/chat/completions \
@@ -37,7 +37,7 @@ curl -X POST http://localhost:9000/v1/chat/completions \
}'
```
-## With streaming
+### With streaming
```bash
curl -N -X POST http://localhost:9000/v1/chat/completions \
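
# (Editor's illustrative sketch, not part of the diff: the request body is cut off by the
# hunk. A complete streaming call might look roughly like this; the model name is a
# placeholder and the "stream" field is an assumption, see the Prediction Guard docs for
# the models actually available. The non-streaming example above is the same request
# without -N and without the streaming flag.)
curl -N -X POST http://localhost:9000/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -d '{
        "model": "your-model-name",
        "messages": [{"role": "user", "content": "Tell me a joke."}],
        "stream": true
      }'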


@@ -4,23 +4,23 @@
Visual Question Answering is one of the multimodal tasks empowered by LVMs (Large Visual Models). This microservice supports visual Q&A by using a LLaVA model available via the Prediction Guard API. It accepts two inputs: a prompt and an image. It outputs the answer to the prompt about the image.
-# 🚀1. Start Microservice with Python
+## 🚀1. Start Microservice with Python
-## 1.1 Install Requirements
+### 1.1 Install Requirements
```bash
pip install -r requirements.txt
```
-## 1.2 Start LVM Service
+### 1.2 Start LVM Service
```bash
python lvm.py
```
-# 🚀2. Start Microservice with Docker (Option 2)
+## 🚀2. Start Microservice with Docker (Option 2)
-## 2.1 Setup Environment Variables
+### 2.1 Setup Environment Variables
Set up the following environment variables first
@@ -28,20 +28,20 @@ Setup the following environment variables first
export PREDICTIONGUARD_API_KEY=${your_predictionguard_api_key}
```
-## 2.1 Build Docker Images
+### 2.1 Build Docker Images
```bash
cd ../../..
docker build -t opea/lvm-predictionguard:latest -f comps/lvms/predictionguard/Dockerfile .
```
-## 2.2 Start Service
+### 2.2 Start Service
```bash
docker run -d --name="lvm-predictionguard" -p 9399:9399 -e PREDICTIONGUARD_API_KEY=$PREDICTIONGUARD_API_KEY opea/lvm-predictionguard:latest
```
-# 🚀3. Consume LVM Service
+## 🚀3. Consume LVM Service
```bash
curl -X POST http://localhost:9399/v1/lvm \
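
# (Editor's illustrative sketch, not part of the diff: the command above is cut off by the
# hunk. A complete request might look roughly like this; the field names "image" (a
# base64-encoded image) and "prompt" follow the two inputs described above, but are an
# assumption rather than something shown in this diff.)
curl -X POST http://localhost:9399/v1/lvm \
  -H 'Content-Type: application/json' \
  -d '{"image": "<base64-encoded image>", "prompt": "What is in this image?"}'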