AgentQnA example (#601)

* initial code and readme for hierarchical agent example

* agent test with openai llm passed

* update readme and add test

* update test

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* change example name and update docker yaml

Signed-off-by: minmin-intel <minmin.hou@intel.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* change diagram name and test script name

Signed-off-by: minmin-intel <minmin.hou@intel.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* update test

---------

Signed-off-by: minmin-intel <minmin.hou@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
minmin-intel
2024-08-21 07:10:22 -07:00
committed by GitHub
parent 46af6f3bc4
commit 67df2804de
9 changed files with 703 additions and 0 deletions

106
AgentQnA/README.md Normal file

@@ -0,0 +1,106 @@
# Agents for Question Answering
## Overview
This example showcases a hierarchical multi-agent system for question-answering applications. The architecture diagram is shown below. The supervisor agent interfaces with the user and dispatches tasks to the worker agent and other tools to gather information and come up with answers. The worker agent uses the retrieval tool to generate answers to the queries posted by the supervisor agent. Other tools used by the supervisor agent may include APIs that interface with knowledge graphs, SQL databases, external knowledge bases, etc.
![Architecture Overview](assets/agent_qna_arch.png)
### Why agents for question answering?
1. Improve the relevancy of retrieved context.
   Agents can rephrase user queries, decompose them, and iterate to get the most relevant context for answering a user's question. Compared to conventional RAG, a RAG agent can significantly improve the correctness and relevancy of the answer.
2. Use tools to get additional knowledge.
   For example, knowledge graphs and SQL databases can be exposed as APIs for agents to gather knowledge that may be missing from the retrieval vector database.
3. A hierarchical agent architecture can further improve performance.
   Expert worker agents, such as a retrieval agent, knowledge graph agent, or SQL agent, can provide high-quality output for different aspects of a complex query, and the supervisor agent can aggregate that information into a comprehensive answer.
### Roadmap
- v0.9: The worker agent uses an open-source web search tool (DuckDuckGo); agents use OpenAI GPT-4o-mini as the LLM backend.
- v1.0: The worker agent uses the OPEA retrieval megaservice as its tool.
- v1.0 or later: Agents use an open-source LLM backend.
- v1.1 or later: Add safeguards.
## Getting started
1. Build the agent docker image <br>
First, clone the OPEA GenAIComps repo:
```
export WORKDIR=<your-work-directory>
cd $WORKDIR
git clone https://github.com/opea-project/GenAIComps.git
```
Then build the agent docker image. Both the supervisor agent and the worker agent use the same docker image, but when we launch the two agents we specify different strategies and register different tools.
```
cd GenAIComps
docker build -t opea/comps-agent-langchain:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/agent/langchain/docker/Dockerfile .
```
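If the build succeeds, the image should appear in your local image list (an optional check):
```
docker images | grep comps-agent-langchain
```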
2. Launch tool services <br>
In this example, we will use some of the mock APIs provided in the Meta CRAG KDD Challenge to demonstrate the benefits of gaining additional context from mock knowledge graphs.
```
docker run -d -p=8080:8000 docker.io/aicrowd/kdd-cup-24-crag-mock-api:v0
```
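Optionally, you can sanity-check that the mock API server is up by calling one of the endpoints used by the tools in this example. This is a minimal check, assuming the port mapping above and the request format used by the CRAG client in `tools/pycragapi.py`:
```
curl -X POST http://localhost:8080/music/get_artist_birth_place \
  -H "Content-Type: application/json" \
  -d '{"query": "Taylor Swift"}'
```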
3. Set up the environment for this example <br>
First, clone this repo:
```
cd $WORKDIR
git clone https://github.com/opea-project/GenAIExamples.git
```
Second, set up the environment variables:
```
export TOOLSET_PATH=$WORKDIR/GenAIExamples/AgentQnA/tools/
# OpenAI API key, needed for the OpenAI LLM backend used in this example
export OPENAI_API_KEY=<your-openai-key>
```
4. Launch agent services <br>
The configurations of the supervisor agent and the worker agent are defined in the docker compose yaml file. We currently use OpenAI GPT-4o-mini as the LLM, and we plan to add support for llama3.1-70B-instruct (served by TGI-Gaudi) in a subsequent release.
To use the OpenAI LLM, run the commands below:
```
cd $WORKDIR/GenAIExamples/AgentQnA/docker/openai/
bash launch_agent_service_openai.sh
```
## Validate services
First, look at the logs of the agent docker containers:
```
docker logs docgrader-agent-endpoint
```
```
docker logs react-agent-endpoint
```
You should see something like "HTTP server setup successful" if the docker containers started successfully.
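The validation commands below use `${ip_address}`. If it is not set in your current shell, you can derive it the same way the launch script does:
```
export ip_address=$(hostname -I | awk '{print $1}')
```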
Second, validate the worker agent:
```
curl http://${ip_address}:9095/v1/chat/completions -X POST -H "Content-Type: application/json" -d '{
"query": "Most recent album by Taylor Swift"
}'
```
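The exact answer will vary, but based on how the supervisor's `search_knowledge_base` tool in `tools/tools.py` parses the worker agent's reply, the response should be a JSON object with a `text` field, roughly like:
```
{"text": "Taylor Swift's most recent album is ..."}
```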
Third, validate the supervisor agent:
```
curl http://${ip_address}:9090/v1/chat/completions -X POST -H "Content-Type: application/json" -d '{
"query": "Most recent album by Taylor Swift"
}'
```
## How to register your own tools with the agent
You can take a look at the tools yaml and python files in this example. For more details, please refer to the "Provide your own tools" section in the instructions [here](https://github.com/minmin-intel/GenAIComps/tree/agent-comp-dev/comps/agent/langchain#-4-provide-your-own-tools).
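As a sketch, a custom tool pairs a YAML entry with a Python function, following the same pattern as the yaml and python files under `tools/` in this example (the tool name and file names below are hypothetical):
```
# my_tools.yaml (hypothetical)
get_weather:
  description: Get the current weather for a city.
  callable_api: my_tools.py:get_weather
  args_schema:
    city:
      type: str
      description: city name
  return_output: weather_info
```
```
# my_tools.py (hypothetical)
def get_weather(city: str) -> str:
    """Get the current weather for a city."""
    # Call your own API or database here; return text the agent can reason over.
    return f"The weather in {city} is sunny."
```
Mount the directory containing these files into the agent container (as `TOOLSET_PATH` is mounted in the docker compose file) and point the agent's `tools` environment variable at the yaml file.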

AgentQnA/assets/agent_qna_arch.png Binary file not shown (69 KiB)

63
AgentQnA/docker/openai/docker-compose-agent-openai.yaml Normal file

@@ -0,0 +1,63 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
services:
  worker-docgrader-agent:
    image: opea/comps-agent-langchain:latest
    container_name: docgrader-agent-endpoint
    volumes:
      - ${WORKDIR}/GenAIComps/comps/agent/langchain/:/home/user/comps/agent/langchain/
      - ${TOOLSET_PATH}:/home/user/tools/
    ports:
      - "9095:9095"
    ipc: host
    environment:
      ip_address: ${ip_address}
      strategy: rag_agent
      recursion_limit: ${recursion_limit}
      llm_engine: openai
      OPENAI_API_KEY: ${OPENAI_API_KEY}
      model: ${model}
      temperature: ${temperature}
      max_new_tokens: ${max_new_tokens}
      streaming: false
      tools: /home/user/tools/worker_agent_tools.yaml
      require_human_feedback: false
      no_proxy: ${no_proxy}
      http_proxy: ${http_proxy}
      https_proxy: ${https_proxy}
      LANGCHAIN_API_KEY: ${LANGCHAIN_API_KEY}
      LANGCHAIN_TRACING_V2: ${LANGCHAIN_TRACING_V2}
      LANGCHAIN_PROJECT: "opea-worker-agent-service"
      port: 9095
  supervisor-react-agent:
    image: opea/comps-agent-langchain:latest
    container_name: react-agent-endpoint
    volumes:
      - ${WORKDIR}/GenAIComps/comps/agent/langchain/:/home/user/comps/agent/langchain/
      - ${TOOLSET_PATH}:/home/user/tools/
    ports:
      - "9090:9090"
    ipc: host
    environment:
      ip_address: ${ip_address}
      strategy: react_langgraph
      recursion_limit: ${recursion_limit}
      llm_engine: openai
      OPENAI_API_KEY: ${OPENAI_API_KEY}
      model: ${model}
      temperature: ${temperature}
      max_new_tokens: ${max_new_tokens}
      streaming: ${streaming}
      tools: /home/user/tools/supervisor_agent_tools.yaml
      require_human_feedback: false
      no_proxy: ${no_proxy}
      http_proxy: ${http_proxy}
      https_proxy: ${https_proxy}
      LANGCHAIN_API_KEY: ${LANGCHAIN_API_KEY}
      LANGCHAIN_TRACING_V2: ${LANGCHAIN_TRACING_V2}
      LANGCHAIN_PROJECT: "opea-supervisor-agent-service"
      CRAG_SERVER: $CRAG_SERVER
      WORKER_AGENT_URL: $WORKER_AGENT_URL
      port: 9090

13
AgentQnA/docker/openai/launch_agent_service_openai.sh Normal file

@@ -0,0 +1,13 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
export ip_address=$(hostname -I | awk '{print $1}')
export recursion_limit=12
export model="gpt-4o-mini-2024-07-18"
export temperature=0
export max_new_tokens=512
export OPENAI_API_KEY=${OPENAI_API_KEY}
export WORKER_AGENT_URL="http://${ip_address}:9095/v1/chat/completions"
export CRAG_SERVER=http://${ip_address}:8080
docker compose -f docker-compose-agent-openai.yaml up -d


@@ -0,0 +1,75 @@
#!/bin/bash
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
set -e
echo "IMAGE_REPO=${IMAGE_REPO}"
echo "OPENAI_API_KEY=${OPENAI_API_KEY}"
WORKPATH=$(dirname "$PWD")
export WORKDIR=$WORKPATH/../../
echo "WORKDIR=${WORKDIR}"
export ip_address=$(hostname -I | awk '{print $1}')
export TOOLSET_PATH=$WORKDIR/GenAIExamples/AgentQnA/tools/
function build_agent_docker_image() {
cd $WORKDIR
if [ ! -d "GenAIComps" ] ; then
git clone https://github.com/opea-project/GenAIComps.git
fi
cd GenAIComps
echo PWD: $(pwd)
docker build -t opea/comps-agent-langchain:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/agent/langchain/docker/Dockerfile .
}
function start_services() {
echo "Starting CRAG server"
docker run -d -p=8080:8000 docker.io/aicrowd/kdd-cup-24-crag-mock-api:v0
echo "Starting Agent services"
cd $WORKDIR/GenAIExamples/AgentQnA/docker/openai
bash launch_agent_service_openai.sh
}
function validate() {
local CONTENT="$1"
local EXPECTED_RESULT="$2"
local SERVICE_NAME="$3"
if echo "$CONTENT" | grep -q "$EXPECTED_RESULT"; then
echo "[ $SERVICE_NAME ] Content is as expected: $CONTENT"
echo 0
else
echo "[ $SERVICE_NAME ] Content does not match the expected result: $CONTENT"
echo 1
fi
}
function run_tests() {
echo "----------------Test supervisor agent ----------------"
local CONTENT=$(http_proxy="" curl http://${ip_address}:9090/v1/chat/completions -X POST -H "Content-Type: application/json" -d '{
"query": "Most recent album by Taylor Swift"
}')
local EXIT_CODE=$(validate "$CONTENT" "Taylor" "react-agent-endpoint")
docker logs react-agent-endpoint
if [ "$EXIT_CODE" == "1" ]; then
exit 1
fi
}
function stop_services() {
echo "Stopping CRAG server"
docker stop $(docker ps -q --filter ancestor=docker.io/aicrowd/kdd-cup-24-crag-mock-api:v0)
echo "Stopping Agent services"
docker stop $(docker ps -q --filter ancestor=opea/comps-agent-langchain:latest)
}
function main() {
build_agent_docker_image
start_services
run_tests
stop_services
}
main

330
AgentQnA/tools/pycragapi.py Normal file

@@ -0,0 +1,330 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import json
import os
from typing import List
import requests
class CRAG(object):
"""A client for interacting with the CRAG server, offering methods to query various domains such as Open, Movie, Finance, Music, and Sports. Each method corresponds to an API endpoint on the CRAG server.
Attributes:
server (str): The base URL of the CRAG server. Defaults to "http://127.0.0.1:8080".
Methods:
open_search_entity_by_name(query: str) -> dict: Search for entities by name in the Open domain.
open_get_entity(entity: str) -> dict: Retrieve detailed information about an entity in the Open domain.
movie_get_person_info(person_name: str) -> dict: Get information about a person related to movies.
movie_get_movie_info(movie_name: str) -> dict: Get information about a movie.
movie_get_year_info(year: str) -> dict: Get information about movies released in a specific year.
movie_get_movie_info_by_id(movie_id: int) -> dict: Get movie information by its unique ID.
movie_get_person_info_by_id(person_id: int) -> dict: Get person information by their unique ID.
finance_get_company_name(query: str) -> dict: Search for company names in the finance domain.
finance_get_ticker_by_name(query: str) -> dict: Retrieve the ticker symbol for a given company name.
finance_get_price_history(ticker_name: str) -> dict: Get the price history for a given ticker symbol.
finance_get_detailed_price_history(ticker_name: str) -> dict: Get detailed price history for a ticker symbol.
finance_get_dividends_history(ticker_name: str) -> dict: Get dividend history for a ticker symbol.
finance_get_market_capitalization(ticker_name: str) -> dict: Retrieve market capitalization for a ticker symbol.
finance_get_eps(ticker_name: str) -> dict: Get earnings per share (EPS) for a ticker symbol.
finance_get_pe_ratio(ticker_name: str) -> dict: Get the price-to-earnings (PE) ratio for a ticker symbol.
finance_get_info(ticker_name: str) -> dict: Get financial information for a ticker symbol.
music_search_artist_entity_by_name(artist_name: str) -> dict: Search for music artists by name.
music_search_song_entity_by_name(song_name: str) -> dict: Search for songs by name.
music_get_billboard_rank_date(rank: int, date: str = None) -> dict: Get Billboard ranking for a specific rank and date.
music_get_billboard_attributes(date: str, attribute: str, song_name: str) -> dict: Get attributes of a song from Billboard rankings.
music_grammy_get_best_artist_by_year(year: int) -> dict: Get the Grammy Best New Artist for a specific year.
music_grammy_get_award_count_by_artist(artist_name: str) -> dict: Get the total Grammy awards won by an artist.
music_grammy_get_award_count_by_song(song_name: str) -> dict: Get the total Grammy awards won by a song.
music_grammy_get_best_song_by_year(year: int) -> dict: Get the Grammy Song of the Year for a specific year.
music_grammy_get_award_date_by_artist(artist_name: str) -> dict: Get the years an artist won a Grammy award.
music_grammy_get_best_album_by_year(year: int) -> dict: Get the Grammy Album of the Year for a specific year.
music_grammy_get_all_awarded_artists() -> dict: Get all artists awarded the Grammy Best New Artist.
music_get_artist_birth_place(artist_name: str) -> dict: Get the birthplace of an artist.
music_get_artist_birth_date(artist_name: str) -> dict: Get the birth date of an artist.
music_get_members(band_name: str) -> dict: Get the member list of a band.
music_get_lifespan(artist_name: str) -> dict: Get the lifespan of an artist.
music_get_song_author(song_name: str) -> dict: Get the author of a song.
music_get_song_release_country(song_name: str) -> dict: Get the release country of a song.
music_get_song_release_date(song_name: str) -> dict: Get the release date of a song.
music_get_artist_all_works(artist_name: str) -> dict: Get all works by an artist.
sports_soccer_get_games_on_date(team_name: str, date: str) -> dict: Get soccer games on a specific date.
sports_nba_get_games_on_date(team_name: str, date: str) -> dict: Get NBA games on a specific date.
sports_nba_get_play_by_play_data_by_game_ids(game_ids: List[str]) -> dict: Get NBA play by play data for a set of game ids.
Note:
Each method performs a POST request to the corresponding API endpoint and returns the response as a JSON dictionary.
"""
def __init__(self):
self.server = os.environ.get("CRAG_SERVER", "http://127.0.0.1:8080")
def open_search_entity_by_name(self, query: str):
url = self.server + "/open/search_entity_by_name"
headers = {"accept": "application/json"}
data = {"query": query}
result = requests.post(url, json=data, headers=headers)
return json.loads(result.text)
def open_get_entity(self, entity: str):
url = self.server + "/open/get_entity"
headers = {"accept": "application/json"}
data = {"query": entity}
result = requests.post(url, json=data, headers=headers)
return json.loads(result.text)
def movie_get_person_info(self, person_name: str):
url = self.server + "/movie/get_person_info"
headers = {"accept": "application/json"}
data = {"query": person_name}
result = requests.post(url, json=data, headers=headers)
return json.loads(result.text)
def movie_get_movie_info(self, movie_name: str):
url = self.server + "/movie/get_movie_info"
headers = {"accept": "application/json"}
data = {"query": movie_name}
result = requests.post(url, json=data, headers=headers)
return json.loads(result.text)
def movie_get_year_info(self, year: str):
url = self.server + "/movie/get_year_info"
headers = {"accept": "application/json"}
data = {"query": year}
result = requests.post(url, json=data, headers=headers)
return json.loads(result.text)
def movie_get_movie_info_by_id(self, movid_id: int):
url = self.server + "/movie/get_movie_info_by_id"
headers = {"accept": "application/json"}
data = {"query": movid_id}
result = requests.post(url, json=data, headers=headers)
return json.loads(result.text)
def movie_get_person_info_by_id(self, person_id: int):
url = self.server + "/movie/get_person_info_by_id"
headers = {"accept": "application/json"}
data = {"query": person_id}
result = requests.post(url, json=data, headers=headers)
return json.loads(result.text)
def finance_get_company_name(self, query: str):
url = self.server + "/finance/get_company_name"
headers = {"accept": "application/json"}
data = {"query": query}
result = requests.post(url, json=data, headers=headers)
return json.loads(result.text)
def finance_get_ticker_by_name(self, query: str):
url = self.server + "/finance/get_ticker_by_name"
headers = {"accept": "application/json"}
data = {"query": query}
result = requests.post(url, json=data, headers=headers)
return json.loads(result.text)
def finance_get_price_history(self, ticker_name: str):
url = self.server + "/finance/get_price_history"
headers = {"accept": "application/json"}
data = {"query": ticker_name}
result = requests.post(url, json=data, headers=headers)
return json.loads(result.text)
def finance_get_detailed_price_history(self, ticker_name: str):
url = self.server + "/finance/get_detailed_price_history"
headers = {"accept": "application/json"}
data = {"query": ticker_name}
result = requests.post(url, json=data, headers=headers)
return json.loads(result.text)
def finance_get_dividends_history(self, ticker_name: str):
url = self.server + "/finance/get_dividends_history"
headers = {"accept": "application/json"}
data = {"query": ticker_name}
result = requests.post(url, json=data, headers=headers)
return json.loads(result.text)
def finance_get_market_capitalization(self, ticker_name: str):
url = self.server + "/finance/get_market_capitalization"
headers = {"accept": "application/json"}
data = {"query": ticker_name}
result = requests.post(url, json=data, headers=headers)
return json.loads(result.text)
def finance_get_eps(self, ticker_name: str):
url = self.server + "/finance/get_eps"
headers = {"accept": "application/json"}
data = {"query": ticker_name}
result = requests.post(url, json=data, headers=headers)
return json.loads(result.text)
def finance_get_pe_ratio(self, ticker_name: str):
url = self.server + "/finance/get_pe_ratio"
headers = {"accept": "application/json"}
data = {"query": ticker_name}
result = requests.post(url, json=data, headers=headers)
return json.loads(result.text)
def finance_get_info(self, ticker_name: str):
url = self.server + "/finance/get_info"
headers = {"accept": "application/json"}
data = {"query": ticker_name}
result = requests.post(url, json=data, headers=headers)
return json.loads(result.text)
def music_search_artist_entity_by_name(self, artist_name: str):
url = self.server + "/music/search_artist_entity_by_name"
headers = {"accept": "application/json"}
data = {"query": artist_name}
result = requests.post(url, json=data, headers=headers)
return json.loads(result.text)
def music_search_song_entity_by_name(self, song_name: str):
url = self.server + "/music/search_song_entity_by_name"
headers = {"accept": "application/json"}
data = {"query": song_name}
result = requests.post(url, json=data, headers=headers)
return json.loads(result.text)
def music_get_billboard_rank_date(self, rank: int, date: str = None):
url = self.server + "/music/get_billboard_rank_date"
headers = {"accept": "application/json"}
data = {"rank": rank, "date": date}
result = requests.post(url, json=data, headers=headers)
return json.loads(result.text)
def music_get_billboard_attributes(self, date: str, attribute: str, song_name: str):
url = self.server + "/music/get_billboard_attributes"
headers = {"accept": "application/json"}
data = {"date": date, "attribute": attribute, "song_name": song_name}
result = requests.post(url, json=data, headers=headers)
return json.loads(result.text)
def music_grammy_get_best_artist_by_year(self, year: int):
url = self.server + "/music/grammy_get_best_artist_by_year"
headers = {"accept": "application/json"}
data = {"query": year}
result = requests.post(url, json=data, headers=headers)
return json.loads(result.text)
def music_grammy_get_award_count_by_artist(self, artist_name: str):
url = self.server + "/music/grammy_get_award_count_by_artist"
headers = {"accept": "application/json"}
data = {"query": artist_name}
result = requests.post(url, json=data, headers=headers)
return json.loads(result.text)
def music_grammy_get_award_count_by_song(self, song_name: str):
url = self.server + "/music/grammy_get_award_count_by_song"
headers = {"accept": "application/json"}
data = {"query": song_name}
result = requests.post(url, json=data, headers=headers)
return json.loads(result.text)
def music_grammy_get_best_song_by_year(self, year: int):
url = self.server + "/music/grammy_get_best_song_by_year"
headers = {"accept": "application/json"}
data = {"query": year}
result = requests.post(url, json=data, headers=headers)
return json.loads(result.text)
def music_grammy_get_award_date_by_artist(self, artist_name: str):
url = self.server + "/music/grammy_get_award_date_by_artist"
headers = {"accept": "application/json"}
data = {"query": artist_name}
result = requests.post(url, json=data, headers=headers)
return json.loads(result.text)
def music_grammy_get_best_album_by_year(self, year: int):
url = self.server + "/music/grammy_get_best_album_by_year"
headers = {"accept": "application/json"}
data = {"query": year}
result = requests.post(url, json=data, headers=headers)
return json.loads(result.text)
def music_grammy_get_all_awarded_artists(self):
url = self.server + "/music/grammy_get_all_awarded_artists"
headers = {"accept": "application/json"}
result = requests.post(url, headers=headers)
return json.loads(result.text)
def music_get_artist_birth_place(self, artist_name: str):
url = self.server + "/music/get_artist_birth_place"
headers = {"accept": "application/json"}
data = {"query": artist_name}
result = requests.post(url, json=data, headers=headers)
return json.loads(result.text)
def music_get_artist_birth_date(self, artist_name: str):
url = self.server + "/music/get_artist_birth_date"
headers = {"accept": "application/json"}
data = {"query": artist_name}
result = requests.post(url, json=data, headers=headers)
return json.loads(result.text)
def music_get_members(self, band_name: str):
url = self.server + "/music/get_members"
headers = {"accept": "application/json"}
data = {"query": band_name}
result = requests.post(url, json=data, headers=headers)
return json.loads(result.text)
def music_get_lifespan(self, artist_name: str):
url = self.server + "/music/get_lifespan"
headers = {"accept": "application/json"}
data = {"query": artist_name}
result = requests.post(url, json=data, headers=headers)
return json.loads(result.text)
def music_get_song_author(self, song_name: str):
url = self.server + "/music/get_song_author"
headers = {"accept": "application/json"}
data = {"query": song_name}
result = requests.post(url, json=data, headers=headers)
return json.loads(result.text)
def music_get_song_release_country(self, song_name: str):
url = self.server + "/music/get_song_release_country"
headers = {"accept": "application/json"}
data = {"query": song_name}
result = requests.post(url, json=data, headers=headers)
return json.loads(result.text)
def music_get_song_release_date(self, song_name: str):
url = self.server + "/music/get_song_release_date"
headers = {"accept": "application/json"}
data = {"query": song_name}
result = requests.post(url, json=data, headers=headers)
return json.loads(result.text)
def music_get_artist_all_works(self, song_name: str):
url = self.server + "/music/get_artist_all_works"
headers = {"accept": "application/json"}
data = {"query": song_name}
result = requests.post(url, json=data, headers=headers)
return json.loads(result.text)
def sports_soccer_get_games_on_date(self, date: str, team_name: str = None):
url = self.server + "/sports/soccer/get_games_on_date"
headers = {"accept": "application/json"}
data = {"team_name": team_name, "date": date}
result = requests.post(url, json=data, headers=headers)
return json.loads(result.text)
def sports_nba_get_games_on_date(self, date: str, team_name: str = None):
url = self.server + "/sports/nba/get_games_on_date"
headers = {"accept": "application/json"}
data = {"team_name": team_name, "date": date}
result = requests.post(url, json=data, headers=headers)
return json.loads(result.text)
def sports_nba_get_play_by_play_data_by_game_ids(self, game_ids: List[str]):
url = self.server + "/sports/nba/get_play_by_play_data_by_game_ids"
headers = {"accept": "application/json"}
data = {"game_ids": game_ids}
result = requests.post(url, json=data, headers=headers)
return json.loads(result.text)

59
AgentQnA/tools/supervisor_agent_tools.yaml Normal file

@@ -0,0 +1,59 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
search_knowledge_base:
  description: Search knowledge base for a given query. Returns text related to the query.
  callable_api: tools.py:search_knowledge_base
  args_schema:
    query:
      type: str
      description: query
  return_output: retrieved_data

get_artist_birth_place:
  description: Get the birth place of an artist.
  callable_api: tools.py:get_artist_birth_place
  args_schema:
    artist_name:
      type: str
      description: artist name
  return_output: birth_place

get_billboard_rank_date:
  description: Get Billboard ranking for a specific rank and date.
  callable_api: tools.py:get_billboard_rank_date
  args_schema:
    rank:
      type: int
      description: Billboard rank
    date:
      type: str
      description: date
  return_output: billboard_info

get_song_release_date:
  description: Get the release date of a song.
  callable_api: tools.py:get_song_release_date
  args_schema:
    song_name:
      type: str
      description: song name
  return_output: release_date

get_members:
  description: Get the member list of a band.
  callable_api: tools.py:get_members
  args_schema:
    band_name:
      type: str
      description: band name
  return_output: members

get_grammy_best_artist_by_year:
  description: Get the Grammy Best New Artist for a specific year.
  callable_api: tools.py:get_grammy_best_artist_by_year
  args_schema:
    year:
      type: int
      description: year
  return_output: grammy_best_new_artist

52
AgentQnA/tools/tools.py Normal file

@@ -0,0 +1,52 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
import os
import requests
from tools.pycragapi import CRAG
def search_knowledge_base(query: str) -> str:
    """Search the knowledge base for a specific query."""
    # use worker agent (DocGrader) to search the knowledge base
    url = os.environ.get("WORKER_AGENT_URL")
    print(url)
    proxies = {"http": ""}
    payload = {
        "query": query,
    }
    response = requests.post(url, json=payload, proxies=proxies)
    return response.json()["text"]


def get_grammy_best_artist_by_year(year: int) -> dict:
    """Get the Grammy Best New Artist for a specific year."""
    api = CRAG()
    year = int(year)
    return api.music_grammy_get_best_artist_by_year(year)


def get_members(band_name: str) -> dict:
    """Get the member list of a band."""
    api = CRAG()
    return api.music_get_members(band_name)


def get_artist_birth_place(artist_name: str) -> dict:
    """Get the birthplace of an artist."""
    api = CRAG()
    return api.music_get_artist_birth_place(artist_name)


def get_billboard_rank_date(rank: int, date: str = None) -> dict:
    """Get Billboard ranking for a specific rank and date."""
    api = CRAG()
    rank = int(rank)
    return api.music_get_billboard_rank_date(rank, date)


def get_song_release_date(song_name: str) -> dict:
    """Get the release date of a song."""
    api = CRAG()
    return api.music_get_song_release_date(song_name)

5
AgentQnA/tools/worker_agent_tools.yaml Normal file

@@ -0,0 +1,5 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
duckduckgo_search:
  callable_api: ddg-search