Files
GenAIExamples/comps/knowledgegraphs/langchain/knowledge_graph.py
XinyaoWa 4c0afd05a7 Add knowledge graph components (#171)
* enable ragas (#129)

Signed-off-by: XuhuiRen <xuhui.ren@intel.com>
Signed-off-by: Xinyao Wang <xinyao.wang@intel.com>

* Fix RAG performance issues (#132)

* Fix RAG performance issues

Signed-off-by: lvliang-intel <liang1.lv@intel.com>
Signed-off-by: Xinyao Wang <xinyao.wang@intel.com>

* add microservice level perf statistics (#135)

* add statistics

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Signed-off-by: Xinyao Wang <xinyao.wang@intel.com>

* Add More Contents to the Table of MicroService (#141)

* Add More Contents to the Table MicroService

Signed-off-by: zehao-intel <zehao.huang@intel.com>

* reorder

Signed-off-by: zehao-intel <zehao.huang@intel.com>

* Update README.md

* refine structure

Signed-off-by: zehao-intel <zehao.huang@intel.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* fix model

Signed-off-by: zehao-intel <zehao.huang@intel.com>

* refine table

Signed-off-by: zehao-intel <zehao.huang@intel.com>

* put llm to the ground

Signed-off-by: zehao-intel <zehao.huang@intel.com>

---------

Signed-off-by: zehao-intel <zehao.huang@intel.com>
Co-authored-by: Sihan Chen <39623753+Spycsh@users.noreply.github.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Signed-off-by: Xinyao Wang <xinyao.wang@intel.com>

* Use common security content for OPEA projects (#151)

* add python coverage

Signed-off-by: chensuyue <suyue.chen@intel.com>

* docs update

Signed-off-by: chensuyue <suyue.chen@intel.com>

* Revert "add python coverage"

This reverts commit 69615b16c8e7483f9fea742d1d3fa0707075a394.

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Signed-off-by: chensuyue <suyue.chen@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Signed-off-by: Xinyao Wang <xinyao.wang@intel.com>

* Enable vLLM Gaudi support for LLM service based on the official Habana vLLM release (#137)

Signed-off-by: tianyil1 <tianyi.liu@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Signed-off-by: Xinyao Wang <xinyao.wang@intel.com>

* add knowledge graph

Signed-off-by: Xinyao Wang <xinyao.wang@intel.com>

* knowledge graph microservice update

Signed-off-by: Xinyao Wang <xinyao.wang@intel.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Support Dataprep Microservice with Llama Index (#154)

* move file to langchain folder

Signed-off-by: letonghan <letong.han@intel.com>

* support dataprep with llama_index

Signed-off-by: letonghan <letong.han@intel.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* add e2e test script

Signed-off-by: letonghan <letong.han@intel.com>

* update test script name

Signed-off-by: letonghan <letong.han@intel.com>

---------

Signed-off-by: letonghan <letong.han@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Signed-off-by: Xinyao Wang <xinyao.wang@intel.com>

* Support Embedding Microservice with Llama Index (#150)

* fix stream=false doesn't work issue

Signed-off-by: letonghan <letong.han@intel.com>

* support embedding comp with llama_index

Signed-off-by: letonghan <letong.han@intel.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* add test script for embedding llama_index

Signed-off-by: letonghan <letong.han@intel.com>

* remove conflict requirements

Signed-off-by: letonghan <letong.han@intel.com>

* update test script

Signed-off-by: letonghan <letong.han@intel.com>

* update

Signed-off-by: letonghan <letong.han@intel.com>

* update

Signed-off-by: letonghan <letong.han@intel.com>

* update

Signed-off-by: letonghan <letong.han@intel.com>

* fix ut issue

Signed-off-by: letonghan <letong.han@intel.com>

---------

Signed-off-by: letonghan <letong.han@intel.com>
Signed-off-by: zehao-intel <zehao.huang@intel.com>
Signed-off-by: chensuyue <suyue.chen@intel.com>
Signed-off-by: tianyil1 <tianyi.liu@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: zehao-intel <zehao.huang@intel.com>
Co-authored-by: Sihan Chen <39623753+Spycsh@users.noreply.github.com>
Co-authored-by: chen, suyue <suyue.chen@intel.com>
Co-authored-by: Tianyi Liu <tianyi.liu@intel.com>
Signed-off-by: Xinyao Wang <xinyao.wang@intel.com>

* Support Ollama microservice (#142)

* Add Ollama Support

Signed-off-by: lvliang-intel <liang1.lv@intel.com>
Signed-off-by: Xinyao Wang <xinyao.wang@intel.com>

* Fix dataprep microservice path issue (#163)

Signed-off-by: lvliang-intel <liang1.lv@intel.com>
Signed-off-by: Xinyao Wang <xinyao.wang@intel.com>

* update CI to support dataprep_redis path level change (#155)

Signed-off-by: chensuyue <suyue.chen@intel.com>
Signed-off-by: letonghan <letong.han@intel.com>
Signed-off-by: Xinyao Wang <xinyao.wang@intel.com>

* Add Gateway for Translation (#169)

* add translation gateway

Signed-off-by: zehao-intel <zehao.huang@intel.com>

* fix import

Signed-off-by: zehao-intel <zehao.huang@intel.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Signed-off-by: zehao-intel <zehao.huang@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Signed-off-by: Xinyao Wang <xinyao.wang@intel.com>

* Update LLM readme (#172)

* Update LLM readme

Signed-off-by: lvliang-intel <liang1.lv@intel.com>

* update readme

Signed-off-by: lvliang-intel <liang1.lv@intel.com>

* update tgi readme

Signed-off-by: lvliang-intel <liang1.lv@intel.com>

* rollback requirements.txt

Signed-off-by: lvliang-intel <liang1.lv@intel.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Signed-off-by: lvliang-intel <liang1.lv@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Signed-off-by: Xinyao Wang <xinyao.wang@intel.com>

* add milvus microservice (#158)

* add milvus microservice

Signed-off-by: jinjunzh <jasper.zhu@intel.com>

* fix the typo

Signed-off-by: jinjunzh <jasper.zhu@intel.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

Signed-off-by: jinjunzh <jasper.zhu@intel.com>

---------

Signed-off-by: chensuyue <suyue.chen@intel.com>
Signed-off-by: jinjunzh <jasper.zhu@intel.com>
Co-authored-by: chen, suyue <suyue.chen@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Signed-off-by: Xinyao Wang <xinyao.wang@intel.com>

* enable python coverage (#149)

Signed-off-by: Sun, Xuehao <xuehao.sun@intel.com>
Signed-off-by: chensuyue <suyue.chen@intel.com>
Signed-off-by: Xinyao Wang <xinyao.wang@intel.com>

* Add Ray version for multi file process (#119)

* add ray version document to redis

Signed-off-by: Chendi Xue <chendi.xue@intel.com>

* update test

Signed-off-by: Chendi Xue <chendi.xue@intel.com>

* Add test

Signed-off-by: Chendi Xue <chendi.xue@intel.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Add TIMEOUT in container environment and return status

Signed-off-by: Chendi Xue <chendi.xue@intel.com>

* rebase on new folder layout

Signed-off-by: Chendi Xue <chendi.xue@intel.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Signed-off-by: Chendi Xue <chendi.xue@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Signed-off-by: Xinyao Wang <xinyao.wang@intel.com>

* Add codecov (#178)

Signed-off-by: Sun, Xuehao <xuehao.sun@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Signed-off-by: Xinyao Wang <xinyao.wang@intel.com>

* Rename lm-eval folder to utils/lm-eval (#179)

Signed-off-by: changwangss <chang1.wang@intel.com>
Signed-off-by: Xinyao Wang <xinyao.wang@intel.com>

* Support rerank and retrieval of RAG OPT (#164)

* supported bce model for rerank.

Signed-off-by: Xinyu Ye <xinyu.ye@intel.com>

* change folder

Signed-off-by: Xinyu Ye <xinyu.ye@intel.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* change path in test file.

Signed-off-by: Xinyu Ye <xinyu.ye@intel.com>

---------

Signed-off-by: Xinyu Ye <xinyu.ye@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Signed-off-by: Xinyao Wang <xinyao.wang@intel.com>

* Support vLLM XFT LLM microservice (#174)

* Support vLLM XFT serving

Signed-off-by: lvliang-intel <liang1.lv@intel.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* fix access vllm issue

Signed-off-by: lvliang-intel <liang1.lv@intel.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* add permission for run.sh

Signed-off-by: lvliang-intel <liang1.lv@intel.com>

* add readme

Signed-off-by: lvliang-intel <liang1.lv@intel.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* fix proxy issue

Signed-off-by: lvliang-intel <liang1.lv@intel.com>

---------

Signed-off-by: lvliang-intel <liang1.lv@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Signed-off-by: Xinyao Wang <xinyao.wang@intel.com>

* Update Dataprep Microservice README (#173)

* update dataprep readme

Signed-off-by: letonghan <letong.han@intel.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Signed-off-by: letonghan <letong.han@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Signed-off-by: Xinyao Wang <xinyao.wang@intel.com>

* fixed milvus port conflict issues during deployment (#183)

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* fixed milvus port conflict issues during deployment

* align port for unified retrieval microservice

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Signed-off-by: Xinyao Wang <xinyao.wang@intel.com>

* remove

Signed-off-by: Xinyao Wang <xinyao.wang@intel.com>

* remove hard address

Signed-off-by: Xinyao Wang <xinyao.wang@intel.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* update readme

Signed-off-by: Xinyao Wang <xinyao.wang@intel.com>

* add example data and ingestion

Signed-off-by: Xinyao Wang <xinyao.wang@intel.com>

* fix typo

Signed-off-by: Xinyao Wang <xinyao.wang@intel.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Fix dataprep timeout issue (#203)

Signed-off-by: lvliang-intel <liang1.lv@intel.com>
Signed-off-by: Xinyao Wang <xinyao.wang@intel.com>

* Add a new embedding MosecEmbedding (#182)

* Add a new embedding MosecEmbedding.

Signed-off-by: Jincheng Miao <jincheng.miao@intel.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Signed-off-by: Jincheng Miao <jincheng.miao@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Signed-off-by: Xinyao Wang <xinyao.wang@intel.com>

* expand timeout for microservice test (#208)

Signed-off-by: chensuyue <suyue.chen@intel.com>
Signed-off-by: Xinyao Wang <xinyao.wang@intel.com>

* fix typo

Signed-off-by: Xinyao Wang <xinyao.wang@intel.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

Signed-off-by: Xinyao Wang <xinyao.wang@intel.com>

* fix requirement

Signed-off-by: Xinyao Wang <xinyao.wang@intel.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Signed-off-by: XuhuiRen <xuhui.ren@intel.com>
Signed-off-by: Xinyao Wang <xinyao.wang@intel.com>
Signed-off-by: lvliang-intel <liang1.lv@intel.com>
Signed-off-by: zehao-intel <zehao.huang@intel.com>
Signed-off-by: chensuyue <suyue.chen@intel.com>
Signed-off-by: tianyil1 <tianyi.liu@intel.com>
Signed-off-by: letonghan <letong.han@intel.com>
Signed-off-by: jinjunzh <jasper.zhu@intel.com>
Signed-off-by: Sun, Xuehao <xuehao.sun@intel.com>
Signed-off-by: Chendi Xue <chendi.xue@intel.com>
Signed-off-by: changwangss <chang1.wang@intel.com>
Signed-off-by: Xinyu Ye <xinyu.ye@intel.com>
Signed-off-by: Jincheng Miao <jincheng.miao@intel.com>
Co-authored-by: XuhuiRen <44249229+XuhuiRen@users.noreply.github.com>
Co-authored-by: lvliang-intel <liang1.lv@intel.com>
Co-authored-by: Sihan Chen <39623753+Spycsh@users.noreply.github.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: zehao-intel <zehao.huang@intel.com>
Co-authored-by: chen, suyue <suyue.chen@intel.com>
Co-authored-by: Tianyi Liu <tianyi.liu@intel.com>
Co-authored-by: Letong Han <106566639+letonghan@users.noreply.github.com>
Co-authored-by: jasperzhu <jasper.zhu@intel.com>
Co-authored-by: Chendi.Xue <chendi.xue@intel.com>
Co-authored-by: Sun, Xuehao <xuehao.sun@intel.com>
Co-authored-by: Wang, Chang <491521017@qq.com>
Co-authored-by: XinyuYe-Intel <xinyu.ye@intel.com>
Co-authored-by: Jincheng Miao <jincheng.miao@intel.com>
2024-06-20 17:16:01 +08:00

163 lines
5.7 KiB
Python
Executable File

# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

import os
import pathlib
import sys

cur_path = pathlib.Path(__file__).parent.resolve()
comps_path = os.path.join(cur_path, "../../../")
sys.path.append(comps_path)

import json

import requests
from langchain import hub
from langchain.agents import AgentExecutor, Tool, load_tools
from langchain.agents.format_scratchpad import format_log_to_str
from langchain.agents.output_parsers import ReActJsonSingleInputOutputParser
from langchain.chains import GraphCypherQAChain, RetrievalQA
from langchain.tools.render import render_text_description
from langchain_community.chat_models.huggingface import ChatHuggingFace
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.graphs import Neo4jGraph
from langchain_community.llms import HuggingFaceEndpoint
from langchain_community.vectorstores.neo4j_vector import Neo4jVector
from langsmith import traceable

from comps import GeneratedDoc, GraphDoc, ServiceType, opea_microservices, register_microservice


def get_retriever(input, neo4j_endpoint, neo4j_username, neo4j_password, llm):
    embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-mpnet-base-v2")
    vector_index = Neo4jVector.from_existing_graph(
        embeddings,
        url=neo4j_endpoint,
        username=neo4j_username,
        password=neo4j_password,
        index_name=input.rag_index_name,
        node_label=input.rag_node_label,
        text_node_properties=input.rag_text_node_properties,
        embedding_node_property=input.rag_embedding_node_property,
    )
    vector_qa = RetrievalQA.from_chain_type(llm=llm, chain_type="stuff", retriever=vector_index.as_retriever())
    return vector_qa


def get_cypherchain(graph, cypher_llm, qa_llm):
    graph.refresh_schema()
    cypher_chain = GraphCypherQAChain.from_llm(cypher_llm=cypher_llm, qa_llm=qa_llm, graph=graph, verbose=True)
    return cypher_chain


def get_agent(vector_qa, cypher_chain, llm_repo_id):
    # define two tools
    tools = [
        Tool(
            name="Tasks",
            func=vector_qa.invoke,
            description="""Useful when you need to answer questions about descriptions of tasks.
            Not useful for counting the number of tasks.
            Use full question as input.
            """,
        ),
        Tool(
            name="Graph",
            func=cypher_chain.invoke,
            description="""Useful when you need to answer questions about microservices,
            their dependencies or assigned people. Also useful for any sort of
            aggregation like counting the number of tasks, etc.
            Use full question as input.
            """,
        ),
    ]

    # setup ReAct style prompt
    prompt = hub.pull("hwchase17/react-json")
    prompt = prompt.partial(
        tools=render_text_description(tools),
        tool_names=", ".join([t.name for t in tools]),
    )

    # define chat model
    llm = HuggingFaceEndpoint(repo_id=llm_repo_id, max_new_tokens=512)
    chat_model = ChatHuggingFace(llm=llm)
    chat_model_with_stop = chat_model.bind(stop=["\nObservation"])

    # define agent
    agent = (
        {
            "input": lambda x: x["input"],
            "agent_scratchpad": lambda x: format_log_to_str(x["intermediate_steps"]),
        }
        | prompt
        | chat_model_with_stop
        | ReActJsonSingleInputOutputParser()
    )

    # instantiate AgentExecutor
    agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
    return agent_executor


@register_microservice(
    name="opea_service@knowledge_graph",
    endpoint="/v1/graphs",
    host="0.0.0.0",
    port=8060,
)
def graph_query(input: GraphDoc) -> GeneratedDoc:
    print(input)

    ## Connect to Neo4j
    neo4j_endpoint = os.getenv("NEO4J_ENDPOINT", "neo4j://localhost:7687")
    neo4j_username = os.getenv("NEO4J_USERNAME", "neo4j")
    neo4j_password = os.getenv("NEO4J_PASSWORD", "neo4j")
    graph = Neo4jGraph(url=neo4j_endpoint, username=neo4j_username, password=neo4j_password)

    ## keep for multiple tests, will remove later
    graph.query("MATCH (n) DETACH DELETE n")
    import_query = json.load(open("data/microservices.json", "r"))["query"]
    graph.query(import_query)

    ## get tool flag
    flag_agent = True if input.strtype == "query" else False
    flag_rag = True if input.strtype in ["query", "rag"] else False

    ## define LLM
    if flag_agent or flag_rag:
        llm_endpoint = os.getenv("LLM_ENDPOINT", "http://localhost:8080")
        llm = HuggingFaceEndpoint(
            endpoint_url=llm_endpoint,
            timeout=600,
            max_new_tokens=input.max_new_tokens,
        )

    ## define a retriever
    if flag_rag:
        vector_qa = get_retriever(input, neo4j_endpoint, neo4j_username, neo4j_password, llm)

    ## define an agent
    if flag_agent:
        llm_repo_id = os.getenv("AGENT_LLM", "HuggingFaceH4/zephyr-7b-beta")
        cypher_chain = get_cypherchain(graph, llm, llm)  # define a cypher generator
        agent_executor = get_agent(vector_qa, cypher_chain, llm_repo_id)

    ## process input query
    if input.strtype == "cypher":
        result_dicts = graph.query(input.text)
        result = ""
        for result_dict in result_dicts:
            for key in result_dict:
                result += str(key) + ": " + str(result_dict[key])
    elif input.strtype == "rag":
        result = vector_qa.invoke(input.text)["result"]
    elif input.strtype == "query":
        result = agent_executor.invoke({"input": input.text})["output"]
    else:
        result = "Please specify strtype as one of cypher, rag, query."

    return GeneratedDoc(text=result, prompt=input.text)


if __name__ == "__main__":
    opea_microservices["opea_service@knowledge_graph"].start()
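

# Example client request: a minimal sketch, assuming the service runs locally on the
# default port 8060 configured above, that the remaining GraphDoc fields have usable
# defaults, and that the example Cypher statement matches your graph contents.
# The endpoint accepts one of three strtype values: "cypher", "rag", or "query".
#
#   import requests
#
#   payload = {
#       "text": "MATCH (n) RETURN count(n)",  # hypothetical Cypher statement for strtype="cypher"
#       "strtype": "cypher",
#   }
#   resp = requests.post("http://localhost:8060/v1/graphs", json=payload)
#   print(resp.json())  # GeneratedDoc fields: "text" (result) and "prompt" (the input text)
#
# For strtype="rag" or strtype="query", "text" should be a natural-language question;
# the service then routes it through the RetrievalQA chain or the ReAct agent defined above.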