
[Bug]: TypeError: Network.show() got an unexpected keyword argument 'notebook' #16966

Closed
xucailiang opened this issue Nov 15, 2024 · 2 comments

Labels: bug (Something isn't working), triage (Issue needs to be triaged/prioritized)

Comments

xucailiang commented Nov 15, 2024

Bug Description

After building a workflow, I tried to visualize it with draw_all_possible_flows, and the call raised a TypeError.

Version

llama-index-core==0.11.23, pyvis==0.3.1, numpy==1.26.4, llama-index-agent-openai==0.3.4, python==3.11

Steps to Reproduce

The following is a minimal reproducible script:

import asyncio

from llama_index.core.workflow import Event
from llama_index.core.schema import NodeWithScore
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex
from llama_index.core.response_synthesizers import CompactAndRefine
from llama_index.core.postprocessor.llm_rerank import LLMRerank
from llama_index.core.workflow import (
    Context,
    Workflow,
    StartEvent,
    StopEvent,
    step,
)
from llama_index.llms.openai import OpenAI

class RetrieverEvent(Event):
    nodes: list[NodeWithScore]

class RerankEvent(Event):
    nodes: list[NodeWithScore]

class RAGWorkflow(Workflow):
    @step
    async def indexing(self, ctx: Context, ev: StartEvent) -> StopEvent | None:
        if not ev.get("query"):
            documents = SimpleDirectoryReader(input_files=['../../data/sales_tips1.txt']).load_data()
            index = VectorStoreIndex.from_documents(documents=documents)
            print("Indexing complete")
            return StopEvent(result=index)
        else:
            return None

    @step
    async def retrieve(
            self, ctx: Context, ev: StartEvent) -> RetrieverEvent | None:
        if not ev.get("query"):
            return None

        query = ev.get("query")
        index = ev.get("index")

        await ctx.set("query", query)
        retriever = index.as_retriever(similarity_top_k=2)
        nodes = await retriever.aretrieve(query)
        return RetrieverEvent(nodes=nodes)

    @step
    async def rerank(self, ctx: Context, ev: RetrieverEvent) -> RerankEvent:
        ranker = LLMRerank(
            choice_batch_size=5, top_n=3, llm=OpenAI(model="gpt-4o-mini")
        )
        new_nodes = ranker.postprocess_nodes(
            ev.nodes, query_str=await ctx.get("query", default=None)
        )
        return RerankEvent(nodes=new_nodes)

    @step
    async def generate(self, ctx: Context, ev: RerankEvent) -> StopEvent:
        llm = OpenAI(model="gpt-4o-mini")
        summarizer = CompactAndRefine(llm=llm, streaming=True, verbose=True)
        query = await ctx.get("query", default=None)

        response = await summarizer.asynthesize(query, nodes=ev.nodes)
        return StopEvent(result=response)

w = RAGWorkflow()

async def main():
    from llama_index.core.workflow import draw_all_possible_flows
    draw_all_possible_flows(
        workflow=RAGWorkflow, filename="RAG.html"
    )

if __name__ == '__main__':
    asyncio.run(main())

Relevant Logs/Tracebacks

Traceback (most recent call last):
  File "/Users/liang/data/Py_Project/testProject/t_llamaindex.py", line 95, in <module>
    asyncio.run(main())
  File "/Users/liang/anaconda3/envs/testProject/lib/python3.11/asyncio/runners.py", line 190, in run
    return runner.run(main)
           ^^^^^^^^^^^^^^^^
  File "/Users/liang/anaconda3/envs/testProject/lib/python3.11/asyncio/runners.py", line 118, in run
    return self._loop.run_until_complete(task)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/liang/anaconda3/envs/testProject/lib/python3.11/asyncio/base_events.py", line 654, in run_until_complete
    return future.result()
           ^^^^^^^^^^^^^^^
  File "/Users/liang/data/Py_Project/testProject/t_llamaindex.py", line 90, in main
    draw_all_possible_flows(
  File "/Users/liang/anaconda3/envs/testProject/lib/python3.11/site-packages/deprecated/classic.py", line 285, in wrapper_function
    return wrapped_(*args_, **kwargs_)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/liang/anaconda3/envs/testProject/lib/python3.11/site-packages/llama_index/core/workflow/drawing.py", line 71, in draw_all_possible_flows
    net.show(filename, notebook=notebook)
TypeError: Network.show() got an unexpected keyword argument 'notebook'
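This TypeError is Python's standard complaint when a caller passes a keyword that the callee's signature does not declare. A minimal illustration of the same failure mode, using a toy function rather than the real pyvis API:

```python
# Toy stand-in for a show() method whose signature lacks a `notebook`
# parameter, as in pyvis 0.3.1. Passing the keyword anyway raises the
# same class of error seen in the traceback above.
def show(name):
    return name

try:
    show("RAG.html", notebook=False)
except TypeError as e:
    print(e)  # show() got an unexpected keyword argument 'notebook'
```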
xucailiang added the bug (Something isn't working) and triage (Issue needs to be triaged/prioritized) labels on Nov 15, 2024
dosubot bot commented Nov 15, 2024

Hello, @xucailiang! I'm here to help you with any bugs, questions, or contributions you have. Let's squash those bugs together and make the code even better!

The error occurs because Network.show() in the pyvis version you have installed (0.3.1) does not accept a notebook keyword argument. The drawing code in llama-index-core 0.11.23 passes notebook=notebook to Network.show(), and it expects the pyvis version declared in its dependencies (^0.3.2), whose Network.show() does accept that parameter [1].
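One way to check which keywords a callable accepts, without reading the library source, is inspect.signature from the standard library. A sketch using stand-in functions (these are illustrative stand-ins, not the actual pyvis method signatures):

```python
import inspect

def accepts_kwarg(func, name: str) -> bool:
    """Return True if `func` can be called with keyword argument `name`."""
    params = inspect.signature(func).parameters
    if name in params:
        return True
    # A **kwargs parameter accepts any keyword.
    return any(p.kind is inspect.Parameter.VAR_KEYWORD for p in params.values())

def show_old(filename):  # stand-in for a 0.3.1-style show()
    pass

def show_new(filename, local=True, notebook=False):  # stand-in for a 0.3.2-style show()
    pass

print(accepts_kwarg(show_old, "notebook"))  # False
print(accepts_kwarg(show_new, "notebook"))  # True
```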

To resolve this issue, you should update your pyvis package to version 0.3.2 or later. You can do this by running the following command:

pip install pyvis==0.3.2

This should ensure compatibility and prevent the TypeError from occurring.
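To confirm whether an installed version predates the fix, a plain version-string comparison is enough here. A minimal sketch (the tuple comparison assumes simple X.Y.Z version strings with no pre-release tags):

```python
def needs_upgrade(installed: str, minimum: str = "0.3.2") -> bool:
    """Return True if `installed` is older than `minimum` (plain X.Y.Z only)."""
    def parse(version: str):
        # "0.3.1" -> (0, 3, 1); tuples compare component-by-component.
        return tuple(int(part) for part in version.split("."))
    return parse(installed) < parse(minimum)

print(needs_upgrade("0.3.1"))  # True: 0.3.1 predates the fix
print(needs_upgrade("0.3.2"))  # False
```

For anything beyond simple versions (pre-releases, post-releases), the packaging library's Version class is the robust choice.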


xucailiang (Author) commented

Yes, upgrading pyvis to version 0.3.2 solved the problem.
