
feat: COG-1523 add top_k in run_question_answering #625

Merged
lxobr merged 5 commits into dev from feat/COG-1523-top-k-in-eval on Mar 10, 2025

Conversation

lxobr (Collaborator) commented Mar 10, 2025

Description

  • Expose top_k as an optional argument of run_question_answering
  • Update the retrievers to handle the new parameter (see the sketch below)
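
For illustration, here is a minimal, self-contained sketch of the flow this PR introduces; apart from top_k, the names and signatures below are simplified assumptions rather than the module's exact code.

import asyncio
from typing import List, Optional

class CompletionRetriever:
    # Simplified stand-in for cognee's retriever classes.
    def __init__(self, top_k: Optional[int] = 1):
        # Fall back to this retriever's own default when None is forwarded.
        self.top_k = top_k if top_k is not None else 1

    async def get_context(self, query: str) -> List[str]:
        # Placeholder for a real vector-store search.
        return [f"chunk {i} for {query!r}" for i in range(self.top_k)]

async def run_question_answering(
    questions: List[str],
    top_k: Optional[int] = None,  # the new optional argument
) -> List[List[str]]:
    retriever = CompletionRetriever(top_k=top_k)
    return [await retriever.get_context(q) for q in questions]

print(asyncio.run(run_question_answering(["What is cognee?"], top_k=3)))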

DCO Affirmation

I affirm that all code in every commit of this pull request conforms to the terms of the Topoteretes Developer Certificate of Origin

Summary by CodeRabbit

  • New Features
    • Answer generation and document retrieval now accept an optional parameter that specifies the number of top results to return, adding flexibility to how question responses and their supporting context are retrieved.

lxobr self-assigned this on Mar 10, 2025
lxobr requested a review from hajdul88 on Mar 10, 2025 at 09:41

gitguardian bot commented Mar 10, 2025

⚠️ GitGuardian has uncovered 1 secret following the scan of your pull request.

Please consider investigating the findings and remediating the incidents. Failure to do so may lead to compromising the associated services or software components.

🔎 Detected hardcoded secret in your pull request

  • GitGuardian id: 9573981
  • Status: Triggered
  • Secret type: Generic Password
  • Commit: 17e46f1
  • File: helm/docker-compose-helm.yml
🛠 Guidelines to remediate hardcoded secrets
  1. Understand the implications of revoking this secret by investigating where it is used in your code.
  2. Replace the secret and store it safely, following best practices (see the sketch after this list).
  3. Revoke and rotate this secret.
  4. If possible, rewrite git history. Rewriting git history is not a trivial act: you might completely break other contributors' workflows, and you risk accidentally deleting legitimate data.
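
As a hypothetical illustration of step 2, a hardcoded password in a compose file such as helm/docker-compose-helm.yml can be replaced with an environment-variable reference; the service and variable names below are assumptions, not the repository's actual contents.

services:
  db:
    image: postgres:16
    environment:
      # Before: POSTGRES_PASSWORD: "hardcoded-value"  (the kind of literal GitGuardian flags)
      # After: resolved at deploy time from the shell or an untracked .env file
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:?must be set in the environment}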


coderabbitai bot (Contributor) commented Mar 10, 2025

Walkthrough

The changes update several modules to support an optional top_k parameter. In the answer generation module, the function signature has been modified to pass the parameter to a retriever. Similarly, three retrieval modules now accept a top_k parameter—either adding it or updating its type—to control the number of document chunks retrieved. Default values vary among the modules. Additionally, an import for Optional has been added where necessary.

Changes

  • cognee/eval_framework/answer_generation/.../run_question_answering_module.py: added an optional top_k: Optional[int] = None parameter to the run_question_answering function and imported Optional from typing.
  • cognee/modules/retrieval/completion_retriever.py: introduced an optional top_k parameter (default 1) in the constructor and updated get_context to use top_k when determining the number of document chunks.
  • cognee/modules/retrieval/graph_completion_retriever.py and cognee/modules/retrieval/graph_summary_completion_retriever.py: changed the top_k parameter type from int to Optional[int] in the constructors; GraphCompletionRetriever now falls back to 5 when top_k is None.
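
Taken together, those summaries describe the following constructor pattern (a sketch reconstructed from the table; everything beyond the top_k handling is omitted or assumed):

from typing import Optional

class CompletionRetriever:
    def __init__(self, top_k: Optional[int] = 1):
        # Default of 1, per the change summary.
        self.top_k = top_k if top_k is not None else 1

class GraphCompletionRetriever:
    def __init__(self, top_k: Optional[int] = None):
        # Conditional assignment: fall back to 5 when top_k is None.
        self.top_k = top_k if top_k is not None else 5

class GraphSummaryCompletionRetriever(GraphCompletionRetriever):
    def __init__(self, top_k: Optional[int] = None):
        super().__init__(top_k=top_k)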

Sequence Diagram(s)

sequenceDiagram
    participant U as User
    participant QA as run_question_answering
    participant R as Retriever
    U->>QA: Submit question (optionally with top_k)
    QA->>R: Call retriever with system_prompt and top_k
    R-->>QA: Return top_k document chunks
    QA->>U: Provide answer based on retrieved context

Poem

I'm hopping through code with a skip and a twirl,
Adding top_k to make our logic unfurl.
From answer modules to retrieval's bright art,
Each change beats like a rhythmic heart.
Carrots and code, we celebrate in pure glee,
A rabbit's joy in every line you see!
🥕🐇 Happy coding, from me to thee!



coderabbitai bot (Contributor) left a comment

Actionable comments posted: 2

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 56427f2 and 17e46f1.

📒 Files selected for processing (4)
  • cognee/eval_framework/answer_generation/run_question_answering_module.py (3 hunks)
  • cognee/modules/retrieval/completion_retriever.py (1 hunks)
  • cognee/modules/retrieval/graph_completion_retriever.py (1 hunks)
  • cognee/modules/retrieval/graph_summary_completion_retriever.py (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (2)
  • GitHub Check: lint (ubuntu-latest, 3.10.x)
  • GitHub Check: docker-compose-test
🔇 Additional comments (7)
cognee/modules/retrieval/graph_completion_retriever.py (2)

18-18: Parameter type modification looks good

The change from int to Optional[int] for the top_k parameter adds flexibility by allowing callers to pass a None value.


23-23: Proper fallback handling for None values

Good implementation of the fallback logic to maintain the default behavior when None is passed.

cognee/modules/retrieval/graph_summary_completion_retriever.py (1)

15-15: Consistent parameter type update

The change to Optional[int] aligns with the parent class implementation and maintains consistency across retrievers.

cognee/modules/retrieval/completion_retriever.py (1)

16-16: Parameter addition looks good

Adding the top_k parameter with a default value of 1 is consistent with the PR objectives, and the proper fallback logic is implemented.

Also applies to: 21-21

cognee/eval_framework/answer_generation/run_question_answering_module.py (3)

3-3: Import added correctly

The addition of Optional from the typing module is necessary for the new parameter type hint.


35-35: Parameter addition aligns with PR objectives

The addition of the top_k parameter with a default of None is consistent with the PR objectives and allows flexibility for users.


51-53: Passing top_k to retrievers

The top_k parameter is correctly passed to the retriever, which aligns with the PR objectives.

coderabbitai bot (Contributor) left a comment

Actionable comments posted: 0

🔭 Outside diff range comments (1)
cognee/modules/retrieval/completion_retriever.py (1)

31-42: 💡 Verification agent

🧩 Analysis chain

Consider updating the get_completion method to handle multiple context items.

The get_completion method doesn't seem to be updated to handle the case where multiple context items are returned from get_context when top_k > 1. This could lead to issues if the context type changes from a string to a list.

You might need to update the get_completion method to handle the different context structure when multiple items are returned:


🏁 Script executed:

#!/bin/bash
# Check if generate_completion can handle a list of contexts
rg -A 5 -B 5 "generate_completion" --glob "*.py" | grep -B 3 -A 3 "context"

Length of output: 4824


Action Required: Update Context Handling in get_completion

The current implementation in cognee/modules/retrieval/completion_retriever.py always passes the value returned by get_context(query) directly to generate_completion. However, when top_k > 1, get_context may return multiple context items (i.e. a list) while generate_completion in cognee/modules/retrieval/utils/completion.py expects a string. This mismatch could lead to runtime issues if multiple context items are returned.

  • Issue Location: cognee/modules/retrieval/completion_retriever.py (lines 31-42)
  • Recommendation:
    • Detect if context is a list after calling await self.get_context(query).
    • Aggregate or convert the list into a single string (e.g., using " ".join(context)) before passing it to generate_completion.
    • Alternatively, if separate completions are desired per context item, consider iterating over the list and handling the responses accordingly.

Please update the method to ensure compatibility with both single and multiple context items.
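
A minimal sketch of the aggregation recommended above; normalize_context is a hypothetical helper, not an existing function in the codebase.

from typing import List, Union

def normalize_context(context: Union[str, List[str]]) -> str:
    # Collapse a top_k > 1 context list into the single string
    # that generate_completion expects.
    if isinstance(context, list):
        return " ".join(context)
    return context

assert normalize_context(["chunk a", "chunk b"]) == "chunk a chunk b"
assert normalize_context("chunk a") == "chunk a"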

♻️ Duplicate comments (1)
cognee/modules/retrieval/completion_retriever.py (1)

26-29: ⚠️ Potential issue

Inconsistent handling of multiple results with top_k parameter.

The method now retrieves multiple chunks based on self.top_k, but still only returns the first chunk's text regardless of how many are retrieved. This doesn't fully implement the behavior expected from a top_k parameter.

Apply this fix to handle multiple chunks consistently:

async def get_context(self, query: str) -> Any:
    """Retrieves relevant document chunks as context."""
    vector_engine = get_vector_engine()
    found_chunks = await vector_engine.search("DocumentChunk_text", query, limit=self.top_k)
    if len(found_chunks) == 0:
        raise NoRelevantDataFound
-   return found_chunks[0].payload["text"]
+   if self.top_k == 1:
+       return found_chunks[0].payload["text"]
+   else:
+       return [chunk.payload["text"] for chunk in found_chunks]
🧹 Nitpick comments (1)
cognee/modules/retrieval/completion_retriever.py (1)

21-21: Consider removing redundant default value check.

The top_k parameter already has a default value of 1 in the method signature. This assignment adds redundancy unless there's a specific reason to have this double-check.

- self.top_k = top_k if top_k is not None else 1
+ self.top_k = top_k
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 17e46f1 and 466b94b.

📒 Files selected for processing (1)
  • cognee/modules/retrieval/completion_retriever.py (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (30)
  • GitHub Check: Test on macos-15
  • GitHub Check: run_multimedia_example_test / test
  • GitHub Check: run_networkx_metrics_test / test
  • GitHub Check: Test on macos-15
  • GitHub Check: run_notebook_test / test
  • GitHub Check: Test on macos-15
  • GitHub Check: run_dynamic_steps_example_test / test
  • GitHub Check: Test on macos-13
  • GitHub Check: run_simple_example_test / test
  • GitHub Check: run_notebook_test / test
  • GitHub Check: run_notebook_test / test
  • GitHub Check: run_eval_framework_test / test
  • GitHub Check: Test on macos-13
  • GitHub Check: test
  • GitHub Check: Test on macos-13
  • GitHub Check: windows-latest
  • GitHub Check: run_notebook_test / test
  • GitHub Check: Test on ubuntu-22.04
  • GitHub Check: test
  • GitHub Check: test
  • GitHub Check: test
  • GitHub Check: Test on ubuntu-22.04
  • GitHub Check: Test on ubuntu-22.04
  • GitHub Check: Test on ubuntu-22.04
  • GitHub Check: test
  • GitHub Check: test
  • GitHub Check: run_simple_example_test
  • GitHub Check: docker-compose-test
  • GitHub Check: test
  • GitHub Check: Build Cognee Backend Docker App Image
🔇 Additional comments (1)
cognee/modules/retrieval/completion_retriever.py (1)

16-16: Good addition of the top_k parameter.

Adding the optional top_k parameter to the CompletionRetriever class aligns well with the PR objectives of enhancing the question answering functionality.

lxobr merged commit ac01565 into dev on Mar 10, 2025
36 of 38 checks passed
lxobr deleted the feat/COG-1523-top-k-in-eval branch on March 10, 2025 at 09:55