Test: Parse context pieces separately in MusiqueQAAdapter and adjust tests [cog-1234] #561
Conversation
Walkthrough
The changes update the test cases in the benchmark adapters test file and modify the MusiqueQAAdapter so that each paragraph is parsed as a separate context piece.
Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant C as Caller
    participant A as MusiqueQAAdapter
    participant D as Dataset
    C->>A: Call load_corpus(limit)
    A->>D: Load dataset items
    loop For each item in dataset
        A->>A: Iterate over paragraphs
        A->>A: Append each paragraph text individually to corpus_list
    end
    A->>C: Return corpus_list and qa_pairs
```
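In code, the loop shown in the diagram might look like the following minimal sketch. This is an illustration only: the function name and the `paragraphs`/`paragraph_text` field names are assumptions based on the Musique JSONL layout, not code taken from the PR.

```python
import random
from typing import Any, Optional


def load_corpus_sketch(
    dataset: list[dict[str, Any]], limit: Optional[int] = None, seed: int = 42
) -> tuple[list[str], list[dict[str, Any]]]:
    """Sketch of the new behavior: one corpus entry per paragraph."""
    if limit is not None:
        random.seed(seed)
        dataset = random.sample(dataset, min(limit, len(dataset)))

    corpus_list: list[str] = []
    qa_pairs: list[dict[str, Any]] = []
    for item in dataset:
        # Append each paragraph text individually, instead of
        # concatenating all paragraphs of an item into one string.
        for paragraph in item["paragraphs"]:
            corpus_list.append(paragraph["paragraph_text"])
        qa_pairs.append({"question": item["question"], "answer": item["answer"]})
    return corpus_list, qa_pairs
```

With this shape, `len(corpus_list)` can legitimately exceed `len(qa_pairs)`, which is why the length-equality assertions were dropped from the tests.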
912fd88 to ef83afc
Actionable comments posted: 0
🧹 Nitpick comments (1)
evals/eval_framework/benchmark_adapters/musique_adapter.py (1)
Lines 67-68: LGTM! Consider documenting the change in behavior.
The change to process each paragraph separately aligns with the PR objectives and provides more granular context pieces. This is a good improvement, as it allows for more flexible processing downstream.
Consider updating the docstring to clarify that corpus_list now contains individual paragraphs rather than concatenated texts. Apply this diff:

```diff
 def load_corpus(
     self,
     limit: Optional[int] = None,
     seed: int = 42,
     auto_download: bool = True,
 ) -> tuple[list[str], list[dict[str, Any]]]:
     """
     Loads the Musique QA dataset.
     :param limit: If set, randomly sample 'limit' items.
     :param seed: Random seed for sampling.
     :param auto_download: If True, attempt to download + unzip the dataset
         from Google Drive if the .jsonl file is not present locally.
+    :return: A tuple containing:
+        - corpus_list: List of paragraph texts, where each paragraph is a separate entry
+        - question_answer_pairs: List of dictionaries containing question-answer pairs
     """
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- cognee/tests/unit/eval_framework/benchmark_adapters_test.py (1 hunks)
- evals/eval_framework/benchmark_adapters/musique_adapter.py (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (27)
- GitHub Check: run_notebook_test / test
- GitHub Check: run_multimedia_example_test / test
- GitHub Check: run_simple_example_test / test
- GitHub Check: run_notebook_test / test
- GitHub Check: run_notebook_test / test
- GitHub Check: run_eval_framework_test / test
- GitHub Check: Test on macos-15
- GitHub Check: Test on macos-15
- GitHub Check: run_dynamic_steps_example_test / test
- GitHub Check: run_notebook_test / test
- GitHub Check: Test on macos-15
- GitHub Check: Test on macos-13
- GitHub Check: Test on macos-13
- GitHub Check: Test on ubuntu-22.04
- GitHub Check: Test on ubuntu-22.04
- GitHub Check: test
- GitHub Check: Test on macos-13
- GitHub Check: test
- GitHub Check: test
- GitHub Check: Test on ubuntu-22.04
- GitHub Check: test
- GitHub Check: test
- GitHub Check: test
- GitHub Check: test
- GitHub Check: windows-latest
- GitHub Check: Build Cognee Backend Docker App Image
- GitHub Check: docker-compose-test
🔇 Additional comments (1)
cognee/tests/unit/eval_framework/benchmark_adapters_test.py (1)
Line 67: LGTM! The test changes align with the PR objectives.
The removal of assertions about corpus_list length makes sense since we're now parsing context pieces separately in MusiqueQAAdapter, which means corpus_list can have a different length than qa_pairs. The remaining assertions still ensure the core functionality and data structure requirements are met.
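A hypothetical helper capturing those remaining assertions might look like this; the function and field names are illustrative, not reproduced from the actual test file.

```python
def check_adapter_output(
    corpus_list: list[str], qa_pairs: list[dict], limit: int
) -> None:
    """Illustrative assertions: the limit constrains qa_pairs only,
    while corpus_list may be any length (one entry per paragraph)."""
    assert len(qa_pairs) <= limit
    for pair in qa_pairs:
        assert "question" in pair and "answer" in pair
    assert all(isinstance(text, str) for text in corpus_list)
```

Note that no relationship between `len(corpus_list)` and `len(qa_pairs)` is asserted, matching the change described above.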
Actionable comments posted: 0
🔭 Outside diff range comments (2)
evals/eval_framework/benchmark_adapters/twowikimultihop_adapter.py (1)
Lines 20-29: 🛠️ Refactor suggestion: Add error handling for file operations and HTTP requests.
The current implementation could fail in several scenarios:
- File operations might fail due to permissions or disk space
- HTTP request might fail (only status code is checked)
- JSON parsing might fail due to malformed data
Consider wrapping the operations in try-except blocks:
```diff
 if os.path.exists(filename):
-    with open(filename, "r", encoding="utf-8") as f:
-        corpus_json = json.load(f)
+    try:
+        with open(filename, "r", encoding="utf-8") as f:
+            corpus_json = json.load(f)
+    except (IOError, json.JSONDecodeError) as e:
+        raise RuntimeError(f"Failed to load corpus from {filename}: {str(e)}")
 else:
-    response = requests.get(self.dataset_info["URL"])
-    response.raise_for_status()
-    corpus_json = response.json()
+    try:
+        response = requests.get(self.dataset_info["URL"], timeout=30)
+        response.raise_for_status()
+        corpus_json = response.json()
+    except (requests.RequestException, json.JSONDecodeError) as e:
+        raise RuntimeError(f"Failed to fetch corpus from {self.dataset_info['URL']}: {str(e)}")
-    with open(filename, "w", encoding="utf-8") as f:
-        json.dump(corpus_json, f, ensure_ascii=False, indent=4)
+    try:
+        with open(filename, "w", encoding="utf-8") as f:
+            json.dump(corpus_json, f, ensure_ascii=False, indent=4)
+    except IOError as e:
+        raise RuntimeError(f"Failed to save corpus to {filename}: {str(e)}")
```

evals/eval_framework/benchmark_adapters/hotpot_qa_adapter.py (1)
Lines 22-31: 🛠️ Refactor suggestion: Add error handling for file operations and HTTP requests.
The current implementation could fail in several scenarios:
- File operations might fail due to permissions or disk space
- HTTP request might fail (only status code is checked)
- JSON parsing might fail due to malformed data
Consider wrapping the operations in try-except blocks:
```diff
 if os.path.exists(filename):
-    with open(filename, "r", encoding="utf-8") as f:
-        corpus_json = json.load(f)
+    try:
+        with open(filename, "r", encoding="utf-8") as f:
+            corpus_json = json.load(f)
+    except (IOError, json.JSONDecodeError) as e:
+        raise RuntimeError(f"Failed to load corpus from {filename}: {str(e)}")
 else:
-    response = requests.get(self.dataset_info["url"])
-    response.raise_for_status()
-    corpus_json = response.json()
+    try:
+        response = requests.get(self.dataset_info["url"], timeout=30)
+        response.raise_for_status()
+        corpus_json = response.json()
+    except (requests.RequestException, json.JSONDecodeError) as e:
+        raise RuntimeError(f"Failed to fetch corpus from {self.dataset_info['url']}: {str(e)}")
-    with open(filename, "w", encoding="utf-8") as f:
-        json.dump(corpus_json, f, ensure_ascii=False, indent=4)
+    try:
+        with open(filename, "w", encoding="utf-8") as f:
+            json.dump(corpus_json, f, ensure_ascii=False, indent=4)
+    except IOError as e:
+        raise RuntimeError(f"Failed to save corpus to {filename}: {str(e)}")
```
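For illustration, the suggested pattern can be extracted into a standalone helper. The function name, the lazy `requests` import, and the 30-second timeout are choices made here for the sketch, not part of the reviewed code.

```python
import json
import os


def load_or_fetch_json(filename: str, url: str, timeout: int = 30) -> dict:
    """Load cached JSON from disk, or fetch it from `url` and cache it,
    converting low-level failures into explicit RuntimeErrors."""
    if os.path.exists(filename):
        try:
            with open(filename, "r", encoding="utf-8") as f:
                return json.load(f)
        except (IOError, json.JSONDecodeError) as e:
            raise RuntimeError(f"Failed to load corpus from {filename}: {e}")

    # Third-party dependency; imported lazily so the cached path
    # works even without requests installed.
    import requests

    try:
        response = requests.get(url, timeout=timeout)
        response.raise_for_status()
        corpus_json = response.json()
    except (requests.RequestException, json.JSONDecodeError) as e:
        raise RuntimeError(f"Failed to fetch corpus from {url}: {e}")
    try:
        with open(filename, "w", encoding="utf-8") as f:
            json.dump(corpus_json, f, ensure_ascii=False, indent=4)
    except IOError as e:
        raise RuntimeError(f"Failed to save corpus to {filename}: {e}")
    return corpus_json
```

Callers then only need to handle one exception type, rather than the mix of `IOError`, `JSONDecodeError`, and `RequestException` raised by the underlying operations.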
🧹 Nitpick comments (1)
evals/eval_framework/benchmark_adapters/hotpot_qa_adapter.py (1)
Lines 10-15: Address TODO comments in dataset_info.
The dataset_info dictionary contains TODO comments about deleting files after changing URLs. These comments should be addressed.
Would you like me to help create an issue to track the cleanup of these TODO comments?
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- evals/eval_framework/benchmark_adapters/dummy_adapter.py (1 hunks)
- evals/eval_framework/benchmark_adapters/hotpot_qa_adapter.py (2 hunks)
- evals/eval_framework/benchmark_adapters/musique_adapter.py (2 hunks)
- evals/eval_framework/benchmark_adapters/twowikimultihop_adapter.py (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- evals/eval_framework/benchmark_adapters/musique_adapter.py
⏰ Context from checks skipped due to timeout of 90000ms (28)
- GitHub Check: Test on macos-15
- GitHub Check: Test on macos-13
- GitHub Check: Test on ubuntu-22.04
- GitHub Check: Test on macos-15
- GitHub Check: Test on macos-13
- GitHub Check: Test on ubuntu-22.04
- GitHub Check: windows-latest
- GitHub Check: run_networkx_metrics_test / test
- GitHub Check: run_multimedia_example_test / test
- GitHub Check: Test on macos-15
- GitHub Check: run_simple_example_test / test
- GitHub Check: run_notebook_test / test
- GitHub Check: run_notebook_test / test
- GitHub Check: run_dynamic_steps_example_test / test
- GitHub Check: run_notebook_test / test
- GitHub Check: run_notebook_test / test
- GitHub Check: Test on macos-13
- GitHub Check: run_eval_framework_test / test
- GitHub Check: test
- GitHub Check: Test on ubuntu-22.04
- GitHub Check: test
- GitHub Check: test
- GitHub Check: test
- GitHub Check: test
- GitHub Check: test
- GitHub Check: test
- GitHub Check: Build Cognee Backend Docker App Image
- GitHub Check: docker-compose-test
🔇 Additional comments (3)
evals/eval_framework/benchmark_adapters/dummy_adapter.py (1)
Line 1: LGTM! Type annotations simplified without affecting functionality.
The changes simplify the type annotations by using just str instead of Union[LiteralString, str], which is appropriate since the implementation only returns regular strings.
Also applies to: 9-9
evals/eval_framework/benchmark_adapters/twowikimultihop_adapter.py (1)
Line 5: LGTM! Type annotations simplified without affecting functionality.
The changes simplify the type annotations by using just str instead of Union[LiteralString, str], which is appropriate since the implementation only returns regular strings.
Also applies to: 17-17
evals/eval_framework/benchmark_adapters/hotpot_qa_adapter.py (1)
Line 5: LGTM! Type annotations simplified without affecting functionality.
The changes simplify the type annotations by using just str instead of Union[LiteralString, str], which is appropriate since the implementation only returns regular strings.
Also applies to: 19-19
```diff
@@ -47,7 +47,7 @@ jobs:
         installer-parallel: true
 
       - name: Install dependencies
-        run: poetry install --no-interaction -E docs
+        run: poetry install --no-interaction -E docs -E evals
```
Why do we install evals?
Because we need the gdown library for testing the Musique adapter.
Do we test it in the regular cognify pipeline?
Tested it. I think the article handling is correct now in Musique.
Description
DCO Affirmation
I affirm that all code in every commit of this pull request conforms to the terms of the Topoteretes Developer Certificate of Origin
Summary by CodeRabbit

Tests
- Removed assertions comparing the lengths of corpus_list and qa_pairs, now focusing solely on qa_pairs limits.

Refactor
- Each paragraph is now appended individually to corpus_list, enhancing clarity in data structure.
- Simplified type annotations in the load_corpus method across multiple adapters, ensuring consistency in return types.

Chores