models
This module defines the lightweight dataclasses returned by the evaluation pipeline.
ConfusionCounts
Fields:
- true_positive_count
- false_positive_count
- true_negative_count
- false_negative_count
Used internally by the metric helpers.
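A minimal sketch of what ConfusionCounts might look like, assuming a plain dataclass with the four documented integer fields. The precision helper is hypothetical, added here only to show how the metric helpers could consume the counts; it is not part of the documented API.

```python
from dataclasses import dataclass


@dataclass
class ConfusionCounts:
    """Container of basic confusion counts (illustrative sketch)."""

    true_positive_count: int
    false_positive_count: int
    true_negative_count: int
    false_negative_count: int

    # Hypothetical helper: precision derived from the stored counts.
    def precision(self) -> float:
        denominator = self.true_positive_count + self.false_positive_count
        return self.true_positive_count / denominator if denominator else 0.0


counts = ConfusionCounts(
    true_positive_count=8,
    false_positive_count=2,
    true_negative_count=5,
    false_negative_count=1,
)
print(counts.precision())  # 8 / (8 + 2)
```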
ResultBundle
Fields:
- per_feature_metrics_data_frame
- total_metrics_data_frame
- row_accuracy_value
- entity_detection_summary=None
- matched_pairs_data_frame=None
Notes:
- Returned by evaluate().
- entity_detection_summary is normally populated only for MULTI_ENTITY runs.
- matched_pairs_data_frame currently contains pair indices and similarity scores for multi-entity runs.
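A sketch of how ResultBundle might be declared, assuming a dataclass whose two summary fields default to None, as the field list above indicates. The concrete types of the data-frame fields are not stated in this page, so they are typed loosely here; in practice they would likely be pandas DataFrames.

```python
from dataclasses import dataclass
from typing import Any, Optional


@dataclass
class ResultBundle:
    """Container for test results and artifacts (illustrative sketch)."""

    per_feature_metrics_data_frame: Any  # e.g. a pandas DataFrame per feature
    total_metrics_data_frame: Any        # aggregate metrics
    row_accuracy_value: float
    # Normally populated only for MULTI_ENTITY runs:
    entity_detection_summary: Optional[Any] = None
    # Pair indices and similarity scores for multi-entity runs:
    matched_pairs_data_frame: Optional[Any] = None


# A single-entity run would leave the two optional fields at None:
bundle = ResultBundle(
    per_feature_metrics_data_frame={"name": 0.95},
    total_metrics_data_frame={"overall": 0.93},
    row_accuracy_value=0.91,
)
print(bundle.entity_detection_summary is None)  # True for single-entity runs
```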
RunContext
Fields:
- run_identifier
- started_at_timestamp
- configuration_hash
Used mainly for logging and run traceability.
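A sketch of how RunContext might be constructed for traceability, assuming string/timestamp field types; how the real pipeline derives configuration_hash is not documented here, so the SHA-256-over-sorted-JSON scheme below is purely illustrative.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
import hashlib
import json


@dataclass(frozen=True)
class RunContext:
    """Contextual information for a single run (illustrative sketch)."""

    run_identifier: str
    started_at_timestamp: datetime
    configuration_hash: str


# Hypothetical construction: hash the run configuration deterministically
# so identical configs map to identical hashes in the logs.
config = {"mode": "MULTI_ENTITY", "similarity_threshold": 0.8}
context = RunContext(
    run_identifier="run-001",
    started_at_timestamp=datetime.now(timezone.utc),
    configuration_hash=hashlib.sha256(
        json.dumps(config, sort_keys=True).encode()
    ).hexdigest(),
)
print(len(context.configuration_hash))  # 64 hex digits for SHA-256
```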
Generated API details
ConfusionCounts (dataclass)
Container of basic confusion counts.
Source code in src/extraction_testing/models.py
ResultBundle (dataclass)
Container for test results and artifacts.
Source code in src/extraction_testing/models.py
RunContext (dataclass)
Contextual information for a single run.
Source code in src/extraction_testing/models.py