BNMPy.result_evaluation
The result_evaluation module provides tools for evaluating optimization results.
- BNMPy.result_evaluation.get_pbn_rules_string(pbn) → str
Format the final PBN into a readable string.
- class BNMPy.result_evaluation.ResultEvaluator(optimizer_result, parameter_optimizer)
Bases: object
Evaluate optimization results by comparing simulation output with experimental data.
This class provides tools to assess the quality of optimized models by:
1. Simulating the optimized model on experimental conditions
2. Comparing simulation results with experimental measurements
3. Calculating correlation and other statistical metrics
4. Generating visualization plots
Methods

calculate_evaluation_metrics()
    Calculate evaluation metrics comparing simulation results with experimental data.
export_results_to_csv(save_path)
    Export detailed results to CSV for further analysis.
generate_evaluation_report([save_path, ...])
    Generate a comprehensive evaluation report.
plot_prediction_vs_experimental([save_path, ...])
    Create a scatter plot comparing predicted vs experimental values.
plot_residuals([save_path, ...])
    Create residual plots to assess model fit quality.
simulate_optimized_model()
    Simulate the optimized model on all experimental conditions.
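A minimal sketch of driving these methods directly; the setup mirrors the Basic Usage section below, and method names follow the table above:

import BNMPy

# Setup assumed from the Basic Usage examples below
pbn = BNMPy.load_pbn_from_file("network.txt")
optimizer = BNMPy.ParameterOptimizer(pbn, "experiments.csv")
result = optimizer.optimize(method='differential_evolution')

# Build the evaluator and call its methods one by one
evaluator = BNMPy.result_evaluation.ResultEvaluator(result, optimizer)
simulations = evaluator.simulate_optimized_model()   # Dict of simulated outputs
metrics = evaluator.calculate_evaluation_metrics()   # Dict of statistics
evaluator.export_results_to_csv("detailed_results.csv")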
- simulate_optimized_model() → Dict
Simulate the optimized model on all experimental conditions.
- calculate_evaluation_metrics() → Dict
Calculate evaluation metrics comparing simulation results with experimental data.
- plot_prediction_vs_experimental(save_path: str | None = None, show_confidence_interval: bool = False, show_experiment_ids: bool = False, figsize: Tuple[int, int] = (8, 6)) → Figure
Create a scatter plot comparing predicted vs experimental values.
- plot_residuals(save_path: str | None = None, show_experiment_ids: bool = False, figsize: Tuple[int, int] = (9, 4)) → Figure
Create residual plots to assess model fit quality.
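Both plotting methods return a Figure (per the signatures above); a brief sketch using their documented parameters, with the evaluator as constructed in the earlier sketch:

# Scatter plot with confidence band; file names here are illustrative
fig = evaluator.plot_prediction_vs_experimental(
    save_path="prediction_vs_experimental.png",
    show_confidence_interval=True,
    show_experiment_ids=False,
    figsize=(8, 6),
)
# Residual panels, using the method's default figure size
fig = evaluator.plot_residuals(save_path="residual_analysis.png", figsize=(9, 4))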
- BNMPy.result_evaluation.evaluate_optimization_result(optimizer_result, parameter_optimizer, output_dir: str = '.', plot_residuals: bool = True, save: bool = True, detailed: bool = False, figsize: Tuple[int, int] = (8, 6), show_confidence_interval: bool = False) → ResultEvaluator
Convenience function to perform a complete evaluation of optimization results.
This function generates and saves the following files in output_dir:
- prediction_vs_experimental.png: Scatter plot of predicted vs experimental values
- residual_analysis.png: Residual plots (if plot_residuals=True)
- evaluation_report.txt: Text report with metrics and optimization config
- detailed_results.csv: CSV file with all data points
- pbn.txt: Optimized PBN model in text format
- BNMPy.result_evaluation.evaluate_pbn(pbn, experiments, output_dir: str = '.', plot_residuals: bool = True, save: bool = True, detailed: bool = True, config: dict | None = None, Measured_formula: str | None = None, normalize: bool = False, figsize: Tuple[int, int] = (8, 6), show_confidence_interval: bool = False)
Evaluate a PBN directly against experiment data (list or CSV).
This function generates and saves the following files in output_dir:
- prediction_vs_experimental.png: Scatter plot of predicted vs experimental values
- residual_analysis.png: Residual plots (if plot_residuals=True)
- evaluation_report.txt: Text report with metrics
- detailed_results.csv: CSV file with all data points
- pbn.txt: PBN model in text format
Basic Usage
Evaluating Optimization Results
import BNMPy

# Load the PBN model to optimize
pbn = BNMPy.load_pbn_from_file("network.txt")

# Run optimization
optimizer = BNMPy.ParameterOptimizer(pbn, "experiments.csv")
result = optimizer.optimize(method='differential_evolution')
# Evaluate results with plots and report
evaluator = BNMPy.evaluate_optimization_result(
result,
optimizer,
output_dir="evaluation_results",
plot_residuals=True,
save=True,
detailed=True,
figsize=(8, 6)
)
Evaluating a PBN
import BNMPy
# Evaluate an existing PBN
pbn = BNMPy.load_pbn_from_file("network.txt")
exp_data = BNMPy.ExperimentData("experiments.csv")
results = BNMPy.evaluate_pbn(
pbn,
exp_data,
output_dir="pbn_evaluation",
config={'steady_state': {'method': 'monte_carlo'}}
)
print(f"MSE: {results['mse']:.4f}")
print(f"Correlation: {results['correlation']:.3f}")
Generated Plots
The evaluation functions generate several plots to assess model quality:
1. Prediction vs Experimental Plot
prediction_vs_experimental.png
Scatter plot comparing predicted vs experimental values:
- X-axis: Experimental values from CSV file
- Y-axis: Predicted values from the model
- Perfect prediction line: Red dashed line (y = x)
- Regression line: Green line showing linear relationship
- Confidence interval: Light green shaded area (95% confidence)
- Statistics: Correlation coefficient (r), p-value, and MSE (reproduced in the sketch below)
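These statistics can be reproduced from any pair of predicted/experimental vectors; a sketch with hypothetical values (numpy and scipy assumed available):

import numpy as np
from scipy import stats

# Hypothetical paired values standing in for measured and simulated outputs
experimental = np.array([0.10, 0.40, 0.35, 0.80])
predicted = np.array([0.12, 0.37, 0.40, 0.75])

r, p_value = stats.pearsonr(experimental, predicted)  # correlation and p-value
mse = np.mean((predicted - experimental) ** 2)        # mean squared error
print(f"r = {r:.3f}, p = {p_value:.2e}, MSE = {mse:.4f}")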
2. Residuals Plot
residual_analysis.png
Shows distribution of prediction errors:
- Left panel: Residuals vs Predicted values
  - Residuals = Predicted - Experimental
  - Horizontal red line at y = 0
- Right panel: Histogram of residuals
  - Distribution of prediction errors
  - Shows mean and standard deviation
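A sketch of how these two panels are derived, with hypothetical values; this mirrors the description above rather than the library's own plotting code (matplotlib assumed):

import numpy as np
import matplotlib.pyplot as plt

experimental = np.array([0.10, 0.40, 0.35, 0.80])
predicted = np.array([0.12, 0.37, 0.40, 0.75])
residuals = predicted - experimental  # convention from the list above

fig, (left, right) = plt.subplots(1, 2, figsize=(9, 4))
left.scatter(predicted, residuals)    # left panel: residuals vs predicted
left.axhline(0, color='red')          # horizontal reference line at y = 0
right.hist(residuals, bins=10)        # right panel: distribution of errors
right.set_title(f"mean = {residuals.mean():.3f}, sd = {residuals.std():.3f}")
fig.savefig("residual_analysis.png")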
3. Optimization History Plot
optimization_history.png
Shows MSE progression during optimization:
- X-axis: Optimization iterations
- Y-axis: Mean Squared Error (MSE)
- Line: MSE values over iterations
- Stagnation periods: Highlighted if enabled
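Where an iteration history is available, the same view can be rebuilt by hand; a sketch with a hypothetical mse_history list (the variable is illustrative, not part of the documented API):

import matplotlib.pyplot as plt

# Hypothetical per-iteration MSE values recorded during optimization
mse_history = [0.21, 0.15, 0.09, 0.05, 0.03, 0.0123]

plt.plot(range(1, len(mse_history) + 1), mse_history)
plt.xlabel("Optimization iterations")
plt.ylabel("Mean Squared Error (MSE)")
plt.savefig("optimization_history.png")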
Output Files
When save=True, the function generates:
- detailed_results.csv: Per-experiment predictions and errors
- evaluation_report.txt: Summary statistics and model performance
- prediction_vs_experimental.png: Prediction quality plot
- residual_analysis.png: Residual analysis (if plot_residuals=True)
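The CSV output lends itself to further analysis; a short sketch (column names depend on the experiment file, so inspect the header first):

import pandas as pd

# Load the per-experiment predictions and errors for further analysis
df = pd.read_csv("evaluation_results/detailed_results.csv")
print(df.columns.tolist())  # inspect available columns before filtering
print(df.describe())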
Example Output Structure
evaluation_results/
├── detailed_results.csv
├── evaluation_report.txt
├── prediction_vs_experimental.png
└── residual_analysis.png
Evaluation Report
Text file with summary statistics:
Optimization Evaluation Report
==============================
Final MSE: 0.0123
Correlation: 0.89
P-value: 1.2e-15
RMSE: 0.111
MAE: 0.089
Number of experiments: 10
Number of measurements: 40
Optimization converged successfully
Iterations: 245
Function evaluations: 3675
See Also
BNMPy.parameter_optimizer - Main optimization interface
BNMPy.simulation_evaluator - Simulation evaluation
BNMPy.experiment_data - Experimental data handling