GADD Core Module

Core GADD functionality.

class gadd.gadd.TrainingConfig(pop_size: int = 16, sequence_length: int = 8, parent_fraction: float = 0.25, n_iterations: int = 20, mutation_probability: float = 0.75, optimization_level: int = 1, shots: int = 4000, num_colors: int = 3, decoupling_group: DecouplingGroup = <factory>, mode: str = 'uniform', dynamic_mutation: bool = True, mutation_decay: float = 0.1)[source]

Bases: object

Configuration parameters for GADD training.

This class encapsulates all hyperparameters and settings needed to configure the genetic algorithm optimization process for dynamical decoupling sequences. It provides sensible defaults based on the empirical findings from the GADD paper while allowing full customization of the training process.

Parameters:
  • pop_size – Size of the population (K in the paper).

  • sequence_length – Length of each DD sequence (L in the paper).

  • parent_fraction – Fraction of population to use as parents for reproduction.

  • n_iterations – Number of GA iterations to run.

  • mutation_probability – Initial probability of mutation.

  • optimization_level – Qiskit transpilation optimization level.

  • shots – Number of shots for quantum circuit execution.

  • num_colors – Number of distinct sequences per strategy (k in the paper).

  • decoupling_group – The decoupling group of pulses to use, given as a DecouplingGroup instance.

  • mode – Mode for generating initial population (uniform or random).

  • dynamic_mutation – Whether to dynamically adjust mutation probability.

  • mutation_decay – Factor by which the mutation probability is adjusted when dynamic_mutation is enabled.

pop_size: int = 16
sequence_length: int = 8
parent_fraction: float = 0.25
n_iterations: int = 20
mutation_probability: float = 0.75
optimization_level: int = 1
shots: int = 4000
num_colors: int = 3
decoupling_group: DecouplingGroup
mode: str = 'uniform'
dynamic_mutation: bool = True
mutation_decay: float = 0.1
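
A minimal construction sketch, assuming the class is imported from gadd.gadd as documented above (it may also be re-exported at the package level); only a few of the documented fields are overridden:

```python
from gadd.gadd import TrainingConfig

# Override a few defaults; unspecified fields keep the values listed above.
config = TrainingConfig(
    pop_size=32,         # population size K
    sequence_length=16,  # sequence length L
    n_iterations=40,
    shots=2000,
)
print(config)  # __str__ provides a human-readable summary
```
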
to_dict() Dict[str, Any][source]

Serializes training configuration to a dictionary for JSON export or checkpointing.

Returns:

Dictionary representation suitable for JSON serialization.

classmethod from_dict(data: Dict[str, Any]) TrainingConfig[source]

Deserializes a dictionary object to a TrainingConfig object.

Parameters:

data – The serialized input dictionary.

Returns:

The deserialized TrainingConfig instance.
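
A sketch of the intended JSON round trip through to_dict() and from_dict(), assuming to_dict() produces only JSON-serializable values as stated; the file name is illustrative:

```python
import json
from gadd.gadd import TrainingConfig

config = TrainingConfig(pop_size=32, n_iterations=40)

# Export the configuration to JSON...
with open("gadd_config.json", "w") as f:
    json.dump(config.to_dict(), f, indent=2)

# ...and reconstruct it later.
with open("gadd_config.json") as f:
    restored = TrainingConfig.from_dict(json.load(f))

assert restored.pop_size == 32
```
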

__str__() str[source]

Returns a human-readable string representation of the configuration.

__init__(pop_size: int = 16, sequence_length: int = 8, parent_fraction: float = 0.25, n_iterations: int = 20, mutation_probability: float = 0.75, optimization_level: int = 1, shots: int = 4000, num_colors: int = 3, decoupling_group: DecouplingGroup = <factory>, mode: str = 'uniform', dynamic_mutation: bool = True, mutation_decay: float = 0.1) None
class gadd.gadd.TrainingState(population: List[str] = <factory>, iteration: int = 0, best_scores: List[float] = <factory>, best_sequences: List[str] = <factory>, mutation_probability: float = 0.75, iteration_data: List[Dict[str, Any]] = <factory>, timestamp: str = <factory>)[source]

Bases: object

State of GADD training that can be serialized and resumed.

This class encapsulates the complete state of a genetic algorithm training session, enabling checkpointing and resumption of long-running optimization processes.

population: List[str]
iteration: int = 0
best_scores: List[float]
best_sequences: List[str]
mutation_probability: float = 0.75
iteration_data: List[Dict[str, Any]]
timestamp: str
to_dict() Dict[str, Any][source]

Serializes training state to a dictionary for checkpointing.

Returns:

Dictionary representation suitable for JSON serialization.

classmethod from_dict(data: Dict[str, Any]) TrainingState[source]

Deserializes a dictionary to a TrainingState object.

Parameters:

data – The serialized input dictionary.

Returns:

The deserialized TrainingState instance.
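
The same dictionary round trip applies to training state. A brief sketch, assuming a checkpoint was previously written out as JSON via to_dict(); the file name is illustrative:

```python
import json
from gadd.gadd import TrainingState

# "checkpoint.json" stands in for a previously saved state file.
with open("checkpoint.json") as f:
    state = TrainingState.from_dict(json.load(f))

print(state.iteration, len(state.population), state.mutation_probability)
```
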

__init__(population: List[str] = <factory>, iteration: int = 0, best_scores: List[float] = <factory>, best_sequences: List[str] = <factory>, mutation_probability: float = 0.75, iteration_data: List[Dict[str, Any]] = <factory>, timestamp: str = <factory>) None
class gadd.gadd.TrainingResult(best_sequence: DDStrategy, best_score: float, iteration_data: List[Dict[str, Any]], benchmark_scores: Dict[str, float], final_population: List[str], config: TrainingConfig, training_time: float, benchmark_history: Dict[str, List[float]] | None = None)[source]

Bases: object

Results from GADD training.

This class encapsulates all outputs and metrics from a completed genetic algorithm training session, including the best strategy found, performance data, and comparison against standard dynamical decoupling sequences.

best_sequence: DDStrategy
best_score: float
iteration_data: List[Dict[str, Any]]
benchmark_scores: Dict[str, float]
final_population: List[str]
config: TrainingConfig
training_time: float
benchmark_history: Dict[str, List[float]] | None = None
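
A brief sketch of inspecting a completed result; `result` is assumed to be the TrainingResult returned by GADD.train() (documented below):

```python
# `result` is the TrainingResult returned by GADD.train().
print(result.best_score)      # utility score of the best strategy found
print(result.best_sequence)   # the corresponding DDStrategy
print(result.training_time)   # total training time

for name, score in result.benchmark_scores.items():
    print(f"{name}: {score}")
```
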
__post_init__()[source]

Extract benchmark history from iteration data if not already provided.

to_dict() Dict[str, Any][source]

Serializes training result to a dictionary for export or analysis.

Returns:

Dictionary representation suitable for JSON serialization.

__str__() str[source]

Returns a human-readable string representation of the result.

__init__(best_sequence: DDStrategy, best_score: float, iteration_data: List[Dict[str, Any]], benchmark_scores: Dict[str, float], final_population: List[str], config: TrainingConfig, training_time: float, benchmark_history: Dict[str, List[float]] | None = None) None
class gadd.gadd.GADD(backend: Backend | None = None, utility_function: UtilityFunction | None = None, coloring: Dict | ColorAssignment | None = None, seed: int | SeedSequence | BitGenerator | Generator | None = None, config: TrainingConfig | None = None)[source]

Bases: object

Genetic Algorithm for Dynamical Decoupling optimization.

This class implements the core GADD algorithm for empirically optimizing dynamical decoupling sequences on quantum processors using genetic algorithms. It evolves populations of DD strategies to find the best-performing strategy as evaluated by the specified utility function.

__init__(backend: Backend | None = None, utility_function: UtilityFunction | None = None, coloring: Dict | ColorAssignment | None = None, seed: int | SeedSequence | BitGenerator | Generator | None = None, config: TrainingConfig | None = None)[source]

Initialize the GADD optimizer with backend and configuration parameters.

Parameters:
  • backend – Quantum backend for circuit execution and device properties.

  • utility_function – Function to evaluate circuit performance.

  • coloring – Qubit coloring for multi-color DD strategies.

  • seed – Random seed for reproducible results.

  • config – Training configuration parameters.
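
A construction sketch, assuming access to an IBM Quantum backend through qiskit_ibm_runtime; the backend name is illustrative, and the utility function and coloring are left at their defaults:

```python
from qiskit_ibm_runtime import QiskitRuntimeService
from gadd.gadd import GADD, TrainingConfig

service = QiskitRuntimeService()
backend = service.backend("ibm_brisbane")  # illustrative backend name

config = TrainingConfig(pop_size=16, n_iterations=20)
optimizer = GADD(backend=backend, config=config, seed=42)
```
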

property seed
property backend
property utility_function
property coloring
apply_strategy(strategy: DDStrategy, target_circuit: QuantumCircuit, backend: Backend | None = None, staggered: bool = False) QuantumCircuit[source]

Apply a DD strategy to a target circuit.

This is a convenience method that handles padding the circuit with the strategy's pulses using the appropriate qubit coloring for the backend.

Parameters:
  • strategy – DD strategy to apply.

  • target_circuit – Circuit to apply DD to.

  • backend – Backend for coloring (uses self.backend if None).

  • staggered – Whether to apply CR-aware staggering for crosstalk suppression.

Returns:

Circuit with DD sequences applied.
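
A minimal usage sketch, assuming `optimizer` is a GADD instance and `strategy` an existing DDStrategy (for example, one returned by train()):

```python
# `target_circuit` is the QuantumCircuit to protect with DD.
padded = optimizer.apply_strategy(strategy, target_circuit)

# Optionally enable CR-aware staggering for crosstalk suppression.
padded_staggered = optimizer.apply_strategy(strategy, target_circuit, staggered=True)
```
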

train(sampler: SamplerV2, training_circuit: QuantumCircuit, utility_function: Callable | UtilityFunction | None = None, mode: str | None = None, save_iterations: bool = True, benchmark_strategies: List[str] | List[DDStrategy] | None = None, evaluate_benchmarks_each_iteration: bool = False, resume_from_state: TrainingState | None = None, save_path: str | None = None) Tuple[DDStrategy, TrainingResult][source]

Train DD sequences using genetic algorithm optimization.

This method executes the core GADD algorithm, evolving a population of DD strategies over multiple generations to optimize performance on the training circuit. The process includes population initialization, fitness evaluation, selection, crossover, mutation, and optional benchmarking against standard DD sequences.

Parameters:
  • sampler – qiskit_ibm_runtime.SamplerV2 instance for circuit execution.

  • training_circuit – Quantum circuit to optimize DD sequences for.

  • utility_function – Function to evaluate circuit performance, either a callable with signature (circuit, result) -> float or a UtilityFunction instance.

  • mode – Population initialization mode (random or uniform).

  • save_iterations – Whether to save iteration data for analysis.

  • benchmark_strategies – DD strategies to compare against, either standard sequence names or DDStrategy objects.

  • evaluate_benchmarks_each_iteration – Whether to evaluate benchmarks at each iteration or only at the end.

  • resume_from_state – Previous TrainingState to resume from.

  • save_path – Directory path to save training checkpoints.

Returns:

Tuple of the best DDStrategy and TrainingResult.
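
A sketch of a typical training call, assuming `optimizer` is the GADD instance from the construction example and `training_circuit` is a QuantumCircuit prepared for the backend; the benchmark name and checkpoint directory are illustrative:

```python
from qiskit_ibm_runtime import SamplerV2

sampler = SamplerV2(mode=backend)

best_strategy, result = optimizer.train(
    sampler,
    training_circuit,
    benchmark_strategies=["xy4"],    # illustrative; any supported standard sequence name
    save_path="./gadd_checkpoints",  # directory for training checkpoints
)

print(result.best_score)
print(result.benchmark_scores)
```
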

load_training_state(checkpoint_path: str) TrainingState[source]

Load a previously saved training state from a checkpoint file.
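
A sketch of resuming an interrupted run, assuming a checkpoint was written by an earlier train() call with save_path set; the file name is illustrative:

```python
state = optimizer.load_training_state("./gadd_checkpoints/checkpoint.json")

best_strategy, result = optimizer.train(
    sampler,
    training_circuit,
    resume_from_state=state,
    save_path="./gadd_checkpoints",
)
```
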

plot_training_progress(results: TrainingResult, save_path: str | None = None)[source]

Plot training progression and benchmark comparison data.

evaluate(strategy: DDStrategy, target_circuit: QuantumCircuit, sampler: SamplerV2, utility_function: Callable[[QuantumCircuit, Any], float] | None = None, staggered: bool = False) Dict[str, Any][source]

Run a specific DD strategy on a target circuit and evaluate its utility.

Parameters:
  • strategy – DD strategy to apply.

  • target_circuit – Target quantum circuit.

  • sampler – Qiskit sampler for execution.

  • utility_function – Optional utility function to evaluate performance.

  • staggered – Whether to apply CR-aware staggering.

Returns:

Dictionary with execution results.
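
A minimal evaluation sketch, assuming `best_strategy` and `target_circuit` come from the earlier examples; the contents of the returned dictionary are not enumerated in this reference, so only a generic print is shown:

```python
results = optimizer.evaluate(
    best_strategy,
    target_circuit,
    sampler,
    staggered=True,  # optionally apply CR-aware staggering
)
print(results)  # dictionary with execution results
```
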