## Supported Samplers
Tunny supports a range of sampling techniques.
The table below summarizes the optimization capabilities of each sampler. Tunny's UI automatically displays only the methods available for the problem at hand.
| Name | Single-Objective | Multi-Objective | Constraints | Human-in-the-loop |
|---|---|---|---|---|
| AUTO Sampler | ✓ | ✓ | ✓ | |
| TPE | ✓ | ✓ | ✓ | ✓ |
| cTPE | ✓ | | ✓ | |
| GP-Optuna | ✓ | ✓ | ✓ | |
| GP-BoTorch | ✓ | ✓ | ✓ | |
| GP-Preferential | | | | ✓ |
| HEBO | ✓ | | | |
| NSGA-II | ✓ | ✓ | ✓ | |
| NSGA-III | ✓ | ✓ | ✓ | |
| MOEA/D | | ✓ | | |
| DE | ✓ | | | |
| CMA-ES | ✓ | | | |
| MO-CMA-ES | | ✓ | | |
| INGO | ✓ | | | |
| Random | ✓ | ✓ | | |
| QMC | ✓ | ✓ | | |
| BruteForce | ✓ | ✓ | | |
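
Tunny runs Optuna under the hood, so most of the names above correspond to Optuna sampler classes. The sketch below is an illustrative assumption about that mapping, not Tunny's actual internal wiring; cTPE, HEBO, MOEA/D, DE, MO-CMA-ES, and INGO are distributed through OptunaHub and are omitted here.

```python
import optuna

# Hedged sketch: map table names to plain Optuna sampler classes.
# Tunny's real mapping lives inside the plugin and may differ.
# GP-BoTorch is available separately as optuna.integration.BoTorchSampler.
def make_sampler(name: str, seed: int = 42) -> optuna.samplers.BaseSampler:
    factories = {
        "TPE": lambda: optuna.samplers.TPESampler(seed=seed),
        "GP-Optuna": lambda: optuna.samplers.GPSampler(seed=seed),
        "NSGA-II": lambda: optuna.samplers.NSGAIISampler(seed=seed),
        "NSGA-III": lambda: optuna.samplers.NSGAIIISampler(seed=seed),
        "CMA-ES": lambda: optuna.samplers.CmaEsSampler(seed=seed),
        "Random": lambda: optuna.samplers.RandomSampler(seed=seed),
        "QMC": lambda: optuna.samplers.QMCSampler(seed=seed),
        "BruteForce": lambda: optuna.samplers.BruteForceSampler(),
    }
    return factories[name]()

# Usage: a single-objective study with the TPE sampler.
study = optuna.create_study(direction="minimize", sampler=make_sampler("TPE"))
study.optimize(lambda t: (t.suggest_float("x", -10, 10) - 2) ** 2, n_trials=30)
print(study.best_params)
```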
The characteristics of each sampling technique are as follows:
| Name | Type | Note |
|---|---|---|
| AUTO Sampler | Auto | Automatically selects the optimal sampler based on problem characteristics (objectives, constraints, variable types). Recommended for beginners or when unsure which sampler to use. |
| TPE | Bayesian Optimization | Tree-structured Parzen Estimator. One of the most versatile methods alongside NSGA-II. Builds probabilistic models to efficiently explore search spaces. Excellent for expensive function evaluations and high-dimensional problems with mixed parameter types. |
| cTPE | Bayesian Optimization | Constrained TPE with enhanced constraint handling. Models constraint violations explicitly alongside objectives. Use when standard TPE struggles with constraint satisfaction or when strict constraint requirements exist. |
| GP-Optuna | Bayesian Optimization | Gaussian Process sampler using Optuna's native implementation. Faster than GP-BoTorch while maintaining good optimization quality. Provides uncertainty estimates and works well for smooth, continuous objective functions with moderate dimensionality. |
| GP-BoTorch | Bayesian Optimization | Gaussian Process using Meta's BoTorch library. Highly flexible with advanced acquisition functions and state-of-the-art algorithms. Slower but offers superior optimization quality. Best when optimization quality is more important than speed. |
| GP-Preferential | Bayesian Optimization | Designed exclusively for Human-in-the-loop optimization. Learns from user preferences and pairwise comparisons instead of numerical objectives. Ideal for design optimization based on aesthetic preferences or subjective quality assessment. |
| HEBO | Bayesian Optimization | Heteroscedastic Evolutionary Bayesian Optimization. State-of-the-art algorithm that excels at highly nonlinear and multimodal problems. Combines evolutionary strategies with Bayesian optimization for faster convergence on challenging landscapes. |
| NSGA-II | Evolutionary Algorithm | Non-dominated Sorting Genetic Algorithm II. Widely used versatile method (also used in Wallacei). Uses non-dominated sorting and crowding distance to maintain diversity. Recommended for 2-3 objective problems requiring diverse Pareto fronts. |
| NSGA-III | Evolutionary Algorithm | Enhanced NSGA-II for many-objective optimization (3+ objectives). Uses reference points to maintain diversity in high-dimensional objective spaces. Better convergence and diversity for 4 or more objectives compared to NSGA-II. |
| MOEA/D | Evolutionary Algorithm | Multi-Objective EA based on Decomposition. Decomposes multi-objective problems into single-objective subproblems using scalarization. Efficient for three or more objectives, with good computational performance and well-distributed Pareto fronts. |
| DE | Evolutionary Algorithm | Differential Evolution. Robust global optimization using vector differences. Effective for continuous, non-differentiable functions with many local optima. Simple yet powerful for rugged fitness landscapes. |
| CMA-ES | Evolutionary Strategy | Covariance Matrix Adaptation Evolution Strategy. One of the most powerful single-objective continuous optimizers. Adapts covariance matrix to learn problem structure. Very fast convergence on smooth problems with self-adaptive search distribution. |
| MO-CMA-ES | Evolutionary Strategy | Multi-objective extension of CMA-ES. Maintains multiple search distributions for different Pareto front regions. Fast convergence for 2-3 objectives on smooth landscapes. Efficient use of function evaluations. |
| INGO | Evolutionary Strategy | Implicit Natural Gradient Optimizer. Uses information geometry and natural gradient methods for efficient single-objective optimization. Fast convergence on well-structured problems with limited function evaluations. |
| Random | Sampling | Pure random sampling with uniform distribution. No optimization strategy. Useful for baseline performance, initial exploration, generating diverse populations, and testing purposes. Provides unbiased exploration of search space. |
| QMC | Sampling | Quasi-Monte Carlo using low-discrepancy sequences (e.g., Sobol). Better space coverage than random sampling with more even distribution. Reduced clustering, more efficient exploration. Effective for design of experiments and sensitivity analysis. |
| BruteForce | Sampling | Exhaustively evaluates all discrete variable combinations. Guarantees finding global optimum for discrete problems but only feasible for small search spaces. Computational cost grows exponentially. Use with caution. |
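
To make the capability columns concrete, the hedged sketch below runs a constrained two-objective study with NSGA-II in plain Optuna; the objective, bounds, and constraint are invented for illustration, and Tunny exposes the same sampler through its UI.

```python
import optuna

# Hedged sketch: a toy constrained two-objective problem, illustrating the
# "Multi-Objective" and "Constraints" columns for NSGA-II in plain Optuna.
def objective(trial: optuna.Trial) -> tuple[float, float]:
    x = trial.suggest_float("x", 0.0, 5.0)
    y = trial.suggest_float("y", 0.0, 5.0)
    # Constraint g(x, y) = x + y - 6 <= 0; values <= 0 count as feasible.
    trial.set_user_attr("constraint", (x + y - 6.0,))
    return x**2 + y, (x - 2.0) ** 2 + (y - 3.0) ** 2

def constraints(trial: optuna.trial.FrozenTrial) -> tuple[float, ...]:
    return trial.user_attrs["constraint"]

sampler = optuna.samplers.NSGAIISampler(constraints_func=constraints, seed=1)
study = optuna.create_study(directions=["minimize", "minimize"], sampler=sampler)
study.optimize(objective, n_trials=100)
print(f"{len(study.best_trials)} Pareto-optimal trials found")
```

Swapping `NSGAIISampler` for `NSGAIIISampler` (or, per the first table, `TPESampler` with a `constraints_func`) is the only change needed to try another constraint-capable sampler.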