Nevergrad - A gradient-free optimization platform
This documentation is a work in progress; feel free to help us update, improve, and restructure it!
Quick start
nevergrad is a Python 3.6+ library. It can be installed with:
pip install nevergrad
You can find other installation options (including for Windows users) in the Getting started section.
Feel free to join the Nevergrad users Facebook group.
Minimizing a function using an optimizer (here NGOpt, our adaptive optimization algorithm) can easily be run with:
import nevergrad as ng
def square(x):
    return sum((x - 0.5) ** 2)
# optimization on x as an array of shape (2,)
optimizer = ng.optimizers.NGOpt(parametrization=2, budget=100)
recommendation = optimizer.minimize(square)  # recommendation holds the best parameters found
print(recommendation.value)
# >>> [0.49999998 0.50000004]
Figure: convergence of a population of points to the minimum with two-points DE.
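For finer-grained control, the same optimization can be driven step by step through the ask and tell interface (see the How to perform optimization section). A minimal sketch, reusing the square function from above:

import nevergrad as ng

def square(x):
    return sum((x - 0.5) ** 2)

optimizer = ng.optimizers.NGOpt(parametrization=2, budget=100)
for _ in range(optimizer.budget):
    candidate = optimizer.ask()        # get a candidate to evaluate
    loss = square(candidate.value)     # evaluate it however you like
    optimizer.tell(candidate, loss)    # report the loss back to the optimizer
print(optimizer.provide_recommendation().value)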
nevergrad also supports bounded continuous variables, discrete variables, and mixtures of the two.
To use them, specify the input space:
import nevergrad as ng
def fake_training(learning_rate: float, batch_size: int, architecture: str) -> float:
    # optimal for learning_rate=0.2, batch_size=4, architecture="conv"
    return (learning_rate - 0.2) ** 2 + (batch_size - 4) ** 2 + (0 if architecture == "conv" else 10)
# Instrumentation class is used for functions with multiple inputs
# (positional and/or keywords)
parametrization = ng.p.Instrumentation(
    # a log-distributed scalar between 0.001 and 1.0
    learning_rate=ng.p.Log(lower=0.001, upper=1.0),
    # an integer from 1 to 12
    batch_size=ng.p.Scalar(lower=1, upper=12).set_integer_casting(),
    # either "conv" or "fc"
    architecture=ng.p.Choice(["conv", "fc"]),
)
optimizer = ng.optimizers.NGOpt(parametrization=parametrization, budget=100)
recommendation = optimizer.minimize(fake_training)
print(recommendation.kwargs) # shows the recommended keyword arguments of the function
# >>> {'learning_rate': 0.1998, 'batch_size': 4, 'architecture': 'conv'}
Learn more about parametrization in the Parametrization section!
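Evaluations can also run in parallel: pass num_workers to the optimizer and an executor to minimize. A minimal sketch, reusing fake_training and parametrization from the example above:

from concurrent import futures
import nevergrad as ng

# num_workers candidates are evaluated concurrently by the executor
optimizer = ng.optimizers.NGOpt(parametrization=parametrization, budget=100, num_workers=4)
with futures.ThreadPoolExecutor(max_workers=optimizer.num_workers) as executor:
    recommendation = optimizer.minimize(fake_training, executor=executor, batch_mode=False)
print(recommendation.kwargs)

See the Using several workers section for details (e.g. a ProcessPoolExecutor for CPU-bound functions).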
CONTENTS
- Getting started
- Installation and configuration on Windows
- How to perform optimization
- Basic example
- Using several workers
- Ask and tell interface
- Choosing an optimizer
- Telling non-asked points, or suggesting points
- Adding callbacks
- Optimization with constraints
- Optimizing machine learning hyperparameters
- Example with permutation
- Example of chaining, or inoculation, or initialization of an evolutionary algorithm
- Multiobjective minimization with Nevergrad
- Reproducibility
- Parametrizing your optimization
- Strict constraints in continuous optimization NEW
- Running algorithm benchmarks
- Contributing to Nevergrad
- Open Optimization Competition 2020
API REFERENCE
- Optimization API (ng.optimizers)
Optimizer, Optimizer.ask(), Optimizer.dimension, Optimizer.dump(), Optimizer.enable_pickling(), Optimizer.load(), Optimizer.minimize(), Optimizer.num_ask, Optimizer.num_objectives, Optimizer.num_tell, Optimizer.num_tell_not_asked, Optimizer.pareto_front(), Optimizer.provide_recommendation(), Optimizer.recommend(), Optimizer.register_callback(), Optimizer.remove_all_callbacks(), Optimizer.suggest(), Optimizer.tell()
- Configurable optimizers
- Optimizers
(alphabetical index of all optimizer classes, from AX through Wiz and cGA, plus the helpers rescaled() and smooth_copy())
- Parametrization API (ng.p)
- Callbacks API (ng.callbacks)
EXAMPLES
- Nevergrad for machine learning
- Nevergrad for R
- Benchmarks
- Noisy optimization
- One-shot optimization
- Comparison-based methods for ill-conditioned problems
- Ill-conditioned function
- Discrete
- List of benchmarks
(alphabetical index of benchmark experiment functions, from basic() through zp_pbbob())
- Working with Pyomo model
- Guiding image generation with Nevergrad
- Diversity
- Lognormal mutations in Nevergrad
- Retrofitting with Nevergrad NEW
STATISTICS
- Benchmarks in Nevergrad
- Comparison on all benchmarks, for the simple regret criterion: the wizard performs best
- Comparing all methods, with a robustness criterion: the wizard still performs best
- Comparing from the point of view of the frequency at which a method is in the three best: the wizard NgIohTuned still performs best
- Benchmarks in Nevergrad (excluding wizards)
- Comparison on all benchmarks, for the simple regret criterion
- Comparing all methods, with a robustness criterion
- Comparing from the point of view of the frequency at which a method is in the three best: methods from discrete optimization perform best
Citing
@misc{nevergrad,
author = {J. Rapin and O. Teytaud},
title = {{Nevergrad - A gradient-free optimization platform}},
year = {2018},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://GitHub.com/FacebookResearch/Nevergrad}},
}
License
nevergrad is released under the MIT license. See the LICENSE file for additional details, as well as our Terms of Use and Privacy Policy.
Copyright © Meta Platforms, Inc.