Core routines
Construct model
Main.NLPSaUT.build_model! — Method

build_model!(model::JuMP.Model, f_fitness::Function, nx::Int, nh::Int, ng::Int, lx::Vector, ux::Vector, x0::Vector=nothing)
Extend model for NLP problem via memoized fitness function.
Arguments:
- model::JuMP.Model: model to append objective and constraints
- f_fitness::Function: fitness function, returning [f, h, g]
- nx::Int: number of decision variables
- nh::Int: number of equality constraints
- ng::Int: number of inequality constraints
- lx::Vector: lower bounds on decision variables
- ux::Vector: upper bounds on decision variables
- x0::Vector: initial guess
- auto_diff::Bool: whether to use automatic differentiation
- order::Int: order of FiniteDifferences, minimum is 2
- fd_type::String: finite-difference method, "forward", "backward", or "central"
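For concreteness, a minimal usage sketch (the stacking convention [f; h; g] is from the docstring above; the sign conventions h = 0 and g ≤ 0, the choice of Ipopt, and the problem data are assumptions for illustration):

```julia
using JuMP, Ipopt
# assumes NLPSaUT (providing build_model!) is loaded

# Fitness function returning the stacked vector [f; h; g]:
# objective f, equality constraints h (= 0 assumed), inequality constraints g (<= 0 assumed)
function f_fitness(x::T...) where {T<:Real}
    f = x[1]^2 + x[2]^2       # objective
    h = [x[1] + x[2] - 1.0]   # one equality constraint
    g = [0.2 - x[1]]          # one inequality constraint
    return [f; h; g]
end

nx, nh, ng = 2, 1, 1
lx = -5.0 * ones(nx)
ux = 5.0 * ones(nx)
x0 = [0.5, 0.5]

model = Model(Ipopt.Optimizer)
build_model!(model, f_fitness, nx, nh, ng, lx, ux, x0)
optimize!(model)
```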
Main.NLPSaUT.build_model — Method

build_model(optimizer, f_fitness::Function, nx::Int, nh::Int, ng::Int, lx::Vector, ux::Vector, x0::Vector=nothing, fd_type::Function=nothing, order::Int=2)
Build model for NLP problem with memoized fitness function.
Arguments:
- optimizer: optimizer to use with the model
- f_fitness::Function: fitness function, returning [f, h, g]
- nx::Int: number of decision variables
- nh::Int: number of equality constraints
- ng::Int: number of inequality constraints
- lx::Vector: lower bounds on decision variables
- ux::Vector: upper bounds on decision variables
- x0::Vector: initial guess
- auto_diff::Bool: whether to use automatic differentiation
- order::Int: order of FiniteDifferences, minimum is 2
- fd_type::String: finite-difference method, "forward", "backward", or "central"
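Usage differs from build_model! only in that the optimizer is passed in and the constructed model is returned. A sketch reusing the assumed f_fitness and problem data from above:

```julia
using JuMP, Ipopt

# build_model attaches the optimizer itself and returns the JuMP model
model = build_model(Ipopt.Optimizer, f_fitness, nx, nh, ng, lx, ux, x0)
optimize!(model)
@show objective_value(model)
```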
Memoization
Main.NLPSaUT.memoize_fitness — Method

memoize_fitness(f_fitness::Function, n_outputs::Int)

Memoize the fitness function. Because each scalar output foo_i is auto-differentiated with ForwardDiff, the cache needs to work both when x is a Float64 and when x is a ForwardDiff.Dual.
See: https://jump.dev/JuMP.jl/stable/tutorials/nonlinear/tips_and_tricks/#Memoization
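The pattern from the linked JuMP tutorial, which this function presumably follows, keeps one cache per element type so that primal (Float64) and dual (ForwardDiff.Dual) evaluations do not evict each other. A sketch adapted from that tutorial (details may differ from the actual implementation):

```julia
function memoize_fitness(f_fitness::Function, n_outputs::Int)
    last_x, last_f = nothing, nothing      # cache for Float64 inputs
    last_dx, last_dfdx = nothing, nothing  # separate cache for Dual inputs
    function fitness_i(i::Int, x::T...) where {T<:Real}
        if T == Float64
            if x !== last_x
                last_x, last_f = x, f_fitness(x...)
            end
            return last_f[i]::T
        else
            if x !== last_dx
                last_dx, last_dfdx = x, f_fitness(x...)
            end
            return last_dfdx[i]::T
        end
    end
    # one scalar-valued closure per output, as JuMP expects
    return [(x...) -> fitness_i(i, x...) for i in 1:n_outputs]
end
```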
Main.NLPSaUT.memoize_fitness_gradient — Function

memoize_fitness_gradient(f_fitness::Function, nfitness::Int, fd_type::Function, order::Int=2)
Create a memoized gradient computation with the finite-difference method specified by fd_type:
- fd_type = "forward": use FiniteDifferences.forward_fdm()
- fd_type = "central": use FiniteDifferences.central_fdm()
- fd_type = "backward": use FiniteDifferences.backward_fdm()
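A sketch of the dispatch this mapping implies, using FiniteDifferences.jl (the helper get_fdm is hypothetical; forward_fdm, backward_fdm, central_fdm, and grad are the library's actual API):

```julia
using FiniteDifferences

# Hypothetical helper: map the fd_type string to a FiniteDifferences method.
# `order` is the number of grid points (minimum 2); the second argument (1)
# requests the first derivative.
function get_fdm(fd_type::String, order::Int=2)
    if fd_type == "forward"
        return FiniteDifferences.forward_fdm(order, 1)
    elseif fd_type == "backward"
        return FiniteDifferences.backward_fdm(order, 1)
    elseif fd_type == "central"
        return FiniteDifferences.central_fdm(order, 1)
    else
        error("fd_type must be \"forward\", \"backward\", or \"central\"")
    end
end

fdm = get_fdm("central")
f(x) = sum(abs2, x)
∇f = first(FiniteDifferences.grad(fdm, f, [1.0, 2.0, 3.0]))  # ≈ [2.0, 4.0, 6.0]
```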