Useful Optimizer

Useful Optimizer is a dedicated set of optimization algorithms for numeric problems. It's designed to provide a comprehensive collection of optimization techniques that can be easily used and integrated into any project.

Version

The current version of Useful Optimizer is 0.1.2.

Features

  • A wide range of optimization algorithms (54 implementations).
  • Organized into logical categories for easy discovery.
  • Easy to use and integrate.
  • Suitable for various numeric problems.
  • Fun to play and experiment with.

Installation

To install Useful Optimizer, you can use pip:

pip install git+https://github.com/Anselmoo/useful-optimizer

Or using uv (recommended):

uv add git+https://github.com/Anselmoo/useful-optimizer

Usage

Here's a basic example of how to use Useful Optimizer:

from opt.metaheuristic import CrossEntropyMethod
from opt.benchmark.functions import shifted_ackley

optimizer = CrossEntropyMethod(
    func=shifted_ackley,
    dim=2,
    lower_bound=-12.768,
    upper_bound=12.768,
    population_size=100,
    max_iter=1000,
)
best_solution, best_fitness = optimizer.search()
print(f"Best solution: {best_solution}")
print(f"Best fitness: {best_fitness}")

You can also use the gradient-based optimizers:

from opt.gradient_based import SGD, AdamW
from opt.classical import BFGS
from opt.benchmark.functions import shifted_ackley

# Gradient-based optimization
sgd = SGD(func=shifted_ackley, lower_bound=-12.768, upper_bound=12.768, dim=2, learning_rate=0.01)
best_solution, best_fitness = sgd.search()

# Adam variant with weight decay
adamw = AdamW(func=shifted_ackley, lower_bound=-12.768, upper_bound=12.768, dim=2, weight_decay=0.01)
best_solution, best_fitness = adamw.search()

# Quasi-Newton method
bfgs = BFGS(func=shifted_ackley, lower_bound=-12.768, upper_bound=12.768, dim=2, num_restarts=10)
best_solution, best_fitness = bfgs.search()

Import Styles

All optimizers can be imported in two ways:

# Categorical imports (recommended for discoverability)
from opt.swarm_intelligence import ParticleSwarm
from opt.gradient_based import AdamW
from opt.classical import BFGS

# Direct imports from root (backward compatible)
from opt import ParticleSwarm, AdamW, BFGS

Quick Demo

All optimizers include a standardized demo that can be run directly or customized:

from opt.demo import run_demo
from opt.swarm_intelligence import ParticleSwarm

# Run with default settings
run_demo(ParticleSwarm)

# Or customize parameters
run_demo(
    ParticleSwarm,
    max_iter=200,
    population_size=50,
    c1=2.0,
    c2=2.0
)

You can also run demos directly from the command line:

python -m opt.swarm_intelligence.particle_swarm
python -m opt.gradient_based.adamw
python -m opt.classical.simulated_annealing

Project Structure

Optimizers are organized into categorical subfolders:

opt/
├── gradient_based/      # 11 gradient-based optimizers
├── swarm_intelligence/  # 12 swarm-based optimizers
├── evolutionary/        # 6 evolutionary algorithms
├── classical/           # 9 classical methods
├── metaheuristic/       # 12 metaheuristic algorithms
├── constrained/         # 2 constrained optimization methods
├── probabilistic/       # 2 probabilistic optimizers
└── benchmark/           # Benchmark functions
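
Each category is a regular Python package, so you can list the optimizers a category exposes with standard introspection. A minimal sketch (assuming each package re-exports its optimizer classes, as the import examples above suggest):

from opt import gradient_based

# Print the public names exposed by the gradient-based package.
# The exact contents depend on what the package re-exports.
print([name for name in dir(gradient_based) if not name.startswith("_")])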

Implemented Optimizers

The current version of Useful Optimizer includes 54 optimization algorithms, each implemented as a separate module. Each optimizer is linked to its corresponding source code for easy reference and study.

🧠 Gradient-Based Optimizers

These optimizers use gradient information to guide the search process and are commonly used in machine learning and deep learning applications.

  • Adadelta - An adaptive learning rate method that uses only first-order information
  • Adagrad - Adapts the learning rate to the parameters, performing smaller updates for frequently occurring features
  • Adaptive Moment Estimation (Adam) - Combines advantages of AdaGrad and RMSProp with bias correction
  • AdaMax - Adam variant using infinity norm for second moment estimation
  • AdamW - Adam with decoupled weight decay for better regularization
  • AMSGrad - Adam variant with non-decreasing second moment estimates
  • Nadam - Nesterov-accelerated Adam combining Adam with Nesterov momentum
  • Nesterov Accelerated Gradient - Accelerated gradient method with lookahead momentum
  • RMSprop - Adaptive learning rate using moving average of squared gradients
  • SGD with Momentum - SGD enhanced with momentum for faster convergence
  • Stochastic Gradient Descent - Fundamental gradient-based optimization algorithm
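
As a quick sketch of how the gradient-based optimizers compare on the same problem, the loop below reuses only the constructor arguments already shown in the Usage section (learning_rate for SGD, weight_decay for AdamW); all other hyperparameters keep their defaults:

from opt.benchmark.functions import shifted_ackley
from opt.gradient_based import SGD, AdamW

# Run two gradient-based optimizers on the shifted Ackley benchmark.
configs = [(SGD, {"learning_rate": 0.01}), (AdamW, {"weight_decay": 0.01})]
for optimizer_cls, kwargs in configs:
    optimizer = optimizer_cls(
        func=shifted_ackley,
        dim=2,
        lower_bound=-12.768,
        upper_bound=12.768,
        **kwargs,
    )
    best_solution, best_fitness = optimizer.search()
    print(f"{optimizer_cls.__name__}: best fitness = {best_fitness}")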

🦋 Swarm Intelligence Algorithms

These algorithms are inspired by the collective behavior of decentralized, self-organized systems.
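
ParticleSwarm from the Quick Demo section is a typical example; a minimal sketch, assuming c1, c2, and population_size are accepted as constructor keywords (as the demo parameters above suggest):

from opt.benchmark.functions import shifted_ackley
from opt.swarm_intelligence import ParticleSwarm

# Particles explore the search space and share their best-known positions.
optimizer = ParticleSwarm(
    func=shifted_ackley,
    dim=2,
    lower_bound=-12.768,
    upper_bound=12.768,
    population_size=50,
    max_iter=200,
    c1=2.0,
    c2=2.0,
)
best_solution, best_fitness = optimizer.search()
print(f"Best fitness: {best_fitness}")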

🧬 Evolutionary Algorithms

These algorithms use principles of evolution and population dynamics to find optimal solutions.

🎯 Classical Optimization Methods

Traditional optimization methods, including local search techniques and classical mathematical approaches.

  • BFGS - Quasi-Newton method approximating the inverse Hessian matrix
  • Conjugate Gradient - Efficient iterative method for solving systems of linear equations
  • Hill Climbing - Local search algorithm that continuously moves toward increasing value
  • L-BFGS - Limited-memory version of BFGS for large-scale optimization
  • Nelder-Mead - Derivative-free simplex method for optimization
  • Powell's Method - Derivative-free optimization using conjugate directions
  • Simulated Annealing - Probabilistic technique mimicking the annealing process in metallurgy
  • Tabu Search - Metaheuristic using memory structures to avoid cycles
  • Trust Region - Robust optimization method using trusted model regions
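
The classical simulated annealing module can also be exercised through the standardized demo runner, either from the command line (as shown above) or programmatically. A hedged sketch; the class name SimulatedAnnealing is assumed from the module path opt.classical.simulated_annealing:

from opt.demo import run_demo
from opt.classical import SimulatedAnnealing  # class name assumed from the module path

# Run the standardized demo with a longer iteration budget.
run_demo(SimulatedAnnealing, max_iter=500)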

🔬 Metaheuristic Algorithms

High-level problem-independent algorithmic frameworks for exploring search spaces.
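
CrossEntropyMethod from the Usage example belongs to this category; it can also be driven through the standardized demo runner, for example:

from opt.demo import run_demo
from opt.metaheuristic import CrossEntropyMethod

# Reuse the standardized demo with a custom population and iteration budget.
run_demo(CrossEntropyMethod, population_size=100, max_iter=1000)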

🔧 Constrained & Probabilistic Optimization

Specialized algorithms for constrained problems and probabilistic approaches.

Note

Not all of these algorithms are suitable for every type of optimization problem: some are better suited to continuous problems, some to discrete problems, and others to specific tasks such as quadratic programming or linear discriminant analysis.

Testing and Documentation

Useful Optimizer includes comprehensive doctests to ensure all examples in the documentation are correct and up-to-date. All optimizer classes and benchmark functions include working examples that can be verified automatically.

Running Tests

To run all tests including doctests:

# Using pytest with doctests
uv run pytest --doctest-modules --doctest-glob="*.md" -v

# Run only doctests for a specific module
uv run pytest opt/benchmark/functions.py --doctest-modules -v

# Run only doctests for optimizers
uv run pytest opt/swarm_intelligence/particle_swarm.py --doctest-modules -v

Example Doctest Usage

All benchmark functions include working examples in their docstrings:

import numpy as np
from opt.benchmark.functions import sphere

# At optimum
result = sphere(np.array([0.0, 0.0]))
print(float(result))  # 0.0

# Away from optimum
result = sphere(np.array([1.0, 1.0]))
print(float(result))  # 2.0

All optimizer classes include usage examples in their docstrings:

from opt.swarm_intelligence.particle_swarm import ParticleSwarm
from opt.benchmark.functions import sphere

optimizer = ParticleSwarm(
    func=sphere, dim=2, lower_bound=-5, upper_bound=5,
    max_iter=10, seed=42
)
solution, fitness = optimizer.search()
print(f"Fitness: {fitness}")  # Should be < 1.0

These examples serve as both documentation and automated tests, ensuring the code examples in docstrings always work correctly.
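
For reference, such a docstring doctest might look like the sketch below. This is illustrative only; my_benchmark is a made-up function, and the actual docstrings in the package may differ:

def my_benchmark(x):
    """Toy benchmark used to illustrate the doctest format.

    Examples
    --------
    >>> import numpy as np
    >>> float(my_benchmark(np.array([0.0, 0.0])))
    0.0
    """
    # Sum of squares: zero at the origin, positive elsewhere.
    return float((x**2).sum())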

Contributing

Contributions to Useful Optimizer are welcome! Please read the contributing guidelines before getting started.


Warning

This project was generated with GitHub Copilot and may not be completely verified. Please use with caution and feel free to report any issues you encounter. Thank you!

Warning

Some parts still contain the legacy np.random.rand call. See also: https://docs.astral.sh/ruff/rules/numpy-legacy-random/
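
Concretely, this means a few modules still draw samples from NumPy's legacy global RNG rather than the Generator API recommended by the linked Ruff rule. The modern replacement looks roughly like this:

import numpy as np

# Legacy call still present in some modules:
legacy_sample = np.random.rand(3)

# Modern, reproducible Generator-based equivalent:
rng = np.random.default_rng(seed=42)
modern_sample = rng.random(3)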
