
Differential Evolution

smac.acquisition.maximizer.differential_evolution #

DifferentialEvolution #

DifferentialEvolution(
    configspace: ConfigurationSpace,
    acquisition_function: AbstractAcquisitionFunction
    | None = None,
    max_iter: int = 1000,
    challengers: int = 50000,
    strategy: str = "best1bin",
    polish: bool = True,
    mutation: tuple[float, float] = (0.5, 1.0),
    recombination: float = 0.7,
    seed: int = 0,
)

Bases: AbstractAcquisitionMaximizer

Get candidate solutions via scipy's `DifferentialEvolutionSolver`.

According to the scipy 1.9.2 documentation:

"Finds the global minimum of a multivariate function. Differential Evolution is stochastic in nature (does not use gradient methods) to find the minimum, and can search large areas of candidate space, but often requires larger numbers of function evaluations than conventional gradient-based techniques. The algorithm is due to Storn and Price [1]."

[1] Storn, R and Price, K, Differential Evolution - a Simple and Efficient Heuristic for Global Optimization over Continuous Spaces, Journal of Global Optimization, 1997, 11, 341 - 359.
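As a rough illustration of what the wrapped solver does, the same knobs (`strategy`, `mutation`, `recombination`, `polish`) appear on scipy's public `differential_evolution` function. A minimal sketch minimizing a sphere function (the objective and bounds here are illustrative, not part of SMAC):

```python
from scipy.optimize import differential_evolution

def sphere(x):
    # Simple convex test objective with its global minimum at the origin.
    return (x ** 2).sum()

result = differential_evolution(
    sphere,
    bounds=[(-5.0, 5.0)] * 3,   # one (low, high) pair per dimension
    strategy="best1bin",        # same default strategy as above
    maxiter=1000,
    mutation=(0.5, 1.0),        # dithered mutation constant
    recombination=0.7,          # crossover probability
    polish=True,                # refine the best member with L-BFGS-B
    seed=0,
)
# result.x is close to the origin, result.fun close to 0.
```

With `polish=True`, the final L-BFGS-B step typically drives the residual error far below what the population-based search alone achieves.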

Parameters#

configspace : ConfigurationSpace
acquisition_function : AbstractAcquisitionFunction
challengers : int, defaults to 50000
    Number of challengers.
max_iter : int, defaults to 1000
    Maximum number of iterations that the DE will perform.
strategy : str, defaults to "best1bin"
    The strategy to use for the DE.
polish : bool, defaults to True
    Whether to polish the final solution with L-BFGS-B.
mutation : tuple[float, float], defaults to (0.5, 1.0)
    The mutation constant.
recombination : float, defaults to 0.7
    The recombination constant.
seed : int, defaults to 0

Source code in smac/acquisition/maximizer/differential_evolution.py
def __init__(
    self,
    configspace: ConfigurationSpace,
    acquisition_function: AbstractAcquisitionFunction | None = None,
    max_iter: int = 1000,
    challengers: int = 50000,
    strategy: str = "best1bin",
    polish: bool = True,
    mutation: tuple[float, float] = (0.5, 1.0),
    recombination: float = 0.7,
    seed: int = 0,
):
    super().__init__(configspace, acquisition_function, challengers, seed)
    # raise NotImplementedError("DifferentialEvolution is not yet implemented.")
    self.max_iter = max_iter
    self.strategy = strategy
    self.polish = polish
    self.mutation = mutation
    self.recombination = recombination

acquisition_function property writable #

acquisition_function: AbstractAcquisitionFunction | None

The acquisition function used for maximization.

meta property #

meta: dict[str, Any]

Returns the meta-data of the created object.

maximize #

maximize(
    previous_configs: list[Configuration],
    n_points: int | None = None,
    random_design: AbstractRandomDesign | None = None,
) -> Iterator[Configuration]

Maximize the acquisition function using `_maximize`, implemented by a subclass.

Parameters#

previous_configs : list[Configuration]
    Previously evaluated configurations.
n_points : int, defaults to None
    Number of points to be sampled and number of configurations to be returned. If `n_points` is not specified, `self._challengers` is used. Semantics depend on the concrete implementation.
random_design : AbstractRandomDesign, defaults to None
    Part of the returned ChallengerList such that we can interleave random configurations by a scheme defined by the random design. The method `random_design.next_iteration()` is called at the end of this function.

Returns#

challengers : Iterator[Configuration]
    An iterator consisting of configurations.

Source code in smac/acquisition/maximizer/abstract_acquisition_maximizer.py
def maximize(
    self,
    previous_configs: list[Configuration],
    n_points: int | None = None,
    random_design: AbstractRandomDesign | None = None,
) -> Iterator[Configuration]:
    """Maximize acquisition function using `_maximize`, implemented by a subclass.

    Parameters
    ----------
    previous_configs: list[Configuration]
        Previous evaluated configurations.
    n_points: int, defaults to None
        Number of points to be sampled & number of configurations to be returned. If `n_points` is not specified,
        `self._challengers` is used. Semantics depend on concrete implementation.
    random_design: AbstractRandomDesign, defaults to None
        Part of the returned ChallengerList such that we can interleave random configurations
        by a scheme defined by the random design. The method `random_design.next_iteration()`
        is called at the end of this function.

    Returns
    -------
    challengers : Iterator[Configuration]
        An iterable consisting of configurations.
    """
    if n_points is None:
        n_points = self._challengers

    def next_configs_by_acquisition_value() -> list[Configuration]:
        assert n_points is not None
        # since maximize returns a tuple of acquisition value and configuration,
        # and we only need the configuration, we return the second element of the tuple
        # for each element in the list
        return [t[1] for t in self._maximize(previous_configs, n_points)]

    challengers = ChallengerList(
        self._configspace,
        next_configs_by_acquisition_value,
        random_design,
    )

    if random_design is not None:
        random_design.next_iteration()

    return challengers
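The key design point above is that `next_configs_by_acquisition_value` is a closure handed to `ChallengerList`, so the potentially expensive acquisition-function maximization is deferred until a challenger is actually requested. A stdlib-only sketch of that lazy pattern (the class and names here are illustrative, not SMAC's API):

```python
from itertools import islice

class LazyChallengers:
    """Defers calling an expensive factory until first iteration."""

    def __init__(self, factory):
        self._factory = factory  # called at most once
        self._cache = None

    def __iter__(self):
        if self._cache is None:
            self._cache = self._factory()
        return iter(self._cache)

calls = []

def expensive_sort():
    # Stands in for ranking configurations by acquisition value.
    calls.append(1)
    return ["cfg_a", "cfg_b", "cfg_c"]

challengers = LazyChallengers(expensive_sort)
assert calls == []  # nothing has been computed yet
first_two = list(islice(challengers, 2))  # triggers the factory exactly once
```

Iterating the object again reuses the cached list, so the factory still runs only once.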

check_kwarg #

check_kwarg(cls: type, kwarg_name: str) -> bool

Checks whether a given class accepts a specific keyword argument in its `__init__` method.

Parameters#
cls (type): The class to inspect.
kwarg_name (str): The name of the keyword argument to check.
Returns#
bool: True if the class's __init__ method accepts the keyword argument,
      otherwise False.
Source code in smac/acquisition/maximizer/differential_evolution.py
def check_kwarg(cls: type, kwarg_name: str) -> bool:
    """
    Checks if a given class accepts a specific keyword argument in its __init__ method.

    Parameters
    ----------
        cls (type): The class to inspect.
        kwarg_name (str): The name of the keyword argument to check.

    Returns
    -------
        bool: True if the class's __init__ method accepts the keyword argument,
              otherwise False.
    """
    # Get the signature of the class's __init__ method
    init_signature = inspect.signature(cls.__init__)  # type: ignore[misc]

    # Check if kwarg_name is present in the signature as a parameter with a default value
    for param in init_signature.parameters.values():
        if param.name == kwarg_name and param.default != inspect.Parameter.empty:
            return True  # It accepts the kwarg
    return False  # It does not accept the kwarg
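Note that the loop only returns `True` for parameters that have a default value, so a required positional argument is reported as not accepted. A self-contained reproduction of the helper demonstrating this behavior (the `Example` class is made up for illustration):

```python
import inspect

def check_kwarg(cls: type, kwarg_name: str) -> bool:
    # Same logic as the source above: a parameter only counts
    # if it carries a default value.
    init_signature = inspect.signature(cls.__init__)
    for param in init_signature.parameters.values():
        if param.name == kwarg_name and param.default != inspect.Parameter.empty:
            return True
    return False

class Example:
    def __init__(self, required, optional=42):
        pass

assert check_kwarg(Example, "optional") is True
assert check_kwarg(Example, "required") is False  # no default, so not counted
assert check_kwarg(Example, "missing") is False
```

This distinction matters when the caller probes whether it may safely pass an optional keyword (such as `seed`) without risking a `TypeError`.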