Particle Swarm Optimization with Penalty Function

This post introduces particle swarm optimization with a penalty function, covering an overview of the algorithm, its core operations, and a summary of its advantages and disadvantages.

This post is also published on my CSDN blog: https://blog.csdn.net/weixin_45766278/article/details/129244334



1. Overview of the algorithm

Particle swarm optimization (PSO) with a penalty function is a commonly used optimization algorithm, mainly applied to constrained optimization problems.

In traditional particle swarm optimization, particles move through the search space to find the best solution. In practical problems, however, there are usually constraints, such as bounds on the variables or inequality constraints on functions. These constraints may make the search space discontinuous or even partly infeasible. Particle swarm optimization with a penalty function therefore adds a penalty term to the objective function that punishes particles violating the constraints, steering them away from the infeasible region and gradually toward the feasible region during the search.

Simply put, the penalty function folds the constraints into the objective function of the particle swarm algorithm, so that the standard algorithm can be used to find the optimal solution under the constraints.



2. Objective function and update formulas

Specifically, the objective function of the particle swarm optimization with penalty function can be expressed as:

f(x) + p(x)

Where x is the variable vector to be optimized, f(x) is the original objective function, and p(x) is the penalty function, which punishes particles that violate the constraints in proportion to the degree of violation. The penalty function can take many forms, such as a linear, quadratic, or exponential penalty; which one to choose depends on the characteristics of the actual problem.
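As a sketch, the linear and quadratic penalty forms for inequality constraints g_i(x) <= 0 might look like the following (the function names and the penalty weight rho are illustrative choices, not part of the post):

```python
# Illustrative penalty functions for constraints of the form g(x) <= 0.
# "rho" is an assumed penalty weight; in practice it must be tuned.

def violations(x, constraints):
    """Amount by which each constraint g(x) <= 0 is violated (0 if satisfied)."""
    return [max(0.0, g(x)) for g in constraints]

def linear_penalty(x, constraints, rho=100.0):
    # Penalty grows linearly with the total violation.
    return rho * sum(violations(x, constraints))

def quadratic_penalty(x, constraints, rho=100.0):
    # Penalty grows quadratically, punishing large violations more heavily.
    return rho * sum(v ** 2 for v in violations(x, constraints))

# Example constraint: x[0] + x[1] - 1 <= 0
g = [lambda x: x[0] + x[1] - 1.0]
```

A feasible point such as (0.2, 0.3) incurs zero penalty under either form, while an infeasible point is penalized in proportion to (or the square of) its violation.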

In particle swarm optimization with a penalty function, each particle must check whether its current position satisfies the constraints when updating its position and velocity. If it does not, the particle is punished via the penalty function to discourage it from entering the infeasible region. The update formulas are as follows:

Speed update: v[i][j] = w * v[i][j] + c1 * rand() * (pbest[i][j] - x[i][j]) + c2 * rand() * (gbest[j] - x[i][j])

Location update: x[i][j] = x[i][j] + v[i][j]

(The above two formulas are the standard update formulas of the particle swarm optimization algorithm)

Here, v[i][j] is the velocity of the ith particle in the jth dimension, x[i][j] is the position of the ith particle in the jth dimension, pbest[i][j] is the historical best position of the ith particle, gbest[j] is the global best position, w, c1, and c2 are the inertia weight, individual learning factor, and global learning factor respectively, and rand() returns a random number in [0, 1].
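The two update formulas above can be sketched for a single particle as follows (pure-Python lists; the default values for w, c1, and c2 are illustrative, not from the post):

```python
import random

def update_particle(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """Apply the velocity and position updates dimension by dimension.
    w, c1, c2 are assumed default values and should be tuned per problem."""
    for j in range(len(x)):
        # Speed update: inertia + cognitive pull + social pull
        v[j] = (w * v[j]
                + c1 * random.random() * (pbest[j] - x[j])
                + c2 * random.random() * (gbest[j] - x[j]))
        # Location update
        x[j] = x[j] + v[j]
    return x, v
```

When pbest and gbest coincide with the current position, the random attraction terms vanish and only the inertia term w * v remains, which is a quick way to sanity-check the implementation.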

When updating the position, if a variable in some dimension falls outside its allowed range, it needs to be corrected.

Specifically, if the lower bound is exceeded, the variable is reset to the lower bound plus a random number times the difference between the upper and lower bounds; if the upper bound is exceeded, it is reset to the upper bound minus a random number times that difference.

At the same time, for particles that violate the constraints, a penalty value is computed from the penalty function and added to the particle's objective function value, so that constraint violations are reflected in the fitness ranking.

if (x[i][j] < lb[j]) or (x[i][j] > ub[j]):
    x[i][j] = lb[j] + (ub[j] - lb[j]) * rand()

p[i] = f(x[i]) + P(x[i])

(The above pseudocode shows the main form of the bound repair and the penalty computation)

Note that lb[j] and ub[j] are the lower and upper bounds of the variable in the jth dimension, p[i] is the penalized fitness of the ith particle, and P is the penalty function.
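Putting the pieces together, a minimal end-to-end sketch of the method might look like this. Everything here (function names, particle count, iteration count, coefficient values, and the toy problem) is an illustrative assumption rather than a reference implementation:

```python
import random

def penalized_pso(f, penalty, dim, lb, ub, n_particles=20, iters=100,
                  w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal PSO minimizing f(x) + penalty(x).
    All parameter defaults are assumed values for illustration."""
    rng = random.Random(seed)
    fit = lambda x: f(x) + penalty(x)  # penalized objective
    # Initialize positions uniformly inside the bounds, velocities at zero
    X = [[lb[j] + (ub[j] - lb[j]) * rng.random() for j in range(dim)]
         for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                       # personal best positions
    pf = [fit(x) for x in P]                    # personal best fitness
    g = min(range(n_particles), key=lambda i: pf[i])
    gbest, gf = P[g][:], pf[g]                  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for j in range(dim):
                # Standard velocity and position updates
                V[i][j] = (w * V[i][j]
                           + c1 * rng.random() * (P[i][j] - X[i][j])
                           + c2 * rng.random() * (gbest[j] - X[i][j]))
                X[i][j] += V[i][j]
                # Bound repair: resample uniformly inside [lb, ub]
                if X[i][j] < lb[j] or X[i][j] > ub[j]:
                    X[i][j] = lb[j] + (ub[j] - lb[j]) * rng.random()
            fi = fit(X[i])
            if fi < pf[i]:
                P[i], pf[i] = X[i][:], fi
                if fi < gf:
                    gbest, gf = X[i][:], fi
    return gbest, gf

# Toy problem: minimize (x - 2)^2 subject to x <= 1,
# enforced through a quadratic penalty with an assumed weight of 1000.
f = lambda x: (x[0] - 2.0) ** 2
pen = lambda x: 1000.0 * max(0.0, x[0] - 1.0) ** 2
best, val = penalized_pso(f, pen, dim=1, lb=[-5.0], ub=[5.0])
```

The unconstrained minimum at x = 2 is infeasible, so the penalized search should settle near the constraint boundary x = 1, where the original objective value is about 1.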



3. Summary of advantages and disadvantages

The advantage of particle swarm optimization with a penalty function is that it can handle constrained optimization problems while retaining PSO's good global search ability and convergence performance.

However, an appropriate penalty function must be selected for the specific problem, and in practical applications attention should be paid to the penalty weight and the penalty function's parameters to keep the algorithm from falling into a local optimum.





Yitao Li