And finally, the code

Assuming you have already downloaded the code described at the beginning of the chapter, let's now take a look at what's happening. To start, open the TestParallelMetaBenchmarks project and then the main.cs file; this is the file we will be working with for the following code.

First, we need to create some very important variables that will serve as settings for the optimization layer. Each one is commented so that you know what it is for, as follows:

// Set this close to 50 and a multiple of the number of processors, e.g. 8.
static readonly int NumRuns = 64;
// The total dimensions.
static readonly int Dim = 5;
// The dimension factor.
static readonly int DimFactor = 2000;
// The total number of times we will loop to determine optimal parameters.
static readonly int NumIterations = DimFactor * Dim;

Next, we are going to create our optimizer. There are several optimizers included with SwarmOps, but for our purposes we will use the MOL optimizer. MOL stands for Many Optimizing Liaisons, and it was devised as a simplification of the original Particle Swarm Optimization method of Kennedy and Eberhart [1][2]. The Many Optimizing Liaisons method has no attraction to a particle's own best-known position, and the algorithm randomly selects which particle to update instead of iterating over the entire swarm. It is similar to the Social Only Particle Swarm Optimization suggested by Kennedy [3] and was studied more thoroughly by Pedersen et al [4], who found that it can outperform standard Particle Swarm Optimization and has more easily tunable control parameters. Whew, that was a mouthful, wasn't it?

// The optimizer whose control parameters are to be tuned.
static Optimizer Optimizer = new MOL();
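To make those two differences concrete, here is an illustrative sketch of a single MOL update step. This is not the SwarmOps implementation; the array names and the control parameters omega (inertia weight) and phiG (attraction to the swarm's best) are assumptions for illustration only:

```csharp
// Illustrative sketch of one MOL step -- not the SwarmOps implementation.
// Unlike standard PSO, MOL updates one randomly chosen particle per step
// and uses only attraction to the swarm's best-known position g;
// there is no personal-best term in the velocity update.
static void MolStep(double[][] x, double[][] v, double[] g,
                    double omega, double phiG, Random rand)
{
    int i = rand.Next(x.Length);              // pick one random particle
    for (int d = 0; d < g.Length; d++)
    {
        v[i][d] = omega * v[i][d]
                + phiG * rand.NextDouble() * (g[d] - x[i][d]);
        x[i][d] += v[i][d];                   // move the particle
    }
}
```

Because there are only two control parameters to tune (omega and phiG), the meta-optimization search space stays small, which is exactly what makes MOL attractive for this example.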

Next come the problem (or problems) that we want the optimizer tuned against. You can have one or multiple problems solved at the same time, but it is often easier to solve one optimization tuning problem at a time.

The optimizer has its control parameters tuned to work well on the included problems. The numbers are weights that signify the mutual importance of the problems in tuning; the higher the weight, the more important the problem, as shown in the following code:

static WeightedProblem[] WeightedProblems = new WeightedProblem[]
{
new WeightedProblem(1.0, new Sphere(Dim, NumIterations)),
};
Next, we have our settings for the meta-optimization layer:

static readonly int MetaNumRuns = 5;
static readonly int MetaDim = Optimizer.Dimensionality;
static readonly int MetaDimFactor = 20;
static readonly int MetaNumIterations = MetaDimFactor * MetaDim;

The meta-fitness aspect consists of computing optimization performance for the problems we listed over several optimization runs and summing the results. For ease of use, we wrap the optimizer in a MetaFitness object which takes care of this for us, as follows:

static SwarmOps.Optimizers.Parallel.MetaFitness MetaFitness = new SwarmOps.Optimizers.Parallel.MetaFitness(Optimizer, WeightedProblems, NumRuns, MetaNumIterations);
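Conceptually, the meta-fitness computation reduces to the following illustrative sketch. This is not the SwarmOps MetaFitness class (which adds refinements such as running the inner optimization runs in parallel); the Optimize call and the Weight and Fitness members are assumed names used for illustration:

```csharp
// Illustrative sketch only -- not the SwarmOps MetaFitness class.
// A candidate set of control parameters is scored by running the
// optimizer NumRuns times on each problem and summing the weighted
// best fitnesses; lower sums mean better-performing parameters.
static double MetaFitnessSketch(double[] controlParameters)
{
    double sum = 0.0;
    foreach (var wp in WeightedProblems)         // each tuning problem
    {
        for (int run = 0; run < NumRuns; run++)  // repeat for robustness
        {
            // Assumed call: run the optimizer with the candidate
            // parameters and take the best fitness it achieved.
            var result = Optimizer.Optimize(controlParameters);
            sum += wp.Weight * result.Fitness;
        }
    }
    return sum;
}
```

This is why NumRuns matters: a single run of a stochastic optimizer is noisy, so the meta-optimizer needs many repeated runs to judge a parameter set fairly.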

Now we need to create our meta-optimizer object, as shown in the following snippet. For this, we will use the Local Unimodal Sampling (LUS) optimizer, originally created by Pedersen. This optimizer performs local sampling with an exponential decrease of the sampling range. It works well for many optimization problems, especially when only short runs are used or allowed, and it is particularly well-suited as the overlaying meta-optimizer when tuning parameters for another optimizer:

static Optimizer MetaOptimizer = new LUS(LogSolutions);
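The core idea behind LUS can be sketched in a few lines. Again, this is an illustrative sketch and not the SwarmOps implementation; the decrease factor q and the other names are assumptions for illustration:

```csharp
// Illustrative sketch of the LUS idea -- not the SwarmOps implementation.
// Sample a new point within a range d around the current best x; keep it
// if it improves, otherwise shrink the range exponentially by factor q.
static double[] LusSketch(Func<double[], double> f, double[] x0,
                          double d, double q, int maxEvals, Random rand)
{
    double[] x = (double[])x0.Clone();
    double fx = f(x);
    for (int eval = 1; eval < maxEvals; eval++)
    {
        var y = new double[x.Length];
        for (int j = 0; j < x.Length; j++)
            y[j] = x[j] + d * (2.0 * rand.NextDouble() - 1.0); // in [x-d, x+d]
        double fy = f(y);
        if (fy < fx) { x = y; fx = fy; }   // accept the improvement
        else d *= q;                       // shrink sampling range, 0 < q < 1
    }
    return x;
}
```

The exponentially shrinking range is what makes LUS effective on short runs: it starts with broad exploration and quickly narrows in on a local optimum, which suits the expensive meta-fitness evaluations here.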

Finally, we will wrap the meta-optimizer in a Statistics object to log our results. We then repeat a number of meta-optimization runs using the MetaRepeat object, shown as follows:

static readonly bool StatisticsOnlyFeasible = true;
static Statistics Statistics = new Statistics(MetaOptimizer, StatisticsOnlyFeasible);
static Repeat MetaRepeat = new RepeatMin(Statistics, MetaNumRuns);
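The remainder of the program kicks off the meta-optimization and reports the results. As a rough sketch of how these pieces are typically wired together (the member names used here, such as MetaRepeat.Fitness, DefaultParameters, Statistics.Compute, and Statistics.BestParameters, are assumptions about the SwarmOps API, so defer to the downloaded project for the actual calls):

```csharp
// Hedged sketch only -- member names are assumptions; see the
// downloaded TestParallelMetaBenchmarks project for the actual code.
static void Main(string[] args)
{
    // Assumed: start the LUS meta-optimizer from its default control
    // parameters; each candidate MOL parameter set is scored through
    // the MetaFitness object wired up above.
    double fitness = MetaRepeat.Fitness(MetaOptimizer.DefaultParameters);

    // Assumed: summarize the logged runs and print the best MOL
    // control parameters that were found.
    Statistics.Compute();
    Console.WriteLine("Best control parameters found: " +
        string.Join(", ", Statistics.BestParameters));
}
```

Whatever the exact calls, the flow is the same: the meta-optimizer searches the MOL control-parameter space, the MetaFitness object scores each candidate, and the Statistics wrapper records the best result across the MetaNumRuns repetitions.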