
Calibration experiment

When you have your model structure in place, you may wish to tune some parameters of the model so that its behavior in particular conditions matches a known (historical) pattern. In case there are several parameters to tune it makes sense to use the built-in optimizer to search for the best combination. The objective in this case is to minimize the difference between the observed simulation output and historic data.

The calibration experiment supports two optimization engines:

  • Genetic is an optimization engine based on an evolutionary algorithm that aims to preserve the diversity of candidate solutions and to avoid getting stuck at suboptimal solutions. Instead of generating a single solution at each step, the engine generates a population of solutions, carries the best of them over to the next step, and so on, until the best possible solution is reached.
  • OptQuest is a proprietary optimization engine by OptTek Systems, Inc. that provides a general-purpose, “black-box” global optimization algorithm.

The calibration experiment uses the optimization engine to find the model parameter values for which the simulation output best fits the given data. The data may be in scalar or dataset form. Coefficients can be used to combine and balance multiple criteria. The calibration progress and the fitting of each criterion are displayed.

The difference between two sets of data is calculated with the help of the difference() function. The function returns a non-negative value of the double type: the root mean square error (RMSE) between the sets of data passed as arguments. Over the iterations of the calibration experiment, the optimizer obtains a series of such values; the minimum among them is the one we are looking for, since it corresponds to the smallest difference between the two sets of data.

AnyLogic implements two difference() functions. The difference(DataSet ds, TableFunction f) function accepts a dataset and a table function as arguments. The integration range is the intersection of the argument ranges of the dataset and the table function. The difference(DataSet ds1, DataSet ds2) function only accepts two datasets as arguments. The integration range is the intersection of the argument ranges of the two datasets. In both cases, the datasets are linearly interpolated.
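
For illustration, a weighted fit could be computed with these functions in a code field where they are available. This is a minimal sketch; the element names (myDataset, historicalPattern, salesDataset, observedSales) are hypothetical:

    // root.myDataset is a DataSet collected during the run,
    // root.historicalPattern is a TableFunction holding the observed data (hypothetical names).
    double fitToPattern = difference(root.myDataset, root.historicalPattern);

    // Comparing two datasets works the same way; smaller values mean a better fit.
    double fitToSales = difference(root.salesDataset, root.observedSales);

    // With multiple criteria, coefficients combine them into a single value to minimize.
    double objective = 0.7 * fitToPattern + 0.3 * fitToSales;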

You can control the calibration experiment with Java code. Refer to the Functions section for details.

Creating a calibration experiment

To create a calibration experiment

  1. In the Projects view, right-click (macOS: Ctrl + click) the model item and choose New > Experiment from the popup menu. The New Experiment dialog box is displayed.
  2. Choose the Calibration option in the Experiment Type list.
  3. Type the experiment name in the Name edit box.
  4. Choose the top-level agent of the experiment from the Top-level agent drop-down list.
  5. If you want to apply model time settings from another experiment, leave the Copy model time settings from check box selected and choose the experiment in the drop-down list to the right.
  6. Click Next to go to the Parameters and Criteria page of the wizard. Here you choose parameters the optimizer will be allowed to vary and the calibration criteria.
  7. All parameters of the top-level agent are listed in the Parameters table. Go to the row of the Parameters table containing the parameter you want to vary. Click the Type field and choose a parameter type other than fixed.
    Depending on the type of the parameter, the list of possible values may vary: discrete for integer parameters; continuous and discrete for double, and so on. Specify the range for the parameter. Enter the parameter’s lower bound in the Min field and the parameter’s upper bound in the Max field. For discrete parameters, specify the parameter step in the Step field.
  8. In the Criteria table below, specify calibration criteria. Each criterion is defined in an individual row of the table.
  9. Type the name of the criterion in the Title column.
  10. Choose the type of criterion in the Type cell. Two types of criteria are available: scalar for fitting single values and dataset to fit datasets.
  11. In the Expression field, specify the dataset or scalar value that will store the simulation output data. Use root here to refer to the top-level agent, for example, root.myDataset.
  12. In the Observed cell, specify the name of the dataset, or an expression that defines the data that will be used as a given (historical) pattern. You will be able to define the observed datasets on the next page.
  13. In the Coefficient column, specify coefficients to combine and balance multiple criteria.
  14. Click Finish.

The calibration progress and fitting of each criterion are displayed in the default UI.

You may also consider selecting a different optimization engine in the experiment’s properties.

Properties

General

Name — The name of the experiment.

Since AnyLogic generates a Java class for each experiment, please follow Java naming guidelines and start the name with an uppercase letter.

Ignore — If selected, the experiment is excluded from the model.

Top-level agent — Using the drop-down list, choose the top-level agent type for the experiment. The agent of this type will play the role of the root of the hierarchical tree of agents in your model.

Optimization engine — The optimization engine used by the experiment: OptQuest or Genetic.

Objective — The objective function you want to minimize or maximize. The top-level agent is accessible here as root.

Number of iterations — If selected, calibration stops when the number of simulations exceeds the maximum specified in the field to the right.

The Genetic optimization engine does not support an infinite number of iterations.

Automatic stop — If selected, calibration stops when the value of the objective function stops improving significantly (this option is known as optimization autostop).
If the Genetic optimization engine is used, the Automatic stop property is hidden: it is always on due to the specifics of the engine.

Maximum available memory — The maximum size of the Java heap allocated for the model.

Create default UI — The button creates the default UI for the experiment.

Do not click this button unless necessary: it will delete the experiment UI created by the wizard and create the default UI for the calibration experiment, which may not correspond to your task.
If you modify any properties of the experiment, it is recommended to re-create the default UI so that it reflects the changes you have made.
Parameters

Parameters — Defines the set of optimization parameters (also known as decision variables). The table lists all the parameters of the top-level agent.
To make a parameter a decision variable, click in the Type field and choose the type of the optimization parameter other than fixed.
Depending on the type of the parameter, the list of possible values may vary: discrete for int, continuous and discrete for double, and so on.
Specify the range for the parameter. Enter the parameter’s lower bound in the Min field and the parameter’s upper bound in the Max field. For discrete parameters, specify the increment value in the Step field.

Model time

Stop — Defines whether the model will Stop at specified time, Stop at specified date, or it will Never stop. In the first two cases, the stop time is specified using the Stop time/Stop date controls.

Start time — The initial time for the simulation time horizon.

Start date — The initial calendar date for the simulation time horizon.

Stop time — [Enabled if Stop is set to Stop at specified time] The final time for the simulation time horizon (the number of model time units for the model to run before it will be stopped).

Stop date — [Enabled if Stop is set to Stop at specified date] The final calendar date for the simulation time horizon.

Additional optimization stop conditions — Here you can define any number of additional optimization stop conditions. When any of these conditions evaluates to true, the optimization stops. A condition can include checks of dataset mean confidence, variable values, and so on. The top-level agent of the experiment can be accessed here as root, so if you want, for example, to stop the optimization when the variable var of the experiment’s top-level agent crosses a threshold, type root.var > 11.
To make the condition active, select the checkbox in the corresponding row of the table.

Constraints

Defines the constraints — additional restrictions imposed on the optimization parameters.

Constraints on simulation parameters (are tested before a simulation run) — The table that defines the optimization constraints. A constraint is a condition defined upon optimization parameters: it defines a range for an optimization parameter. Each time the optimization engine generates a new set of values for the optimization parameters, it creates feasible solutions that satisfy this constraint; thus the search space is reduced and the optimization is performed faster.
A constraint is a well-formed arithmetic expression describing a relationship between the optimization parameters. It always defines a limitation by specifying a lower or an upper bound, for example, parameter1 >= 10. Constraints are calculated before the model run and instantiation of the top-level agent, so they cannot involve any of the top-level agent’s contents.
Each constraint is defined in an individual row of the table and can be disabled by deselecting the corresponding checkbox in the first column.

Requirements

Defines the requirements — additional restrictions imposed on the solutions found by the optimization engine.

Requirements (are tested after a simulation run to determine whether the solution is feasible) — The table defining the optimization requirements. A requirement is an additional restriction imposed on the solution found by the optimization engine. Requirements are checked at the end of each simulation, and if they are not met, the current parameter values are rejected.
A requirement can also be a restriction on a response that requires its value to fall within a specified range. It may contain any variables, parameters, functions, etc. of the experiment’s top-level agent accessible in the expression field as root.
Each requirement is defined in an individual row of the table and can be disabled by deselecting the corresponding checkbox in the first column.

Randomness

Random number generator — Here you specify whether you want to initialize the random number generator for this model randomly or with some fixed seed. This matters for stochastic models. With a random seed, model runs cannot be reproduced, since the model random number generator is initialized with a different value for each run. With a fixed seed, the generator is initialized with the same value for each run, so the model runs are reproducible. Moreover, here you can substitute the AnyLogic default RNG with your own RNG.

  • Random seed (unique simulation runs) — If selected, the seed value of the random number generator is chosen randomly. In this case, the random number generator is initialized with a different value for each model run, and the model runs are unique (non-reproducible).
  • Fixed seed (reproducible simulation runs) — If selected, the seed value of the random number generator is fixed (specify it in the Seed value field). In this case, the random number generator is initialized with the same value for each model run, and the model runs are reproducible.
  • Custom generator (subclass of Random) — If for any reason you are not satisfied with the quality of the default random number generator Random, you can substitute it with your own one. Just prepare your custom RNG (it should be a subclass of the Java class Random, for example, MyRandom), choose this particular option, and type the expression returning an instance of your RNG in the field on the right, for example, new MyRandom() or new MyRandom( 1234 ).
    You can find more information in Custom number generator.
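
A minimal sketch of such a custom generator (the class body below is illustrative and simply delegates to the parent implementation):

    import java.util.Random;

    // A custom RNG must be a subclass of java.util.Random.
    public class MyRandom extends Random {
        public MyRandom() {
            super();
        }
        public MyRandom(long seed) {
            super(seed);
        }
        // Override next(int bits) or the nextDouble()/nextInt() methods here
        // to plug in a different underlying algorithm.
    }

With such a class in the model, the field on the right would contain new MyRandom() or new MyRandom(1234), as described above.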
Replications

Use replications — If selected, the optimization engine will run several replications per simulation. You need this when your model is stochastic. In that case, the results of the simulation runs are unique, and the values of the objective function obtained for runs executed with the same optimization parameter values will most likely differ from each other. We cannot execute only one simulation run, accept its result as the current iteration result, and proceed with the optimization by checking other parameter values. To obtain reliable, representative data, we need to execute several runs (called “replications” here) for a single set of parameter values and accept the mean of all replication results as the value of the objective.

Fixed number of replications — If selected, a fixed number of replications will be run per simulation.

  • Replications per iteration — [enabled if Fixed number of replications is set] The fixed number of replications, which will be run per each simulation.

Varying number of replications (stop replications after minimum replications, when confidence level is reached) — If selected, a varying number of replications will be run per simulation. In this mode you specify the minimum and the maximum number of replications to be run. The OptQuest engine always runs the minimum number of replications for a solution and then determines whether more replications are needed. The OptQuest engine stops evaluating a solution when one of the following occurs:

  • The true objective value is within a given percentage of the mean of the replications to date.
  • The current replication objective value is not converging.
  • The maximum number of replications has been run.
The Genetic optimization engine does not support Varying number of replications.

For this property, the following options are available:

  • Minimum replications — [enabled if Varying number of replications is set] The minimum number of replications the OptQuest engine will always run per one simulation.
  • Maximum replications — [enabled if Varying number of replications is set] The maximum number of replications the OptQuest engine can run per one simulation.
  • Confidence level — [enabled if Varying number of replications is set] The confidence level to be evaluated for the objective. The confidence level is the probability that a random result falls within the confidence interval; you can think of it as the sample’s accuracy. As a rule, 95% is used, but in cases where high accuracy is not needed, you can use 90% or even 85%. Conversely, the larger the sample, the higher the accuracy that can be established. The confidence interval should be seen as a measure of inaccuracy: it defines the range on both sides of the selected point within which the results may fall.
  • Error percent — [enabled if Varying number of replications is set] A value from 0 to 1 that defines the size of the confidence interval which is sufficient to stop executing additional replications for the current iteration. The interval is calculated as (current mean value - current mean value * error percent, current mean value + current mean value * error percent).
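
As an illustration of this stopping rule (the variable names below are ours, not AnyLogic API):

    // Replications for the current iteration stop early once the objective's
    // confidence interval (at the chosen confidence level) fits inside the band
    // defined by the error percent.
    double mean = currentMeanObjective;          // mean objective over replications so far
    double lower = mean - mean * errorPercent;
    double upper = mean + mean * errorPercent;
    boolean stopReplications =
        confidenceIntervalLow >= lower && confidenceIntervalHigh <= upper;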
Window

Window properties define the appearance of the model window that will be shown when the user starts the experiment.

The size of the experiment window is defined using the model frame and applies to all experiments and agent types of the model.

Title — The title of the model window.

Enable zoom and panning — If selected, the user will be allowed to pan and zoom the model window.

Enable developer panel — Select/clear the checkbox to enable/disable the developer panel in the model window.

Show developer panel on start — [Enabled only if the Enable developer panel checkbox is selected] If selected, the developer panel will be shown by default in the model window every time you run the experiment.

Java actions

Initial experiment setup — The code that is executed on the experiment setup.

Before each experiment run — The code that is executed before each simulation run.

Before simulation run — The code that is executed before the simulation run. This code is run on the setup of the model. At this moment the top-level agent of the model is already created, but the model is not started yet. You may perform here some actions with elements of the top-level agent, e.g., assign actual parameter values, as shown below.
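
A minimal sketch of such an action (arrivalRate and useWarmUp are hypothetical parameters of the top-level agent):

    // The top-level agent already exists at this point and is accessible as root.
    root.arrivalRate = 4.5;
    root.useWarmUp = true;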

After simulation run — The code that is executed after the simulation run. This code is executed when the simulation engine finishes the model execution (the Engine.finished() function is called). This code is not executed when you stop your model by clicking the Terminate execution button.

After iteration — The code that is executed after the iteration run.

After experiment — The code that is executed after the experiment run.

Advanced Java

Imports section — Import statements needed for the correct compilation of the experiment class code. When Java code is generated, these statements are inserted before the definition of the Java class.

Additional class code — Arbitrary member variables, nested classes, constants, and methods are defined here. This code will be inserted into the experiment class definition. You can access these class data members anywhere within this experiment.

Java machine arguments — Specify here the Java machine arguments you want to apply on launching your model. You can find detailed information on possible arguments on the Java Sun Microsystems website: http://java.sun.com/j2se/1.5.0/docs/tooldocs/windows/java.html.

Command-line arguments — Here you can specify command-line arguments you want to pass to your model. You can get the values of the passed arguments using the String[] getCommandLineArguments() method from any code field of your choice. The only exception is the values of static variables, since these are initialized before the experiment class itself.
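
A sketch of reading such arguments (the argument name and handling below are illustrative):

    // E.g. the model is launched with the arguments: -scenario baseline
    String[] args = getCommandLineArguments();
    for (int i = 0; i < args.length - 1; i++) {
        if (args[i].equals("-scenario")) {
            traceln("Running scenario: " + args[i + 1]);
        }
    }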

Advanced

Allow parallel evaluations — If the option is selected and the processor has several cores, AnyLogic will run several experiment iterations in parallel on different processor cores. This increases performance several-fold and completes the experiment significantly faster. This feature is made controllable because, in some rare cases, parallel evaluations may affect the optimizer strategy in such a way that more iterations are required to find the optimal solution.
Do not use static variables, collections, table functions, or custom distributions (check that their advanced option Static is deselected) if you turn on parallel evaluations here.

Load top-level agent from snapshot — If selected, the experiment will load the model state from the snapshot file specified in the control to the right. The experiment will be started from the time when the model state was saved.

Functions

You can use the following functions to control the experiment, retrieve data on its execution status, and use them as a framework for creating a custom experiment UI.

Controlling execution
Function Description
void run() Starts the experiment execution from the current state. If the model does not exist yet, the function resets the experiment, creates, and starts the model.
void pause() Pauses the experiment execution.
void step() Performs one step of experiment execution. If the model does not exist yet, the function resets the experiment, creates, and starts the model.
void stop() Terminates the experiment execution.
void close()

This function returns immediately and performs the following actions in a separate thread:

  • Stops experiment if it is not stopped,
  • Destroys the model,
  • Closes the experiment window (only if the model is started in the application mode).
Experiment.State getState() Returns the current state of the experiment: IDLE, PAUSED, RUNNING, FINISHED, ERROR, or PLEASE_WAIT.
double getRunTimeSeconds() Returns the duration of the experiment execution in seconds, excluding pause times.
int getRunCount() Returns the number of the current simulation run, i.e., the number of times the model was destroyed.
double getProgress() Returns the progress of the experiment: a number between 0 and 1 corresponding to the currently completed part of the experiment (a proportion of completed iterations of the total number of iterations), or -1 if the progress cannot be calculated.
int getParallelEvaluatorsCount() Returns the number of parallel evaluators used in this experiment. On multicore / multiprocessor systems that allow parallel execution, this number may be greater than 1.
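For example, the action code of a button in a custom experiment UI could use these functions to toggle execution. This is a minimal sketch, not part of the generated UI:

    // Run the experiment if it is idle or paused, pause it otherwise.
    if (getState() == Experiment.State.RUNNING) {
        pause();
    } else {
        run();
    }
    // A progress indicator could be driven by getProgress(), e.g.:
    // String.format("%.0f%%", getProgress() * 100)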
Objective
Function Description
double getCurrentObjectiveValue() Returns the value of the objective function for the current solution.
double getBestObjectiveValue() Returns the value of the objective function for the optimal currently found solution.
The solution may be infeasible. To check the solution feasibility, call the isBestSolutionFeasible() function.
double getSelectedNthBestObjectiveValue() Returns the objective value for the Nth best solution identified by the selectNthBestSolution(int) function.
The Genetic optimization engine does not support this function.
Solution
Function Description
boolean isBestSolutionFeasible() Returns true if the optimal solution satisfies all constraints and requirements; returns false otherwise.
boolean isCurrentSolutionBest() Returns true if the solution is currently the optimal one; returns false otherwise.
boolean isCurrentSolutionFeasible() Returns true if the current solution satisfies all constraints and requirements; returns false otherwise.
boolean isSelectedNthBestSolutionFeasible() Returns true if the Nth best solution satisfies all constraints and requirements; returns false otherwise.
The Genetic optimization engine does not support this function.
void selectNthBestSolution (int bestSolutionIndex) This function locates the Nth best solution and sets up the data for subsequent function calls that retrieve specific pieces of information (for example, for the getSelectedNthBestObjectiveValue() and getSelectedNthBestParamValue(COptQuestVariable) functions).

bestSolutionIndex — the rank of the solution (passing 1 locates the best solution, 2 the second best, and so on).
The Genetic optimization engine does not support this function.
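For example, with the OptQuest engine the top solutions could be inspected in the After experiment action. A sketch; the number of solutions printed here is illustrative:

    // Print the objective value and feasibility of the three best solutions.
    for (int n = 1; n <= 3; n++) {
        selectNthBestSolution(n);
        traceln("Solution #" + n
            + ": objective = " + getSelectedNthBestObjectiveValue()
            + ", feasible = " + isSelectedNthBestSolutionFeasible());
    }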
Optimization parameters
Function Description
double getCurrentParamValue (COptQuestVariable optimizationParameterVariable) Returns the value of the given optimization parameter variable for the current solution.
The Genetic optimization engine does not support this function.
double getBestParamValue (COptQuestVariable optimizationParameterVariable) Returns the value of the given optimization parameter variable for the optimal currently found solution.
The solution may be infeasible. To check the solution feasibility, call the isBestSolutionFeasible() function.
The Genetic optimization engine does not support this function.
double getSelectedNthBestParamValue (COptQuestVariable optimizationParameterVariable) Returns the value of the variable for the Nth best solution identified by calling the selectNthBestSolution(int) function.
The Genetic optimization engine does not support this function.
Iterations
Function Description
int getCurrentIteration() Returns the current value of the iteration counter.
int getBestIteration() Returns the iteration that resulted in the optimal currently found solution.
The solution may be infeasible. To check the solution feasibility, call the isBestSolutionFeasible() function.
int getMaximumIterations() Returns the total number of iterations.
int getNumberOfCompletedIterations() Returns the number of completed iterations.
int getSelectedNthBestIteration() Returns the iteration number for the Nth best solution identified by the selectNthBestSolution(int) function.
The Genetic optimization engine does not support this function.
Replications

Before calling the replication-related functions of the calibration experiment, you may need to ensure that replications are used (call the isUseReplications() function).

Function Description
boolean isUseReplications() Returns true if the experiment uses replications; returns false otherwise.
int getCurrentReplication() Returns the number of replications run so far for the currently evaluated solution.
int getBestReplicationsNumber() Returns the number of replications that were run to get the optimal solution.
The solution may be infeasible. To check the solution feasibility, call the isBestSolutionFeasible() function.
int getSelectedNthBestReplicationsNumber() Returns the number of replications for the Nth best solution identified by the function selectNthBestSolution(int).
The Genetic optimization engine does not support this function.
Accessing the model
Function Description
Engine getEngine() Returns the engine executing the model. To access the model’s top-level agent (typically, Main), call getEngine().getRoot();
IExperimentHost getExperimentHost() Returns the experiment host object of the model, or some dummy object without functionality if the host object does not exist.
Restoring the model state from the snapshot
Function Description
void setLoadRootFromSnapshot(String snapshotFileName) Tells the simulation experiment to load the top-level agent from the AnyLogic snapshot file. This function is only available in AnyLogic Professional.

snapshotFileName — the name of the AnyLogic snapshot file, for example: "C:\My Model.als"
boolean isLoadRootFromSnapshot() Returns true if the experiment is configured to start the simulation from the state loaded from the snapshot file; returns false otherwise.
String getSnapshotFileName() Returns the name of the snapshot file, from which this experiment is configured to start the simulation.
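For example, the experiment could be pointed to a previously saved state in its Initial experiment setup action (the file path below is illustrative; this is available in AnyLogic Professional only):

    // Start the simulations from a previously saved model state.
    setLoadRootFromSnapshot("C:\\Snapshots\\My Model.als"); // note the escaped backslashes in Java code
    if (isLoadRootFromSnapshot()) {
        traceln("Starting from snapshot: " + getSnapshotFileName());
    }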
Error handling
Function Description
RuntimeException error(Throwable cause, String errorText) Signals an error during the model run by throwing a RuntimeException with errorText preceded by the agent’s full name. This function never returns; it always throws the runtime exception itself. The return type is defined for the cases when you would like to use the following form of call: throw error(<my message>);

cause — the cause (which will be saved for a more detailed message), may be null.
errorText — the text describing the error that will be displayed.
RuntimeException errorInModel(Throwable cause, String errorText) Signals a model logic error during the model run by throwing a ModelException with the specified error text preceded by the agent’s full name. This function never returns; it always throws the runtime exception itself. The return type is defined for the cases when you would like to use the following form of call: throw errorInModel(<my message>);. This function differs from error() in the way the error message is displayed: model logic errors are “softer” than other errors; they tend to happen in models and signal to the model developer that the model might need some parameter adjustments.
Examples: “agent was unable to leave flowchart block because subsequent block was busy”, “insufficient capacity of pallet rack”, etc.

cause — the cause (which will be saved for a more detailed message), may be null.
errorText — the text describing the error that will be displayed.
void onError(Throwable error) This function may be overridden to perform custom handling of the errors that occur during the model execution (i.e., errors in the action code of events, dynamic events, transitions, entry/exit codes of states, formulas, etc.). By default, this function does nothing, as its definition is empty. To override it, add a function to the experiment, name it onError, and define a single argument of the java.lang.Throwable type for it.

error — an error that has occurred during event execution.
void onError(Throwable error, Agent root) Similar to onError(Throwable error) function except that it provides one more argument to access the top-level (root) agent of the model.

error — an error that has occurred during event execution.
root — the top-level (root) agent of the model. Useful for experiments with multiple runs executed in parallel. May be null in some cases (e.g. on errors during top-level agent creation).
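A minimal sketch of such an override, added to the experiment as a function named onError with a single Throwable argument (the logging below is illustrative):

    // Custom handling of errors that occur during model execution.
    void onError(Throwable error) {
        traceln("Model run failed: " + error.getMessage());
        // e.g. record the failed parameter combination and let the calibration continue
    }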
Command-line arguments
Function Description
String[] getCommandLineArguments() Returns an array of command-line arguments passed to this experiment on model start. Never returns null: if no arguments are passed, an empty array is returned.
Cannot be called from within a value of a static variable: these are initialized before the experiment class itself.