AnyLogic

Reproducible model runs

To make model runs reproducible:

  1. In the Projects view, select the experiment.
  2. In the Properties view, open the Randomness section.
  3. Select the Fixed seed (reproducible model runs) option.
  4. Specify the seed value for the random number generator in the Seed value field on the right.
Demo model: Reproducible Experiment with Stochastic Process Model. Open the model page in AnyLogic Cloud, where you can run the model or download it (by clicking Model source files), or open the model in your AnyLogic desktop installation.
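The effect of a fixed seed can be sketched in plain Java, which AnyLogic models compile to (the class and method names below are illustrative, not part of the AnyLogic API):

```java
import java.util.Arrays;
import java.util.Random;

public class FixedSeedDemo {
    // With a fixed seed, the generator produces the same sequence on every run.
    static long[] draw(long seed, int n) {
        Random rng = new Random(seed);
        long[] out = new long[n];
        for (int i = 0; i < n; i++) out[i] = rng.nextLong();
        return out;
    }

    public static void main(String[] args) {
        // Two runs with the same seed are identical...
        System.out.println(Arrays.equals(draw(42, 5), draw(42, 5))); // true
        // ...while a different (e.g. time-based) seed gives a different sequence.
        System.out.println(Arrays.equals(draw(42, 5), draw(43, 5))); // false
    }
}
```

This is exactly what the Fixed seed option does for the model's random number stream: every run starts the generator from the same state.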
Any changes you make to your model will affect the random number generator. For example, if you add a block to the graphical editor, that block will use the numbers from the random number generator when it is initialized, even if it is not connected to the flowchart.

If you have not made any such changes, yet the results are still not reproducible when you run the model with the Fixed seed of the random number generator, check your model against the lists below.

For all experiments:

  • If your model contains transporters with free space navigation or pedestrians, AnyLogic uses multithreaded calculations to improve the model performance. You should disable multithreading by setting the Number of processors for parallel execution runtime preference to 1.
  • The model should not contain HashMap or HashSet collections, whose iteration order is not guaranteed. Use LinkedHashMap and LinkedHashSet instead: they iterate in insertion order.
  • Do not use System.currentTimeMillis(), new Date(), or similar functions that return the current system time.
  • Results are reproducible only after a complete model restart (the model window is closed); multiple experiments run in a row (Start -> Stop -> Start -> …) produce different results. This happens because each model run leaves “garbage” in user data. For instance, static variables are shared by all model runs: if one run changes a value, the next run starts with the changed value.
  • If the model reads external data sources, make sure the input data does not change while the model is running: a new iteration would use the modified input data.
  • Do not use your own Random instances or functions that rely on their own random streams (for example, Collections.shuffle() or Math.random()). Use the getDefaultRandomNumberGenerator() function to access the model random number stream instead.
  • If conversion from date to model time is used, the model start/stop date should be fixed.
  • Dynamic properties of shapes should not contain any functions that change anything in the model.
  • Model logic should not depend on the properties of animation shapes (position, size, color, and so on).
  • User-defined parallel threads (if any) should be properly synchronized and ordered.
  • The object.hashCode() and System.identityHashCode() functions should not be called in the model.
  • Do not use the == operator to compare texts (String variables and parameters); use the equals() function instead.
  • If the model is launched on different computers and you do not get the same results, ensure that the computers have the same regional settings (locales) set for their operating systems. For example, if your model contains a schedule and the locales have different Daylight Saving Time rules, the results of the experiment may differ.
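The collection-ordering point above can be illustrated in plain Java (class and key names are made up for the example). LinkedHashMap guarantees iteration in insertion order, so any model logic that walks the collection behaves the same on every run:

```java
import java.util.*;

public class OrderDemo {
    // LinkedHashMap iterates keys in the order they were inserted;
    // a plain HashMap makes no such guarantee.
    static List<String> keyOrder(Map<String, Integer> m) {
        m.put("truck", 1);
        m.put("agent", 2);
        m.put("dock", 3);
        return new ArrayList<>(m.keySet());
    }

    public static void main(String[] args) {
        System.out.println(keyOrder(new LinkedHashMap<>())); // [truck, agent, dock]
    }
}
```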
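The “garbage in user data” point can be sketched as follows (a toy example, not AnyLogic code): a static field survives between runs in the same JVM, so the second run does not start from a clean state.

```java
public class StaticLeakDemo {
    // Static state is shared by all model runs in the same JVM session.
    static int runsSoFar = 0;

    // Imagine this being incremented somewhere during a Start -> Stop cycle.
    static int simulateRun() {
        runsSoFar++;        // "garbage" left behind for the next run
        return runsSoFar;   // the second run sees the first run's value
    }

    public static void main(String[] args) {
        System.out.println(simulateRun()); // 1
        System.out.println(simulateRun()); // 2 -- not a fresh start
    }
}
```

Closing the model window restarts the JVM-side model state, which is why only a complete restart is reproducible.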
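The random-stream point can also be shown outside AnyLogic. In a model you would pass the generator returned by getDefaultRandomNumberGenerator(); the plain-Java sketch below (illustrative names) uses an explicitly seeded Random to make shuffling reproducible, instead of Collections.shuffle(list), which draws from its own unseeded source:

```java
import java.util.*;

public class SharedRngDemo {
    static List<Integer> shuffled(long seed) {
        List<Integer> items = new ArrayList<>(List.of(1, 2, 3, 4, 5));
        // Pass a seeded generator explicitly; the one-argument
        // Collections.shuffle(items) would use its own random stream.
        Collections.shuffle(items, new Random(seed));
        return items;
    }

    public static void main(String[] args) {
        // Same seed -> same permutation on every run.
        System.out.println(shuffled(7).equals(shuffled(7))); // true
    }
}
```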
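The string-comparison rule is standard Java behavior: == compares object identity, which can differ between runs and JVMs, while equals() compares character content. A minimal sketch:

```java
public class StringCompareDemo {
    // == compares references; equals() compares the characters.
    static boolean sameByIdentity(String a, String b) { return a == b; }
    static boolean sameByContent(String a, String b)  { return a.equals(b); }

    public static void main(String[] args) {
        String a = new String("busy");
        String b = new String("busy");
        System.out.println(sameByIdentity(a, b)); // false -- distinct objects
        System.out.println(sameByContent(a, b));  // true  -- same text
    }
}
```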

For experiments with multiple iterations:

  • The experiment should not contain static variables or fields that are changed from within an iteration.
  • The random number generator in a custom experiment should be reset or reinitialized before each new model run.
  • When copying parameter values (for example, from the console or using the Copy best button in the default UI of the optimization and similar experiments) and pasting them into the experiment code, make sure that the numbers retain maximum precision. Ideally, models should not be overly sensitive to small variations in decimal places, but such sensitivity can occasionally occur.
    When using the Copy best button of the optimization and similar experiments, the format() function is applied to the copied value, which may truncate double values. To prevent this, remove the function from the Copy best button action.
  • An optimization experiment with parallel evaluations may produce different results each time: a new set of parameter values is formed based on previous solutions, and the number of solutions available at any given moment may vary due to the different execution speeds of the iterations. Disable parallel evaluations to get the same result each time.
  • If the results of an experiment with multiple iterations do not match the results of the Simulation experiment, check the model start/stop time, random seed, and other engine settings.
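The precision point about copied parameter values can be demonstrated in plain Java (the class name and sample value are made up): formatting a double for display can drop significant digits, so pasting the displayed text back yields a slightly different parameter value.

```java
public class PrecisionDemo {
    // Format a value for display, then parse the displayed text back,
    // as happens when copying parameters from a console or UI.
    static double roundTrip(double value, String fmt) {
        String shown = String.format(java.util.Locale.US, fmt, value);
        return Double.parseDouble(shown);
    }

    public static void main(String[] args) {
        double best = 0.123456789012345;
        // A fixed-point format truncates the value...
        System.out.println(roundTrip(best, "%.3f") == best);  // false -- digits lost
        // ...while enough significant digits survive the round trip.
        System.out.println(roundTrip(best, "%.17g") == best); // true
    }
}
```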