AnyLogic

Reproducible model runs

To make model runs reproducible:

  1. In the Projects view, select the experiment.
  2. In the Properties view, open the Randomness section.
  3. Select the option Fixed seed (reproducible model runs).
  4. Specify the seed value of the random number generator in the Seed value field to the right.
Note that any structural change to your model affects the random number stream. For example, if you add a block to the graphical editor, that block will draw numbers from the random number generator during initialization, even if it remains disconnected from the flowchart.
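Under the hood, a fixed seed relies on the standard behavior of Java's `java.util.Random`: the same seed always produces the same sequence of draws. A minimal standalone sketch (plain Java, outside AnyLogic):

```java
import java.util.Random;

public class SeedDemo {
    public static void main(String[] args) {
        // Two generators created with the same seed produce identical sequences.
        Random r1 = new Random(42);
        Random r2 = new Random(42);
        for (int i = 0; i < 5; i++) {
            int a = r1.nextInt(100);
            int b = r2.nextInt(100);
            System.out.println(a + " == " + b + " : " + (a == b));
        }
    }
}
```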

If you haven't made any such changes in the model and run it with the fixed seed of the random number generator, but the results are still not reproducible, check your model against the list below:

For all experiments:

  • The model should not contain HashMap or HashSet collections, whose iteration order is not guaranteed. Use LinkedHashMap and LinkedHashSet instead.
  • Do not use System.currentTimeMillis(), new Date(), or similar functions that return the current system time.
  • Results are reproducible only after a complete model restart (the model window is closed); running multiple experiments in a row (Start -> Stop -> Start -> …) produces different results. This means each model run leaves "garbage" in user data. For instance, static variables are shared by all model runs: if one iteration changes a value, the next iteration uses the modified value.
  • If the model uses external data sources, make sure the input data is not changed while the model is running; otherwise a new iteration will use the modified input data.
  • Do not use your own instances of the Random class or functions that rely on a separate random source (e.g. Collections.shuffle() or Math.random()). Use the getDefaultRandomNumberGenerator() function to access the model's random number stream.
  • If conversion from date to model time is used, the model start/stop dates should be fixed.
  • Dynamic properties of shapes should not contain any functions that change the model state.
  • Custom parallel threads (if any) must be correctly synchronized and ordered.
  • The object.hashCode() and System.identityHashCode() functions should not be called in the model, since their values can differ between JVM runs.
  • Do not use the == operator to compare texts (String variables and parameters); use the equals() function instead.
  • If the model is launched on different computers, and you cannot obtain the same results, ensure that the computers have the same regional settings (locales) set for their operating systems. The experiment results can differ if e.g. your model contains a schedule and the locales have different Daylight Saving Time rules.
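To illustrate the collection point above: LinkedHashMap iterates its entries in insertion order, independently of hash codes, whereas a plain HashMap's order depends on hash codes and, for objects using the default identity-based hashCode(), can differ from run to run. A small plain-Java sketch:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.Map;

public class IterationOrderDemo {
    public static void main(String[] args) {
        // Insertion order is preserved, so every run traverses the
        // entries in exactly this sequence.
        Map<String, Integer> agents = new LinkedHashMap<>();
        agents.put("truck", 1);
        agents.put("customer", 2);
        agents.put("order", 3);
        System.out.println(new ArrayList<>(agents.keySet()));
        // With a HashMap keyed by objects that rely on identity hashCode(),
        // the traversal order could change between runs and break
        // reproducibility.
    }
}
```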
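The "garbage in user data" point can be reproduced with a static field: statics live for the lifetime of the JVM, so a second run started in the same process sees the value left over from the first. A hypothetical sketch (the names are illustrative, not AnyLogic API):

```java
public class StaticLeakDemo {
    // A static field is shared by every model "run" in the same JVM.
    static int leftover = 0;

    static int runModel() {
        leftover += 5;   // an iteration mutates shared state
        return leftover;
    }

    public static void main(String[] args) {
        System.out.println("first run:  " + runModel()); // 5
        System.out.println("second run: " + runModel()); // 10, not 5 again
    }
}
```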
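And the string-comparison point: == compares object references, which depend on how the strings were created, while equals() compares the actual characters:

```java
public class StringCompareDemo {
    public static void main(String[] args) {
        String a = "Source";
        String b = new String("Source"); // same characters, different object
        System.out.println(a == b);      // false: different references
        System.out.println(a.equals(b)); // true: same text
    }
}
```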

For experiments with multiple iterations:

  • The experiment should not contain static variables or fields that are changed from within an iteration.
  • The random number generator in a custom experiment should be reset or reinitialized before each new model run.
  • An optimization experiment with parallel evaluations may give different results each time: a new set of parameter values is formed based on previous solutions, and the number of solutions available at a given moment varies with the execution speed of each iteration. Disable parallel evaluations to get the same result each time.
  • If the results of an experiment with multiple iterations do not match the results of the Simulation experiment, check the model start/stop time, random seed, selection mode for simultaneous events, and other engine settings.
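Resetting the generator before each run means every run draws an identical sequence. A sketch with plain java.util.Random (a stand-in for however your custom experiment holds its generator; in AnyLogic you would reseed the engine's generator instead):

```java
import java.util.Arrays;
import java.util.Random;

public class ResetBetweenRunsDemo {
    static final long SEED = 1234L;  // assumed fixed experiment seed
    static final Random RNG = new Random();

    static int[] runModel() {
        RNG.setSeed(SEED);           // reinitialize before each run
        int[] draws = new int[3];
        for (int i = 0; i < draws.length; i++) {
            draws[i] = RNG.nextInt(1000);
        }
        return draws;
    }

    public static void main(String[] args) {
        // Because the seed is reset, both runs print the same draws.
        System.out.println(Arrays.toString(runModel()));
        System.out.println(Arrays.toString(runModel()));
    }
}
```

Without the setSeed() call, the second run would continue the first run's sequence and produce different draws.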