This article describes the architecture of AnyLogic Private Cloud, its components, and execution specifics.
- To learn more about Private Cloud in general, see AnyLogic Private Cloud.
- To learn how to install Private Cloud, see Installing Private Cloud.
- To get insights about adjustable configuration files, see Private Cloud: Configuration files.
The diagram below shows the Private Cloud components and their Docker containers, and how they interact with each other.
Each Private Cloud instance, regardless of the edition you are using, consists of the following components:
- controller — A principal component of Private Cloud. Stores configuration data of the whole instance in the JSON format, acts as the service discovery provider for the instance components, controls auxiliary services via SSH.
- frontend — Provides instance users with the web UI and acts as a reverse proxy for the rest component and for the public file storage of fileserver. Also provides means to visualize the SVG animations of running experiments.
- rest — A key service that provides an API for operations performed from the front-end side. It allows for uploading models from AnyLogic desktop applications and provides means to execute user-side commands (launching experiments, collecting data for graphical charts, and so on).
- fileserver — Serves as a storage for files. Among those are model files (sources, model and library JARs, additional files), miscellaneous files (user avatars), executor connectors (service files that provide means for executor components to communicate with a model), and so on.
- postgres — Runs two PostgreSQL databases. The first one stores meta-information (user accounts, model metadata, and so on), while the second stores model run data and statistics.
- statistics — Collects data about user sessions and experiment runs. It has an HTTP interface independent of databases, which other services use to interact with this component.
- experiment-results — Provides a way to request aggregated results of multi-run experiments, collect the data, and return the output per the user’s or web UI’s request.
- cassandra — A NoSQL database that stores input-output pairs (run configurations and run results) for all model versions in Private Cloud.
- rabbitmq — A messaging queue service, responsible for internal communication between AnyLogic Private Cloud components in the context of handling model run tasks.
- dynproxy — Acts as an HTTP reverse proxy for SVG animation requests addressed to the executor nodes.
- balancer — Distributes model run tasks between executor components. Depending on settings, can also start these components whenever the need arises.
- executor-multi-run — Controls the execution of multi-run experiments and generates inputs for them. Implicitly splits a multi-run experiment into standalone model runs, taking the experiment type into account (for example, by generating varied parameter inputs for parameter variation experiments).
- executor — Executes the model runs: simulation experiments, animations, and single-run fractions of multi-run experiments are all performed by the executor components.
One Private Cloud installation can have multiple executor components. To take advantage of that option, you need to have a corresponding edition of Private Cloud.
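To illustrate the splitting step described above, here is a minimal sketch of how a parameter variation experiment could be expanded into standalone run inputs. The function name and input format are hypothetical, for illustration only; they are not the actual executor-multi-run code.

```python
from itertools import product

def split_parameter_variation(param_ranges):
    """Expand a parameter variation experiment into standalone run inputs.

    param_ranges: dict mapping parameter name -> iterable of values to vary.
    Returns a list of dicts, one per single model run.
    """
    names = list(param_ranges)
    return [dict(zip(names, combo)) for combo in product(*param_ranges.values())]

# Two values for one parameter, three for another: 2 x 3 = 6 standalone runs.
runs = split_parameter_variation({"arrivalRate": [1.0, 2.0], "servers": [2, 3, 4]})
```

Each resulting dict then corresponds to one simulation task, which is tagged and dispatched independently.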
All internal Private Cloud tasks that involve the models’ execution have a “tag” (an application label) assigned to them by the rest and executor-multi-run components. The assignment procedure takes into account the logical group of the task.
As of now, experiment tasks are split into 3 groups:
- Simulation tasks — single model runs
- Animation tasks — calculations that are required to properly display model run animations on the front-end side
- Multi-run tasks — collections of single-run tasks
The balancer component reads this tag to identify which component should complete the task and splits the load accordingly: tasks tagged executor are passed to one of the executor components, tasks tagged executor-multi-run proceed to the executor-multi-run component, and so on.
When balancer receives a task, it requests from the controller component the list of nodes whose tag matches the received task. After that, balancer passes the task to a node in several steps:
1. balancer retrieves information about the capacity of the suitable nodes.
   Each executor can process a specific number of tasks simultaneously, which corresponds to the number of CPU cores on the node (the machine that runs the executor Docker container).
2. The nodes that are fully loaded are filtered out.
3. The remaining nodes are sorted by their current load in descending order.
4. balancer starts sending tasks to the first node in the list.
   If for some reason the node doesn’t accept the task, balancer attempts to send it to the second node in the list, and so on.
If none of the nodes accepts the task, it is added to a special RabbitMQ queue of "softly rejected" tasks. balancer checks this queue periodically and tries to relaunch the tasks waiting there.
The delay between balancer checks for softly rejected tasks is slightly greater than the delay between checks for new tasks.
- If a task gets rejected too many times, it is moved to another queue, which holds "hard rejected" tasks. The delay between checks is even greater there.
- After several unsuccessful attempts to pass a task from the "hard rejected" queue, the task is dropped.
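The dispatch procedure described above can be sketched in a few lines. This is a simplified model under stated assumptions: the class and function names are invented for illustration, and the real balancer works against RabbitMQ queues and live executor nodes rather than in-memory lists.

```python
class Node:
    """A simplified executor node with a fixed task capacity."""

    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity   # number of CPU cores on the machine
        self.running = 0           # tasks currently being processed

    @property
    def free_slots(self):
        return self.capacity - self.running

    def accept(self, task):
        # A node may still refuse a task, e.g. if it just filled up.
        if self.free_slots <= 0:
            return False
        self.running += 1
        return True

def dispatch(task, nodes, soft_queue, hard_queue, max_soft_rejects=3):
    """Try to place a task on a suitable node; queue it on rejection.

    Mirrors the steps above: filter out fully loaded nodes, sort the rest
    by current load in descending order, then try nodes one by one.
    """
    candidates = sorted(
        (n for n in nodes if n.free_slots > 0),
        key=lambda n: n.running,
        reverse=True,
    )
    for node in candidates:
        if node.accept(task):
            return node
    # No node accepted the task: it goes to the "softly rejected" queue,
    # which is re-checked with a longer delay; after too many rejections
    # it moves to the "hard rejected" queue and may eventually be dropped.
    task["rejects"] = task.get("rejects", 0) + 1
    (soft_queue if task["rejects"] <= max_soft_rejects else hard_queue).append(task)
    return None
```

Note that sorting by load in descending order concentrates tasks on the busiest non-full nodes first, leaving other nodes idle for as long as possible.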
Security-wise, it is assumed that the machines running Private Cloud components are located in a private network, and regular users of the instance are unable to access them.
For a regular user, the gateway to all interactions with Private Cloud is the machine running the frontend component. All API operations and user-level interactions pass through an NGINX proxy, controlled by frontend, so no other components should be visible or accessible to users.