
Private Cloud: Architecture

This article describes the architecture of AnyLogic Private Cloud, its components, and execution specifics.

Architecture: Diagram

Below is a diagram of the Private Cloud components and Docker containers, showing how they interact with each other.

Components overview

Each Private Cloud instance, regardless of the edition you are using, consists of the following components:

controller
A principal component of Private Cloud. Stores the configuration data of the whole instance in JSON format. Acts as the service discovery provider for the instance components. Controls auxiliary services via SSH.
Used ports:
  • 22 — The controller component connects to all other components through this port to start them and work with their files.
  • 9000 — Service discovery, configuration, and orchestration of containers.
  • 8443 — Interaction with Team License Server.

frontend
Provides instance users with the web UI and acts as a reverse proxy for the rest component and as the storage for public files of fileserver. Also provides means to visualize the SVG animations of running experiments.
Used ports:
  • 80 and 443 — The web interface of the Cloud instance.

rest
A key service that provides an API for operations performed from the front-end side. It allows for uploading models from AnyLogic desktop applications and provides means to execute user-side commands (launching experiments, collecting data for graphical charts, and so on).
Used ports:
  • 9101 — Exposes the REST API to users, the frontend component, and the AnyLogic desktop installation, thus allowing for managing models, experiments, and other instance entities.

fileserver
Serves as a storage for files: model files (sources, model and library JARs, additional files), miscellaneous files (user avatars), executor connectors (service files that provide means for executor components to communicate with a model), and so on.
Used ports:
  • 9050 — Submitting files and resources.

postgres
Runs 2 PostgreSQL databases. The first one stores meta-information (user accounts, model metadata, and so on), while the second stores model run data and statistics.
Used ports:
  • 5432 — Submitting the instance metadata (user profiles, models, versions, and so on).

statistics
Collects data about user sessions and experiment runs. It has an HTTP interface independent of the databases, which other services use to interact with this component.
Used ports:
  • 9103 — Submitting performance and run statistics to be stored in its internal database.

experiment-results
Provides means for requesting the aggregated results of multi-run experiments, collecting the data, and producing output per the user’s or web UI’s request.
Used ports:
  • 9102 — Submitting multi-run experiment results to the cassandra database.

cassandra
A NoSQL database for input-output pairs of all model versions in Private Cloud; stores run results and run configurations.
Used ports:
  • 9042 — Submitting information about experiment run results.

rabbitmq
A message queue service responsible for internal communication between AnyLogic Private Cloud components in the context of handling model run tasks.
Used ports:
  • 5672 — Transferring the tasks being currently processed between other service components.

dynproxy
Acts as an HTTP reverse proxy for SVG animation requests addressed to the executor nodes.
Used ports:
  • 9080 — Transferring sessions between the frontend service component and the executed model.

balancer
Distributes model run tasks between executor components. Depending on the settings, it can also start these components whenever the need arises.
Used ports:
  • 9200 — Submitting tasks to other service components in the Cloud instance.

executor-multi-run
Controls the execution of multi-run experiments and generates inputs for them. Implicitly splits multi-run experiments into standalone model runs, taking the experiment type into account (for example, by generating varied parameter inputs for parameter variation experiments).
Used ports:
  • 9202 — Splitting multi-run experiments into single-run tasks, then submitting them to the balancer service component.

executor
Executes the model runs: simulation experiments, animations, and single-run fractions of multi-run experiments are all performed by executor components. One Private Cloud installation can have multiple executor components. To take advantage of that option, you need the corresponding edition of Private Cloud.
Used ports:
  • 9201 — Accepting model run tasks and performing single-run experiments.

registry
The Docker registry for the containers of Cloud service components. The controller component uses registry to start other components.
Used ports:
  • 5000 — Interaction between containers.
In addition to the ports listed above, ports 80 and 443 (used by the HTTP and HTTPS protocols, respectively) should be open to users so they can access the Cloud instance.
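
For reference, here is a minimal Python sketch (not part of Private Cloud itself) that encodes the component-to-port mapping from the table above and checks, from a machine inside the private network, whether each port accepts TCP connections. The host name is a placeholder, and running all components on a single host is an assumption made only to keep the example short.

```python
# A connectivity sketch for the ports listed above; run it from inside the private network.
import socket

COMPONENT_PORTS = {
    "controller": [22, 9000, 8443],
    "frontend": [80, 443],
    "rest": [9101],
    "fileserver": [9050],
    "postgres": [5432],
    "statistics": [9103],
    "experiment-results": [9102],
    "cassandra": [9042],
    "rabbitmq": [5672],
    "dynproxy": [9080],
    "balancer": [9200],
    "executor-multi-run": [9202],
    "executor": [9201],
    "registry": [5000],
}

def port_is_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

host = "cloud.example.internal"   # placeholder: the machine hosting the components
for component, ports in COMPONENT_PORTS.items():
    for port in ports:
        state = "open" if port_is_open(host, port) else "closed or filtered"
        print(f"{component:>18}  {port:>5}  {state}")
```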

How Cloud runs models

All internal Private Cloud tasks that involve model execution have a “tag” (an application label) assigned to them by the rest and executor-multi-run components. The assignment procedure takes the logical group of the task into account.

As of now, experiment tasks are split into three groups:

  • Simulation tasks: single model runs
  • Animation tasks: calculations required to properly display model run animations on the front-end side
  • Multi-run tasks: collections of single-run tasks

The balancer component reads this tag to identify which component should complete the task and distributes the load accordingly. For example, a task tagged executor is passed to one of the executor components, a task tagged executor-multi-run goes to the executor-multi-run component, and so on.
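
As a simplified illustration of this tag-based routing (not the actual balancer code), the sketch below assigns tags to the task groups described above and routes tasks by that tag. The group names and the Task structure are assumptions made for this example; only the tag values mirror the component names used in this article.

```python
# A simplified model of tag assignment and tag-based routing.
from dataclasses import dataclass

# Logical task groups and the tag (application label) each one receives.
GROUP_TO_TAG = {
    "simulation": "executor",            # a single model run
    "animation": "executor",             # calculations behind a model run animation
    "multi-run": "executor-multi-run",   # a collection of single-run tasks
}

@dataclass
class Task:
    task_id: str
    group: str   # logical group, known to rest / executor-multi-run
    tag: str = ""

def assign_tag(task: Task) -> Task:
    """What rest or executor-multi-run does: label the task with its target component."""
    task.tag = GROUP_TO_TAG[task.group]
    return task

def route(task: Task) -> str:
    """What balancer does: read the tag to decide which component group gets the task."""
    return task.tag

task = assign_tag(Task(task_id="42", group="animation"))
print(route(task))   # -> executor
```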

When balancer receives a task, it requests from the controller component the list of nodes that have the same tag as the task. After that, balancer passes the task to a node in several steps (see the sketch after this list):

  1. balancer retrieves information about the capacity of the suitable nodes.
    Each executor has a specific number of tasks it can process simultaneously, which corresponds to the number of CPU cores on the node, that is, the machine that runs the executor Docker container.
  2. The nodes that are fully loaded are filtered out.
  3. The remaining nodes are sorted by their current load in descending order.
  4. balancer starts sending tasks to the first node in the list.
    • If for some reason the node doesn’t accept the task, balancer will attempt to send it to the second node in the list, and so on.
    • If none of the nodes accepts the task, it is added to a special RabbitMQ queue of “softly rejected” tasks. balancer checks this queue periodically and tries to relaunch the tasks waiting there.
      The delay between balancer checks of the softly rejected queue is slightly greater than the delay between its checks for new tasks.
    • If a task is rejected too many times, it is moved to another queue of “hard rejected” tasks, where the delay between checks is even greater.
    • After several unsuccessful attempts to pass the task from the “hard rejected” queue, the task will be dropped.
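
The sketch below expresses these dispatch steps in Python. It is not the balancer’s real implementation: the Node fields, queue objects, and retry limit are illustrative assumptions.

```python
# A sketch of the dispatch steps above; capacities and limits are assumed values.
from collections import deque
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    capacity: int   # tasks it can process simultaneously (= CPU cores on the node)
    load: int = 0   # tasks currently being processed

    def accept(self, task) -> bool:
        """Try to hand the task to the node; a real node may refuse for other reasons too."""
        if self.load < self.capacity:
            self.load += 1
            return True
        return False

MAX_SOFT_REJECTS = 5      # assumed limit before a task moves to the "hard rejected" queue

soft_rejected = deque()   # checked with a slightly longer delay than the new-task queue
hard_rejected = deque()   # checked even less often; tasks here are eventually dropped

def dispatch(task, nodes, rejects: int = 0) -> bool:
    # Steps 1-2: keep only the suitable nodes that still have free capacity.
    candidates = [n for n in nodes if n.load < n.capacity]
    # Step 3: sort the remaining nodes by current load in descending order.
    candidates.sort(key=lambda n: n.load, reverse=True)
    # Step 4: offer the task to the nodes one by one until one of them accepts it.
    for node in candidates:
        if node.accept(task):
            return True
    # Nobody accepted the task: park it in one of the rejection queues.
    if rejects + 1 >= MAX_SOFT_REJECTS:
        hard_rejected.append(task)   # after repeated failures here, the task is dropped
    else:
        soft_rejected.append(task)
    return False

# Example: two executor nodes, one of them already full.
nodes = [Node("executor-1", capacity=4, load=4), Node("executor-2", capacity=8, load=3)]
print(dispatch({"id": "run-42"}, nodes))   # -> True, accepted by executor-2
```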

Private Cloud security

Security-wise, it is assumed that the machines running Private Cloud components are located in a private network and that regular users of the instance cannot access them directly.

For a regular user, the gateway to all interactions with Private Cloud is the machine running the frontend component. All API operations and user-level interactions pass through an NGINX proxy controlled by frontend, so no other components should be visible or accessible to users.
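
One informal way to sanity-check this setup (a sketch under assumptions, not an official tool) is to probe the instance from outside the private network and confirm that only the frontend’s HTTP/HTTPS ports answer, while internal service ports stay unreachable. The host name and the sampled internal ports below are assumptions for the example.

```python
# Run from OUTSIDE the private network: only ports 80/443 (frontend) should respond.
import socket

PUBLIC_PORTS = [80, 443]                          # served by frontend (NGINX)
INTERNAL_PORTS = [9101, 9050, 5432, 9042, 5672]   # rest, fileserver, postgres, cassandra, rabbitmq

def reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

host = "cloud.example.com"   # placeholder: the public address of the frontend machine
for port in PUBLIC_PORTS:
    print(f"public   {port:>5}: {'reachable' if reachable(host, port) else 'NOT reachable, check frontend'}")
for port in INTERNAL_PORTS:
    print(f"internal {port:>5}: {'EXPOSED, restrict it' if reachable(host, port) else 'hidden (expected)'}")
```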
