
Introduction

FMIGo! is a set of Free Software tools aimed at the Functional Mockup Interface (FMI) standard. FMI is a standard for packaging simulations into .zip files called Functional Mockup Units (FMUs). Each FMU contains an XML file describing the system contained within, and a set of binaries, data and possibly source code.

What FMIGo! does is provide a backend for running FMUs across one or more computers, using TCP/IP (ZeroMQ) or MPI for communication. It also provides some tools for generating FMUs, tools for dealing with the backend's output data, and a tool for running one or more FMUs packaged into so-called SSPs (.zip files conforming to the System Structure and Parameterization of Components for Virtual System Design standard).

Why FMI?

One of the reasons FMI exists is so that different simulation authoring tools can output a single standardized format. For example, MSC ADAMS, Simulink and AGX Dynamics all support FMI output. This means FMUs produced by any of these tools can be executed by tools like PyFMI, DACCOSIM or FMIGo!.

It is possible to create FMUs outside the tools mentioned above, using for example QTronic's FMU SDK, which is licensed under the BSD license.

If co-simulating two or more FMUs, one must typically arrange for some kind of coupling between the FMUs. Not all FMI tools do this, instead relying on FMU authors having sufficient know-how.

Why SSP?

When dealing with FMUs and their parameters, it is convenient to package them into a ready-to-execute form. SSP provides a way to do such packaging. SSP is also useful for archiving simulations, preventing bit-rot.
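For reference, here is a rough sketch of what such a package typically looks like on disk. An SSP is a zip archive whose root contains a SystemStructure.ssd file (XML describing the components, connections and parameters); the FMU names below are just placeholders:

foo.ssp
  SystemStructure.ssd
  resources/
    fmu0.fmu
    fmu1.fmu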

SSP also provides a way to rename connectors to/from FMUs. This is very useful when dealing with FMUs coming from different teams or companies, who will often use different naming conventions and units on inputs and outputs.

Finally, SSP is extensible. Vendors can provide extra features in various ways, and provide XML schemas for these extensions. FMIGo! uses this property to make it possible to mark up 1-dimensional kinematic constraints between FMUs, and for providing program arguments to the execution backend.

What is FMIGo‽

As mentioned in the introduction, FMIGo! is a set of tools for dealing with the FMI standard. The main components are:

fmigo-mpi, fmigo-master, fmigo-server: the execution backend
ssp-launcher.py: Python script for parsing and launching SSPs
pygo: Python library for abstracting and connecting FMUs, and for dealing with output data from the execution backend
wrapper: a set of CMake and Python scripts plus C code for converting ModelExchange FMUs to Co-Simulation FMUs
cgsl: a small library simplifying how we deal with the GNU Scientific Library (GSL)

Execution backend (fmigo-*)

The execution backend consists of two sets of binaries: fmigo-mpi and fmigo-master/fmigo-server. fmigo-mpi is used when communication over MPI is desired; fmigo-master and fmigo-server are used when TCP/IP (ZeroMQ) communication is desired.

The backend has the following properties:

Using the execution backend

First off, a word of advice: if you only have a single FMU, you are probably better off using simpler tools such as PyFMI. The primary purpose of FMIGo! is to make it possible to connect two or more FMUs and have such combinations run with reasonable performance without numerically blowing up. With that said, on to the rest of this section:

In order for FMIGo! to be of much use, you must pick some method of coupling your simulations. For physical systems FMIGo! provides the SPOOK solver by Claude Lacoursière. Another option is to use the NEPCE method developed by Edo Drenth, which involves adding sinc² filters to FMU outputs and stiff springs+dampers to the relevant inputs. Some of that work can be automated using our ME→CS FMU wrapper tool. Special-purpose solvers, such as exponential integrators, may also be necessary; FMIGo! does not provide these beyond what GSL offers. On to the example:

You have two FMUs, fmu0.fmu and fmu1.fmu, that you wish to connect with a shaft constraint. By default, shaft constraints are holonomic, meaning the solver will try to keep both angles and angular velocities together. The solver (master) expects to be given references to angle outputs, angular velocity outputs, angular acceleration outputs, and torque inputs. It also expects to be able to request the partial derivative of angular acceleration with respect to torque (mobility, a.k.a. inverse mass or inverse moment of inertia). Finally, the FMUs must have save/restore functionality (fmi2GetFMUState and friends).

If fmu0.fmu has variables theta1, omega1, alpha1 and tau1, and fmu1.fmu has angle2, angularVelocity2, angularAcceleration2 and torque2, then the invocation is:

$ mpiexec -n 3 fmigo-mpi fmu0.fmu fmu1.fmu \
    -C shaft,0,1,theta1,omega1,alpha1,tau1,angle2,angularVelocity2,angularAcceleration2,torque2

The -n option to mpiexec must be the number of FMUs plus one, since the master runs in an MPI rank of its own.
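For example (with purely hypothetical filenames, and connections omitted to keep the sketch focused on the world size), three FMUs require an MPI world size of four:

$ mpiexec -n 4 fmigo-mpi fmu0.fmu fmu1.fmu fmu2.fmu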

Other kinematic constraints are also possible, such as lock constraints, ball constraints and multiway constraints. See the manual for more information about these, and other invocation details.

The output of the backend can be CSV (comma-separated values, the default), SSV (space-separated values) or Matlab .mat files. Column names are "fmu%i_%s" where %i is the FMU ID (zero-based) and %s is the name of the relevant output variable. Only the variables listed in <Outputs> in modelDescription.xml will end up in the output data. In the above example, some output column names might be fmu0_theta1 and fmu1_angle2.
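As a purely illustrative sketch of a resulting CSV header, assuming a leading time column and that the angle and angular velocity variables of both FMUs are listed in <Outputs> (the actual columns depend entirely on each FMU's modelDescription.xml):

t,fmu0_theta1,fmu0_omega1,fmu1_angle2,fmu1_angularVelocity2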

ssp-launcher.py

ssp-launcher.py is used for launching SSPs. It supports enough of the SSP standard for our purposes, plus our extensions listed in tools/ssp/FmiGo.xsd.

Using ssp-launcher.py

Ensure that the fmigo-* executables are in your $PATH, and invoke ssp-launcher.py on your SSP:

$ python ssp-launcher.py foo.ssp

Output format is CSV by default.
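A minimal end-to-end sketch, assuming the backend was installed under /opt/fmigo (an illustrative path) and that the CSV output arrives on standard output, so it can be captured with a redirect:

$ export PATH=/opt/fmigo/bin:$PATH
$ python ssp-launcher.py foo.ssp > results.csv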

pygo

pygo consists of some Python classes for abstracting and connecting FMUs (a bit like SSP), and code for converting the output of the backend to HDF5 format. Claude knows more.

wrapper

The wrapper converts ModelExchange FMUs to Co-Simulation FMUs by adding an ODE solver, partial derivatives and optional sinc² filters suitable for NEPCE coupling.

Example invocation, converting ME.fmu into CS.fmu in Release mode:

$ python wrapper.py -t Release ME.fmu CS.fmu

Invoke python wrapper.py --help for full help. The resulting FMUs are subject to the GNU General Public License version 3 (GPLv3).
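Once wrapped, the resulting FMU is an ordinary Co-Simulation FMU and can be run by the execution backend like any other. A minimal sketch, stepping just the wrapped FMU with all simulation options left at their defaults (remember that -n is the number of FMUs plus one):

$ python wrapper.py -t Release ME.fmu CS.fmu
$ mpiexec -n 2 fmigo-mpi CS.fmu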

cgsl

cgsl is a convenience library for our own use, but it may be of use to other people as well. Check out tools/cgsl/demo in the source code for an example.

Limitations

Overhead

There is some overhead between simulation steps due to message packing, communication, factoring matrices and computing which values go where. This overhead increases linearly with the number of FMUs, and is higher when using kinematic coupling (SPOOK) compared to weak coupling (such as NEPCE). This may be an issue for systems that need to run at 1 kHz or faster, such as robotics or other hardware-in-the-loop (HIL) systems.

On an Intel® Core™ i7-860 processor with 8 threads running at 2.8 GHz and a system with 7 FMUs we get the following overheads per step: 146 µs when using kinematic coupling, 54 µs when using weak coupling. Keep in mind that kinematic coupling allows the system to take much larger simulation time steps, which results in overall better performance for many systems.
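To put those numbers in perspective with some simple arithmetic: a 1 kHz system has a step budget of 1000 µs, so 146 µs of kinematic-coupling overhead already consumes roughly 15% of every step, and 54 µs of weak-coupling overhead roughly 5%, before the FMUs themselves have done any work.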

MPI world size / backend network shape

At the moment the size of the MPI world must be the number of FMUs plus one. This is because each server only serves a single FMU, and the master is its own node. The situation is similar when using TCP/IP (ZMQ) communication.

This MPI world / network shape increases overhead compared to using OpenMP or pthreads for communicating between FMUs running on the same CPU. Ideally the world size would match the number of CPUs actually required for running all FMUs plus the solver. Getting that right is somewhat complicated, which is why we've left it out for now.

Going to a federated system is perhaps an even better way to deal with this problem. This is something we have in mind for a potential continuation of the project.

Authoring tools

FMIGo! has very little in the form of authoring tools. There are some command-line tools to make FMU authoring a bit easier, but the process is still somewhat awkward. We felt that developing GUI tools was outside the scope of this project. There are commercial endeavours in this direction, especially in the context of SSP.

Security considerations

The FMIGo! tools assume that the underlying infrastructure can be trusted. Specifically, we do nothing to deal with the following issues:

In other words, if you intend to connect FMIGo! to the Internet, and possibly accept and execute FMUs from the wild, then you should jail and firewall the entire backend. The ZeroMQ control port is somewhat safe, since it is only used for pausing/unpausing simulations and retrieving results. This assumes both protobuf and ZeroMQ have been thoroughly tested and are immune to malicious input. As always, no warranty is provided by us if something goes horribly wrong on your end.

License

FMIGo! itself is licensed under the MIT license. The GNU Scientific Library (GSL) is licensed under the GNU General Public License version 3 (GPLv3), and is required for FMIGo! to be able to solve algebraic loops during initialization. The user therefore has the choice of two license options: enable loop solving (GPL, default) or disable loop solving (MIT). To build without GPL, you must give cmake the option -DUSE_GPL=OFF. GSL is also required for wrapping ME FMUs into CS FMUs.
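A minimal sketch of a GPL-free build, assuming an out-of-source build directory named build inside a checkout of the source tree (adapt paths to your own setup):

$ mkdir build && cd build
$ cmake -DUSE_GPL=OFF ..
$ make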

For future versions we may consider other license options. In order to guarantee that improvements to FMIGo! are never locked away behind a cloud we may opt for something that fills the gap between the GPL and the GNU Affero General Public License (AGPL). Our current reading of the AGPL is that it is too strict for our needs.

Source code

The source code is currently hosted at GitHub. Anonymous access is possible via Git over HTTPS:

$ git clone https://github.com/Tjoppen/fmigo.git

The code is built periodically via GitLab CI, for the following x86 platforms:

Windows is currently not supported due to lack of a suitable CI server. It may still work just fine; we simply don't guarantee it at present.

GSL (and thus loop solving) is currently disabled for 64-bit Windows builds. The reason for this is the lack of decent package managers on that platform. For now we rely on a manually built 32-bit GSL for Windows, a build we have not reproduced since doing so is too much of a headache.

Some users have successfully built the system on Arch Linux, and on Mac OS X.

We do not provide any official builds for download currently.

News

2023-11-02

Added support for Ubuntu 23.10 (Mantic Minotaur) and Debian 12 (Bookworm). Removed support for Ubuntu 22.10 (Kinetic Kudu) and Debian 9 (Stretch) since the official mirrors no longer list these releases, thus apt cannot download any packages for them. Minimum CMake version bumped to 2.8.12.

Windows builds have not been done for quite some time, so we cannot at present guarantee Windows support. One way to improve this is to cross-compile using mingw-w64 on Debian. There is still a need to test on Windows, which we lack the resources for at present.

2023-03-21

Added support for Ubuntu 22.10 (Kinetic Kudu). We're keeping on top of things, but expect support to be dropped in favor of the next LTS release when it comes out.

2022-07-01

Added support for Ubuntu 22.04 LTS (Jammy Jellyfish).

2022-02-12

Moved repository to GitHub.

2021-08-16

Added support for Debian 11 (Bullseye).

2021-03-07

Added support for Ubuntu 20.04 LTS (Focal Fossa). Dropped support for Ubuntu 14.04 LTS (Trusty Tahr) and Ubuntu 16.04 LTS (Xenial Xerus). Dropped support for python2.

2021-02-26

SSL issue fixed, GitLab upgraded. There is currently an issue with python2.7 preventing us from upgrading to Ubuntu 20.04. Hopefully it will be fixed in the coming weeks.

2021-02-09

We are currently having an SSL certificate issue. It is being looked at.

2019-10-19

Added support for Debian 10 (Buster).

2019-10-06

Ubuntu 14.04 LTS (Trusty Tahr) build fixed.

2019-07-19

Site was down for a few days due to server misconfiguration. Fixed now.

2019-03-13

Added support for Ubuntu 18.04 LTS (Bionic Beaver) and Debian 9 (Stretch).

2019-03-10

Final report added to references; it is available from the CS department, with a mirror also available.

2018-11-07

Domain fmigo.net registered, site published at http://www.fmigo.net/.

Added a subsection on MPI world size.

2018-11-02

First draft of the site published.

Contact

What?                           Who?                  E-mail
Math, pygo and Arch questions   Claude Lacoursière    claude at hpc2n.umu.se
Most questions about the code   Tomas Härdin          fmigo at haerdin.se

Links

References