OpenSplice DDS has two different architectural structures that can be used at deployment time. The first is the single-process architecture, where one or more DDS applications and the related services are grouped into a single operating system process. The other is the federated shared memory architecture, where the DDS administration, the services and the DDS applications all interface directly with shared memory.
When using the shared memory architecture it can be useful to know how much memory is being used, along with other detailed statistics. Memory Statistics, known as MMstat in earlier versions of OpenSplice, is a command line tool which can be used to display valuable information about the shared memory statistics of an OpenSplice domain. When a shared memory OpenSplice process is running, launch Memory Statistics from the OpenSplice Launcher to see current information relating to this memory.
MMstat
Information is represented in bytes; in this example there are 8,978,536 bytes (8.978536 MB) of memory available.
When launched from the command line, several mmstat parameters can be configured; see Starting OpenSplice Tools from the Command Line for more information about launching mmstat from the command line.
Starting Memory Statistics from Command Line
Memory statistics can be started using the command:
$ mmstat
MMstat arguments
Usage:
mmstat -h
mmstat [-M|m] [-e] [-a] [-i interval] [-s sample_count] [URI]
mmstat [-t|T] [-i interval] [-s sample_count] [-l limit] [-o C|S|T] [-n nrEntries] [-f filter_expression] [URI]

Show the memory statistics of the OpenSplice system identified by the specified URI. If no URI is specified, the environment variable OSPL_URI will be used. If that environment variable is not set either, the default domain will be selected. The default display interval is 3 seconds.
Mode:
-m Show memory statistics (default mode)
-M Show memory statistics difference
-t Show meta object references
-T Show meta object references difference
Options:
-h Show this help
-e Extended mode, shows bar for allocated memory
-a Show pre-allocated memory as well.
-i <interval> Display interval (in milliseconds)
-s <sample_count> Stop after sample_count samples
-l <limit> Show only object count >= limit
-o <C|S|T> Order by object[C]ount/object[S]ize/[T]otalSize
-n <nrEntries> Display only the top nrEntries items (useful only in combination with ordering)
-f <filter_expression> Show only meta objects whose name passes the filter expression
Use 'q' to terminate the monitor
Use 't' to immediately show statistics
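The -M and -T modes display the difference between successive samples rather than absolute values. As a conceptual illustration only (this is not OpenSplice code, and the type names and numbers below are made up), the following Python sketch computes such a difference view from two snapshots of per-type statistics:

```python
def diff_snapshot(previous, current):
    """Return per-type deltas between two {type_name: (count, size)} snapshots,
    mirroring what mmstat's -M/-T difference modes show conceptually."""
    deltas = {}
    for name, (count, size) in current.items():
        prev_count, prev_size = previous.get(name, (0, 0))
        deltas[name] = (count - prev_count, size - prev_size)
    return deltas

# Two hypothetical samples, taken one display interval apart.
before = {"v_message<MyTopic>": (100, 4800)}
after = {"v_message<MyTopic>": (130, 6240)}
print(diff_snapshot(before, after))  # 30 new objects, 1440 extra bytes
```

A difference view like this makes steady growth stand out immediately, which is why the -M/-T modes are useful when hunting leaks.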
The output of mmstat can be put to good use for analysing the behaviour of the system and its shared memory usage. However, it reveals a lot of internal information, and there are no guarantees of consistent naming of types between OpenSplice versions. There is also no guaranteed relation to the actual number of entities (e.g. due to caching, deferred garbage collection, or lazy cleanup). Nonetheless, given the proper constraints, the following information should help in gaining a more in-depth understanding of the types visible in output from a fairly recent version of OpenSplice.
View statistics relating to readers and writers for a specific topic.
- Writers are listed as “MAP<v_writerInstance<v_writerSample<TopicName>>,keyExpression>”
- Readers are listed as “MAP<v_indexKeyInstance<v_indexSample<TopicName>,key>,keyExpression>”
View the total number of registered instances on all writers.
- This cannot be determined exactly from the output, but the object count of “v_writerInstance<v_writerSample<TopicName>>” reflects the total number of registered instances on all writers in the monitored shared memory.
Check if all the samples are released from the shared memory.
- The actual payload of a message (which is available only once in shared memory) is the “v_message<TopicType>”. The different stores that can contain this sample (e.g. readers, writers, persistent store) will have a bit of associated administration (v_indexSample<TopicName>, v_writerSample<TopicName> etc.). When there are absolutely no allocated samples for a given topic, the count for “v_message<TopicType>” should be zero. However, as mentioned above, this is difficult to analyse. Typically it is easier to detect trends than to track memory for single messages.
Check if there is data waiting to be destroyed by the DDS.
- This cannot be deduced from the mmstat output. The ‘reusable’ column in the default output of mmstat is a measure of the amount of memory that has been freed and is available again for reuse (with some restrictions).
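When scripting around such output, the topic name can be extracted from the reader and writer type names shown above. A minimal sketch; the patterns are derived from the listings in this section, but the exact rendering of mmstat output may vary between versions, so treat them as assumptions:

```python
import re

# Patterns based on the reader/writer type names documented above;
# real output may differ per OpenSplice version.
WRITER_RE = re.compile(r"MAP<v_writerInstance<v_writerSample<(?P<topic>[^>]+)>>")
READER_RE = re.compile(r"MAP<v_indexKeyInstance<v_indexSample<(?P<topic>[^>]+)>")

def topic_of(type_name):
    """Return (kind, topic) for a reader/writer type name, or None otherwise."""
    for kind, pattern in (("writer", WRITER_RE), ("reader", READER_RE)):
        match = pattern.search(type_name)
        if match:
            return kind, match.group("topic")
    return None

print(topic_of("MAP<v_writerInstance<v_writerSample<MyTopic>>,keyExpression>"))
# → ('writer', 'MyTopic')
```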
In order to track memory issues mmstat can be very handy, particularly when using filter expressions. If the suspect scenario is repeated and an ever-increasing number of objects is observed, that can be an indication of a memory leak. Often, however, mmstat can also reveal that the ‘leak’ is due to a programming error (e.g. the history of persistent instances is too large, a reader is only reading and not taking data, entities are not freed).
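The "repeat the scenario and watch for growth" approach can be automated: record the object count for each suspect type after every run and flag types that keep growing. A hedged sketch with fabricated numbers, independent of any particular mmstat output format:

```python
def suspicious_types(history, min_runs=3):
    """Given {type_name: [count after run 1, run 2, ...]}, return the names
    whose counts strictly increase on every run - a possible leak indicator."""
    leaks = []
    for name, counts in history.items():
        if len(counts) >= min_runs and all(b > a for a, b in zip(counts, counts[1:])):
            leaks.append(name)
    return sorted(leaks)

# Hypothetical counts collected after four repetitions of the same scenario.
runs = {
    "v_message<MyTopic>": [100, 180, 260, 340],   # keeps growing: suspect
    "v_indexSample<MyTopic>": [50, 60, 55, 58],   # fluctuates: likely fine
}
print(suspicious_types(runs))  # → ['v_message<MyTopic>']
```

Strict monotonic growth across several runs is only an indicator, not proof; deferred garbage collection, as noted above, can delay counts from dropping.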
Application type output
When mmstat is used with the -t or -T option it reports, for each type, the number of objects allocated and the related amount of consumed memory.
The types that are reported are partly internal administration types and partly user-associated types; the latter are of most interest to users wanting to understand the memory consumption induced by their application.
The following types are associated with user data:
v_message<user-type>: this is the data that is sent on the wire. This type extends the <user-type> with a header containing publisher information and inline publication QoS values that are required during the dissemination of the data. The message is an indivisible whole that is not only distributed as such but also stored as such in the DataReader caches and Transient/Persistent stores.
v_groupSample<user-type> : this is a holder for transient or persistent v_message<user-type>. This holder is used by the durability service to maintain messages and some associated information like arrival time and liveliness state of the publisher. The number of v_groupSamples basically reflects the number of messages for this <user-type> currently available in the durability service’s transient store.
v_groupInstance< v_groupSample<user-type>> : this is a durability service instance holding transient or persistent v_groupSamples of <user-type>. The number of v_groupInstances depends on the Topic keys and the number of alive key values published by the application writers. The number of v_groupInstances basically reflects the number of instances for this <user-type> available in the durability service’s transient store.
v_indexSample<user-type> : this is a DataReader holder for v_message<user-type>. This holder is used by DataReaders to maintain messages and some associated information like arrival time, liveliness state of the publisher and read state of the Subscriber. The number of v_indexSamples basically reflects the total number of messages for this <user-type> currently available in all the DataReaders' local data caches on this node.
v_indexKeyInstance< v_indexSample<user-type>, <user-key-list>> : this is a DataReader instance holding v_indexSamples of <user-type>. The number of v_indexKeyInstances depends on the Subscriber's keys and the values published by the application writers. Note that the Subscriber's keys may differ from the Topic keys and are therefore specified in the type name. The number of v_indexKeyInstances basically reflects the number of instances for this <user-type> currently available in all the DataReaders' local data caches on this node.
v_writerSample<user-type> : this is a DataWriter holder for v_message<user-type>. This holder is used by DataWriters to maintain messages that must remain available for DataReaders, either because they are TransientLocal data or because they were temporarily rejected by the system due to resource shortage. For non-TransientLocal data the expected number of v_writerSamples is therefore zero. The number of v_writerSamples basically reflects the total number of messages for this <user-type> currently available in all the DataWriters' local data caches on this node.
v_writerInstance< v_writerSample<user-type>> : this is a DataWriter instance holding v_writerSamples of <user-type>. The number of v_writerInstances depends on the number of alive key values published by the application writers. The number of v_writerInstances basically reflects the number of instances for this <user-type> currently available in all the DataWriters' local data caches on this node.
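To get a per-topic picture from a -t listing, the type names described above can be grouped by role. A sketch assuming a simple {type_name: total_size} input; the role names follow the descriptions above, but the exact rendering in real mmstat output is version-dependent:

```python
import re
from collections import defaultdict

# Role prefixes taken from the user-data type descriptions above.
ROLE_RE = re.compile(
    r"(?P<role>v_message|v_groupSample|v_indexSample|v_writerSample)<(?P<topic>[^>,]+)>")

def per_topic_usage(totals):
    """Sum total bytes per topic and role from a {type_name: total_size} map,
    ignoring internal administration types that do not match a known role."""
    usage = defaultdict(dict)
    for name, size in totals.items():
        match = ROLE_RE.fullmatch(name)
        if match:
            usage[match.group("topic")][match.group("role")] = size
    return dict(usage)

# Hypothetical sizes for a single topic.
print(per_topic_usage({
    "v_message<MyTopic>": 6240,       # payload, counted once in shared memory
    "v_indexSample<MyTopic>": 1040,   # DataReader administration
}))
# → {'MyTopic': {'v_message': 6240, 'v_indexSample': 1040}}
```

Grouping this way separates the once-only payload (v_message) from the per-store administration overhead, which matches the distinction drawn in the descriptions above.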