This article contains some user-submitted questions about the OpenSplice durability service that may prove useful to others.
See the OpenSplice documentation for more information.
Using persistent durability, the Durability Service does not start correctly and reports the following warning:
Type : WARNING
Context : DurabilityService
File : ../../code/d_groupLocalListener.c Line : 727 Code : 0 Description : Persistency not enabled!
Node : ACER
Process : 3240
Thread : 2592
Timestamp : 1161512131.565000000 (Sun Oct 22 11:15:31 2006)
To enable persistency, you need to specify a ‘StoreDirectory’ where the durability service can store the persistent data.
The configuration looks like this:
<Persistent>
  <StoreDirectory>C:\mystore</StoreDirectory>
  <StoreMode>XML</StoreMode>
</Persistent>
This has to be inside the ‘DurabilityService’ configuration section of the configuration file.
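For context, the fragment below sketches where this sits in a full ospl.xml; the service name attribute and the surrounding elements are illustrative and may differ in your own configuration file.

<OpenSplice>
  <DurabilityService name="durability">
    <!-- other durability settings ... -->
    <Persistent>
      <StoreDirectory>C:\mystore</StoreDirectory>
      <StoreMode>XML</StoreMode>
    </Persistent>
  </DurabilityService>
</OpenSplice>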
How can I change the persistence type used by the Durability Service?
OpenSplice DDS will make sure that data is delivered to all interested subscribers available at the time the data is published. It does this using communication paths implicitly created by the middleware based on the interest of applications that participate in the domain. However, subscribers that are created after the data has been published (called late-joiners) may also be interested in the data that was published before they were created (called historical data). To facilitate this use case, DDS provides a Durability Service to manage historical data. You use a Quality of Service (the Durability QoS Policy) to describe how the published data needs to be maintained by the DDS middleware.
The Durability QoS Policy has four options:
- VOLATILE – Data does not need to be maintained for late-joiners (default).
- TRANSIENT_LOCAL – Data needs to be maintained for as long as the DataWriter is active.
- TRANSIENT – Data needs to be maintained for as long as the middleware is running on at least one of the nodes.
- PERSISTENT – Data needs to outlive system downtime. This implies that it must be kept somewhere on permanent storage in order to be able to make it available again for subscribers after the middleware is restarted.
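As an illustration, here is a minimal C sketch that sets PERSISTENT durability on a DataWriter via the classic DCPS C API; it assumes the publisher and topic have already been created with a matching topic QoS, and error handling is omitted.

#include "dds_dcps.h"

/* Create a DataWriter whose data is maintained by the durability
 * service across restarts. 'publisher' and 'topic' are assumed to
 * have been created earlier. */
DDS_DataWriter create_persistent_writer(DDS_Publisher publisher, DDS_Topic topic)
{
    DDS_DataWriter writer;
    DDS_DataWriterQos *wqos = DDS_DataWriterQos__alloc();

    DDS_Publisher_get_default_datawriter_qos(publisher, wqos);
    /* Pick one of the four kinds listed above. */
    wqos->durability.kind = DDS_PERSISTENT_DURABILITY_QOS;

    writer = DDS_Publisher_create_datawriter(
        publisher, topic, wqos, NULL, DDS_STATUS_MASK_NONE);

    DDS_free(wqos);
    return writer;
}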
In order to ensure that the data is available to OpenSplice DDS after the service is stopped, a persistent store is used. Prior to OpenSplice DDS 6.3.1 there were two forms of persistent store: XML, which was available on all platforms, and, on Linux only, a Memory Mapped File. From OpenSplice v6.3 onwards the Memory Mapped File is deprecated and support for two new Key-Value persistent stores was added: SQLite and LevelDB. SQLite is supported on Windows, Linux and Solaris; LevelDB is only supported on Linux. The default persistent storage is XML files.
The type of persistent storage being used is set in the OpenSplice DDS configuration file, ospl.xml. To change the storage type you need to set a number of configuration parameters; you can find more information about each of these in the OpenSplice DDS Deployment Guide. The easiest way to modify the configuration file and see the available options is to use the OpenSplice Configurator.
To configure the persistent store, the OpenSplice/DurabilityService/Persistent option must be enabled. In the Configurator, go to the DurabilityService [name=durability] section. Right-click under the Durability Service option and choose Add, then Persistent. You then need to add the StoreMode element, which specifies the type of persistent storage used by OpenSplice DDS: right-click on Persistent and choose Add, then StoreMode. When this has been added you can choose the type of storage you want to use.
There are three options:
- XML – The service stores persistent data in XML files.
- KV – The service stores persistent data in a key-value store.
- MMF – Deprecated. The service stores persistent data in a Memory Mapped File that exactly represents the memory that is being used by the persistent store.
If you want to use SQLite or LevelDB, choose KV. Once you have done this, right-click on Persistent and add KeyValueStore. The KeyValueStore element appears under the Persistent option; if you click on it you will see it has a Name and a Value, and you can change the value from SQLite to LevelDB by clicking on it.
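For illustration, the resulting ospl.xml fragment might look like the sketch below; the exact KeyValueStore element and attribute names are an assumption here, so check the Deployment Guide for your version.

<DurabilityService name="durability">
  <Persistent>
    <StoreDirectory>C:\mystore</StoreDirectory>
    <StoreMode>KV</StoreMode>
    <KeyValueStore type="sqlite"/>  <!-- or type="leveldb" -->
  </Persistent>
</DurabilityService>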
Note: We recommend that you use the KV store as it is the most robust.
Why is data being deleted from TRANSIENT or PERSISTENT durability stores on DataWriter deletion?
By default, if a DataWriter is deleted by an application without first unregistering its Topic instances, the samples written for those instances will be marked for deletion. The QoS property autodispose_unregistered_instances controls this behaviour.
The autodispose_unregistered_instances flag controls what happens when an instance gets unregistered by the DDS_DataWriter:
- If the DDS_DataWriter unregisters the instance explicitly using either SPACE_FooDataWriter_unregister_instance or SPACE_FooDataWriter_unregister_instance_w_timestamp, then the autodispose_unregistered_instances flag is currently ignored and the instance is never disposed automatically.
- If the DDS_DataWriter unregisters its instances implicitly because it is deleted, or if a DDS_DataReader detects a loss of liveliness of a connected DDS_DataWriter, or if the autounregister_instance_delay expires, then the autodispose_unregistered_instances flag determines whether the concerned instances are automatically disposed (TRUE) or not (FALSE).
For DDS_DataWriters associated with TRANSIENT and PERSISTENT topics, setting the autodispose_unregistered_instances attribute to TRUE means that all instances that are not explicitly unregistered by the application will by default be removed from the Transient and Persistent stores when the DataWriter is deleted, when a loss of liveliness is detected, or when the autounregister_instance_delay expires.
Setting this property to FALSE means your application must manage the instances associated with a Topic manually, i.e. they must be disposed explicitly when the application is finished writing them; failing to do so can lead to memory leaks.
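As a hedged sketch using the classic DCPS C API, the fragment below disables autodispose and then disposes an instance explicitly; SPACE_Foo stands in for your own generated type, the publisher and topic are assumed to exist, and error handling is omitted.

#include "dds_dcps.h"

void write_then_dispose(DDS_Publisher publisher, DDS_Topic topic, SPACE_Foo *sample)
{
    DDS_DataWriterQos *wqos = DDS_DataWriterQos__alloc();
    DDS_Publisher_get_default_datawriter_qos(publisher, wqos);

    /* FALSE: instances stay in the Transient/Persistent stores when this
     * writer is deleted; the application now owns their lifecycle. */
    wqos->writer_data_lifecycle.autodispose_unregistered_instances = FALSE;

    SPACE_FooDataWriter writer = (SPACE_FooDataWriter)
        DDS_Publisher_create_datawriter(publisher, topic, wqos, NULL, DDS_STATUS_MASK_NONE);
    DDS_free(wqos);

    SPACE_FooDataWriter_write(writer, sample, DDS_HANDLE_NIL);

    /* When finished with the instance, dispose it explicitly so it is
     * removed from the durability stores and does not leak. */
    SPACE_FooDataWriter_dispose(writer, sample, DDS_HANDLE_NIL);
}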
Can persistent data be stored on a remote machine?
It is not possible to configure durability to store persistent data on a remote machine. To store data on a remote machine, you need to start OpenSplice on that machine as well and configure the durability service there to store the data.
How does History in the Durability Service interact with the Reader and Writer History QoS?
Durability is about preserving non-volatile data, potentially (for TRANSIENT/PERSISTENT) outside the scope and lifecycle of the producers and consumers, i.e. before and after DataWriter/DataReader creation. This means it includes configuration that cannot be tied to QoS's that apply to those (potentially non-existing) writers and readers; it is therefore configured at the topic level, under the DURABILITY_SERVICE QoS attributes. And as topic definitions are system-wide, those durability settings are also system-wide.
Then there is the reader and writer history, which define purely local behaviour. Writer history is used to store samples while the writer is alive, so that readers that read more slowly than the data arrives will still get the data. Reader history specifies how many historical samples to maintain in that reader, something typically used to prevent samples being ‘lost’ (pushed out of that FIFO queue) in “bursty” environments. That depth is fully controllable when creating multiple readers.
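To make the distinction concrete, here is a minimal C sketch under the same assumptions as the earlier examples (participant and subscriber already created, a type registered under the hypothetical name "SPACE::Foo", depths chosen arbitrarily): the DURABILITY_SERVICE attributes live on the topic QoS and are system-wide, while the history depth on the reader is purely local.

#include "dds_dcps.h"

void configure_histories(DDS_DomainParticipant participant, DDS_Subscriber subscriber)
{
    /* Topic level, system-wide: how the durability service keeps
     * non-volatile data for late-joiners. */
    DDS_TopicQos *tqos = DDS_TopicQos__alloc();
    DDS_DomainParticipant_get_default_topic_qos(participant, tqos);
    tqos->durability.kind = DDS_TRANSIENT_DURABILITY_QOS;
    tqos->durability_service.history_kind = DDS_KEEP_LAST_HISTORY_QOS;
    tqos->durability_service.history_depth = 10;  /* example depth */
    DDS_Topic topic = DDS_DomainParticipant_create_topic(
        participant, "Foo", "SPACE::Foo", tqos, NULL, DDS_STATUS_MASK_NONE);
    DDS_free(tqos);

    /* Reader level, purely local: how many samples this one reader
     * buffers; each reader can choose its own depth. */
    DDS_DataReaderQos *rqos = DDS_DataReaderQos__alloc();
    DDS_Subscriber_get_default_datareader_qos(subscriber, rqos);
    rqos->history.kind = DDS_KEEP_LAST_HISTORY_QOS;
    rqos->history.depth = 5;
    DDS_DataReader reader = DDS_Subscriber_create_datareader(
        subscriber, (DDS_TopicDescription)topic, rqos, NULL, DDS_STATUS_MASK_NONE);
    DDS_free(rqos);
    (void)reader;
}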
How does OpenSplice detect a disconnection has occurred in order to merge data from several nodes?
OpenSplice DDS uses heartbeats to detect whether or not a node is still connected. The Durability Service is responsible for detecting disconnections and merging the data on reconnection, so it needs to detect that a disconnection has occurred. By default it takes 10 seconds for the Durability Service to detect a disconnection; a disconnection shorter than this will not be detected.
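As a sketch, the detection window is typically tuned via the durability heartbeat expiry time in ospl.xml; the element names and the update_factor attribute below are recalled from the Deployment Guide and should be verified against your version.

<DurabilityService name="durability">
  <Network>
    <Heartbeat>
      <!-- Seconds without a heartbeat before a remote durability
           service is considered disconnected (default 10.0). -->
      <ExpiryTime update_factor="0.2">10.0</ExpiryTime>
    </Heartbeat>
  </Network>
</DurabilityService>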