
Creating and deleting a writer quickly causes sample loss

Creating and deleting a DataWriter in quick succession can cause sample loss. This article explains why.

What happens when an application creates an entity

When an application creates an entity, e.g. a DataWriter, OpenSplice publishes samples of built-in topics describing it. Other applications can subscribe to these built-in topics and use them to reconstruct the topology of the system. The Tester tool does exactly this: it uses the built-in topics to present an overview of all the DDS nodes, topics and entities in the network.
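
As an illustration, the fragment below reads the DCPSPublication built-in topic to list the DataWriters that have been discovered. It is a minimal sketch using the standard DCPS C++ API; the participant variable and the exact OpenSplice header name are assumptions, not part of this article's original text.

#include <iostream>
#include "ccpp_dds_dcps.h"   // classic OpenSplice C++ (DCPS) API header (name assumed)

// Sketch: list discovered DataWriters via the DCPSPublication built-in topic.
// 'participant' is assumed to be an already-created domain participant.
void list_discovered_writers(DDS::DomainParticipant_ptr participant)
{
    DDS::Subscriber_var builtin_sub = participant->get_builtin_subscriber();
    DDS::DataReader_var raw = builtin_sub->lookup_datareader("DCPSPublication");
    DDS::PublicationBuiltinTopicDataDataReader_var reader =
        DDS::PublicationBuiltinTopicDataDataReader::_narrow(raw.in());

    DDS::PublicationBuiltinTopicDataSeq data;
    DDS::SampleInfoSeq info;
    reader->take(data, info, DDS::LENGTH_UNLIMITED,
                 DDS::ANY_SAMPLE_STATE, DDS::ANY_VIEW_STATE,
                 DDS::ANY_INSTANCE_STATE);

    for (DDS::ULong i = 0; i < data.length(); ++i) {
        if (info[i].valid_data) {
            // Each valid sample describes one discovered DataWriter.
            std::cout << "Writer on topic " << data[i].topic_name.in()
                      << " (type " << data[i].type_name.in() << ")" << std::endl;
        }
    }
    reader->return_loan(data, info);
}

Equivalent built-in topics exist for participants (DCPSParticipant), topics (DCPSTopic) and readers (DCPSSubscription).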
 
These built-in topics are also subscribed to by the DDS services, e.g. DDSI, which allows them to be notified of changes to the set of entities in the system. The DDSI service uses this information to map all the DDS operations in the node onto the DDSI protocol.
 
The DDSI service has two independent inputs from the local OpenSplice kernel. These are:
 
  • the built-in topics it subscribes to, which allow it to monitor the creation and deletion of entities in the local node.
  • the queue of samples written by applications, which are forwarded onto the network.
 
This means the DDSI service may:
  • see a sample from a new writer before it gets notification of the creation of the writer.
  • get notification of writer deletion before it sees the final sample(s) written by that writer.
 
The DDSI service takes account of both cases. If it receives a sample from a writer it does not yet know about, it reads the corresponding built-in topics and instantiates the writer. To deal with the second case, a special message is sent through the network queue when writer deletion is detected; the DDSI service only deletes the writer from its internal administration when it receives this special message.

 

What happens when the writer is created and deleted in a short period of time

When a writer is created, a sample is written and the writer is deleted all within a very short period of time, the sample can be lost. The writer may already have been deleted before the first sample it wrote is taken from the network queue, so the built-in topic subscriber of the DDSI service is not triggered into creating the writer before its deletion. By the time the DDSI service sees the sample and tries to locate the writer, there is no longer a description of the writer present, so the DDSI service cannot process the sample and drops it.
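
For illustration only, the problematic pattern looks roughly like this. It is a sketch in the classic DCPS C++ API; the publisher and topic variables and the IDL-generated SensorData type are hypothetical placeholders.

// Anti-pattern sketch (classic DCPS C++ API): create a writer, publish a
// single sample and delete the writer again immediately. 'publisher',
// 'topic' and the IDL-generated SensorData type are hypothetical.
DDS::DataWriter_var dw = publisher->create_datawriter(
    topic.in(), DDS::DATAWRITER_QOS_DEFAULT, NULL, DDS::STATUS_MASK_NONE);
SensorDataDataWriter_var writer = SensorDataDataWriter::_narrow(dw.in());

SensorData sample;
sample.id = 42;
writer->write(sample, DDS::HANDLE_NIL);

// The writer is deleted straight away: DDSI may not yet have built its
// internal representation of the writer, so the sample above can be dropped.
publisher->delete_datawriter(dw.in());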

As well as incurring the risk of sample loss, this pattern also has a run-time overhead. Creating and deleting the writer requires memory to be allocated and freed, and it triggers network-wide discovery. The one application sample is therefore accompanied by four discovery samples (two for the publisher and two for the subscriber), plus the corresponding acknowledgements and reliable-protocol handshake messages between the writer and its matching readers.

What you can do instead

Typically an application creates its readers and writers when it starts up, and these remain in place while the application is running. DDS is a very good model for interacting microservices: each microservice begins by creating the readers and writers it needs, and then enters a loop in which it only reads and writes. Even if the application is not built as a collection of microservices, the DDS entities used by an application are typically stable and long-lived, which avoids the sample loss described above.
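
A minimal sketch of that pattern, using the same hypothetical SensorData type and pre-created publisher and topic as above; 'application_running' and read_next_measurement() are placeholders.

// Recommended pattern sketch: create the writer once, keep it for the
// lifetime of the application and only write inside the main loop.
DDS::DataWriter_var dw = publisher->create_datawriter(
    topic.in(), DDS::DATAWRITER_QOS_DEFAULT, NULL, DDS::STATUS_MASK_NONE);
SensorDataDataWriter_var writer = SensorDataDataWriter::_narrow(dw.in());

while (application_running) {
    SensorData sample = read_next_measurement();
    writer->write(sample, DDS::HANDLE_NIL);
}

// The writer is deleted only once, at shutdown.
publisher->delete_datawriter(dw.in());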

In short: if you keep the writer alive in the process instead of creating and deleting it for each sample, the sample loss problem disappears and the system is more efficient.
