Reliability and History are complementary in the sense that History works independently of Reliability and serves a different purpose.
Reliability ensures that data is reliably delivered from a writer to a reader. The spec states that ‘in steady state’ RELIABILITY means that all samples in a writer’s (history) cache will eventually be delivered to a reader’s (history) cache.
Let’s forget about the writer-history for a moment and concentrate on the history-QoS on the reader: the ‘history’ feature of DDS allows the middleware to maintain, for each instance (unique key-value), a configurable ‘depth’ of historical samples in the (reader-)history. Furthermore, there’s a related setting that determines how the system should behave when the history is ‘full’. This behavior can either be KEEP_LAST (push out the oldest samples to make room for new samples) or KEEP_ALL (retain all samples and eventually slow down a writer if a reader can’t keep up).
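As a rough sketch of what this looks like in code, here is a reliable reader that keeps the 10 most recent samples per instance. This uses the ISO C++ PSM (DDS-PSM-Cxx) as one possible API; vendor APIs differ in the details, and ‘SensorData’ stands in for a hypothetical IDL-generated topic type.

```cpp
#include <dds/dds.hpp>   // ISO C++ PSM (DDS-PSM-Cxx) umbrella header

// 'SensorData' stands in for any IDL-generated, keyed topic type.
int main() {
    dds::domain::DomainParticipant participant(0);
    dds::topic::Topic<SensorData> topic(participant, "SensorTopic");
    dds::sub::Subscriber subscriber(participant);

    // Start from the defaults and override Reliability and History:
    // keep (at most) the 10 most recent samples per instance (key-value).
    dds::sub::qos::DataReaderQos rqos = subscriber.default_datareader_qos();
    rqos << dds::core::policy::Reliability::Reliable()
         << dds::core::policy::History::KeepLast(10);

    dds::sub::DataReader<SensorData> reader(subscriber, topic, rqos);
    // ... attach a listener or waitset and read/take samples ...
    return 0;
}
```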
If, for example, the default reader history is KEEP_LAST with a depth of 1, this means that whenever data arrives that has not been ‘consumed’ (either ‘read’ or ‘taken’), the new data will push out the old sample. So from an application (reader-)perspective, it looks like a sample is ‘lost’, which can be confused with samples not being delivered reliably, but as you now understand, these are independent policies. The sample wasn’t LOST, it was just pushed out of the history since you didn’t ‘consume’ it in time.
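Since only take() removes samples from the reader history (read() leaves them in place, where a newer arrival can still push them out), a minimal consumption loop, assuming a reader configured as in the sketch above, might look like this:

```cpp
#include <dds/dds.hpp>
#include <iostream>

// Assumes a dds::sub::DataReader<SensorData> configured as in the sketch above.
void consume(dds::sub::DataReader<SensorData>& reader) {
    // take() removes the samples from the reader history, so they can no
    // longer be pushed out by newer arrivals; read() would leave them in.
    dds::sub::LoanedSamples<SensorData> samples = reader.take();
    for (const auto& sample : samples) {
        if (sample.info().valid()) {
            std::cout << "consumed a sample" << std::endl;
            // ... process sample.data() ...
        }
    }
}
```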
Adding a history-depth at the reader creates a decoupling between writer and reader history. Note that a KEEP_ALL history policy at the reader should be used ‘with caution’, as it can have a system-wide impact in the case of a slow reader (and it’s essential to use the take() method to ‘clean’ the history): when the RESOURCE_LIMITS of the reader-cache are hit, flow-control all the way up to remote writers can kick in, and you lose the so-valued time-decoupling between endpoints in DDS.
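A hedged sketch of such a bounded KEEP_ALL reader, again in the ISO C++ PSM and with the same hypothetical ‘SensorData’ type; the RESOURCE_LIMITS values are arbitrary illustrations, not recommendations:

```cpp
#include <dds/dds.hpp>

// KEEP_ALL at the reader: bound it with RESOURCE_LIMITS and keep
// take()-ing, otherwise a slow reader can eventually throttle remote writers.
// 'SensorData' is again a hypothetical IDL-generated type.
int main() {
    dds::domain::DomainParticipant participant(0);
    dds::topic::Topic<SensorData> topic(participant, "SensorTopic");
    dds::sub::Subscriber subscriber(participant);

    dds::sub::qos::DataReaderQos rqos = subscriber.default_datareader_qos();
    rqos << dds::core::policy::Reliability::Reliable()
         << dds::core::policy::History::KeepAll()
         // max_samples, max_instances, max_samples_per_instance
         << dds::core::policy::ResourceLimits(1000, 10, 100);

    dds::sub::DataReader<SensorData> reader(subscriber, topic, rqos);
    // The application must take() regularly; once the limits are hit,
    // reliable flow-control can propagate back to the writers.
    return 0;
}
```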
A similar mechanism is available on the writer side, yet there it typically ‘kicks in’ when you write faster than the network can handle, in which case older samples are pushed out of the writer-history in favor of newer samples (as the default for the writer-history is also KEEP_LAST with depth=1). For a writer it’s a much more natural ‘pattern’ to specify KEEP_ALL behavior, as that keeps the writer’s speed ‘in sync’ with that of the network. With a KEEP_LAST writer history you’re basically ‘downsampling’ the data-rate, and in combination with RELIABLE communication you are reliably sending ‘the newest/last samples’, a pattern also known as ‘last-value reliability’.
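The writer-side equivalent, sketched below under the same assumptions, contrasts the two options: KEEP_ALL keeps the writer paced by the network/readers, while KEEP_LAST with depth=1 combined with RELIABLE gives the ‘last-value reliability’ pattern.

```cpp
#include <dds/dds.hpp>

// Writer-side history, same ISO C++ PSM style; 'SensorData' is hypothetical.
int main() {
    dds::domain::DomainParticipant participant(0);
    dds::topic::Topic<SensorData> topic(participant, "SensorTopic");
    dds::pub::Publisher publisher(participant);

    // Option 1: KEEP_ALL -- write() may block (up to max_blocking_time)
    // so the writer stays 'in sync' with what the network/readers absorb.
    dds::pub::qos::DataWriterQos keep_all_qos = publisher.default_datawriter_qos();
    keep_all_qos << dds::core::policy::Reliability::Reliable()
                 << dds::core::policy::History::KeepAll();

    // Option 2: KEEP_LAST(1) + RELIABLE -- older samples are pushed out of
    // the writer history, so only the newest value is (reliably) delivered:
    // 'last-value reliability'.
    dds::pub::qos::DataWriterQos last_value_qos = publisher.default_datawriter_qos();
    last_value_qos << dds::core::policy::Reliability::Reliable()
                   << dds::core::policy::History::KeepLast(1);

    dds::pub::DataWriter<SensorData> writer(publisher, topic, last_value_qos);
    // writer.write(SensorData{...});
    return 0;
}
```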
In summary:
- when a writer uses a KEEP_LAST history and is writing faster than the system/network can handle, data will be ‘downsampled’
- when a reader uses a KEEP_LAST history and is reading slower than the data arrives, data will be ‘downsampled’
- note that the above does not constitute ‘message-loss’, it is just how the system is configured to behave
- old data will never ‘overwrite’ new(er) data
- so this ‘downsampling’ only affects old(er) data
- the specification states that in steady state (no new data being produced), it’s the newest data that will be delivered (reliably or best-effort)
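For completeness, here is a sketch of the one combination not shown above: RELIABLE plus KEEP_ALL on both sides, where nothing is ‘downsampled’ on either side, at the cost of the time-decoupling discussed earlier (same ISO C++ PSM style and hypothetical ‘SensorData’ type).

```cpp
#include <dds/dds.hpp>

// End-to-end sketch: RELIABLE + KEEP_ALL on both sides means nothing is
// 'downsampled' anywhere, provided the reader keeps take()-ing.
// 'SensorData' remains a hypothetical IDL-generated type.
int main() {
    dds::domain::DomainParticipant participant(0);
    dds::topic::Topic<SensorData> topic(participant, "SensorTopic");

    dds::pub::Publisher publisher(participant);
    dds::pub::qos::DataWriterQos wqos = publisher.default_datawriter_qos();
    wqos << dds::core::policy::Reliability::Reliable()
         << dds::core::policy::History::KeepAll();
    dds::pub::DataWriter<SensorData> writer(publisher, topic, wqos);

    dds::sub::Subscriber subscriber(participant);
    dds::sub::qos::DataReaderQos rqos = subscriber.default_datareader_qos();
    rqos << dds::core::policy::Reliability::Reliable()
         << dds::core::policy::History::KeepAll();
    dds::sub::DataReader<SensorData> reader(subscriber, topic, rqos);

    // Every written sample ends up in the reader history;
    // the price is that a slow reader can eventually throttle the writer.
    return 0;
}
```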