Introduction
These notes document the current state of release of the OpenSplice DDSI2 Service. The DDSI2 service is an implementation of the OMG DDSI 2.1 specification for OpenSplice.
There is a solid body of evidence that there is real interoperability between OpenSplice and other vendors’ implementations, and in particular with RTI DDS. Nevertheless, there are still some areas that have seen minimal interoperability testing at best. We kindly invite anyone running into interoperability issues to contact us via our support channels.
Those interested in testing interoperability by running the same applications used at the “OMG Interoperability Demonstrations” can download the full package here.
Limitations
Please note that this section may not be exhaustive.
- For an overview of QoS settings, see QoS compliancy below.
- QoS changes are not supported.
- Limited influence on congestion-control behaviour.
- If DDSI2 is operated in its default mode, in which each participant has its own UDP/IP port number, the maximum number of participants on a node serviced by an instance of the DDSI2 service is limited to approximately 60; exceeding this limit will cause the DDSI2 service to abort. This mode appears to be required only for interoperability with TwinOaks CoreDX DDS. There is never a limit on the number of remote participants.
- No support for inlining QoS settings yet. DataReaders requesting inlined QoS will be ignored.
- Running DDSI2 in parallel to the native networking may impact the performance of the native networking even when DDSI2 is not actually involved in the transmission of data, as DDSI2 still performs some processing on the data.
- No more than 32 key fields are supported, and the concatenated key fields may not require more than 32 bytes of storage, where strings count as 4 bytes.
- When multicast is enabled and a discovered participant advertises a multicast address, it is assumed to be reachable via that multicast address. If it is not, DDSI2 must currently be operated in multicast-disabled mode with all possible peer nodes listed explicitly, as this restricts the set of addresses advertised by the participant to its unicast address; a configuration sketch follows this list.
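As an illustration of the multicast-disabled mode with an explicit peer list, the fragment below is a minimal sketch rather than a verified configuration: the AllowMulticasting setting is the one discussed in the SSM notes later in these notes, but its placement under General, the Discovery/Peers/Peer element with its address attribute, and the peer addresses themselves are assumptions to be checked against the deployment guide.

<DDSI2Service name="ddsi2">
  <General>
    <!-- disable all multicast so that only unicast addresses are advertised
         (element placement assumed) -->
    <AllowMulticasting>false</AllowMulticasting>
  </General>
  <Discovery>
    <!-- every possible peer node must be listed explicitly
         (element and attribute names assumed; addresses are placeholders) -->
    <Peers>
      <Peer address="10.1.0.10"/>
      <Peer address="10.1.0.11"/>
    </Peers>
  </Discovery>
</DDSI2Service>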
QoS compliancy
The following table lists the level of support for each QoS. In some cases, compliancy is better when the DDSI2 service is used to connect two OpenSplice nodes than when it is used to connect an OpenSplice node with another vendor’s DDS implementation. The OpenSplice kernel performs many aspects of DDS in ways independent of the underlying wire protocol, but interoperating with another vendor’s DDS implementation requires the DDSI2 service to fully implement the mapping prescribed by the DDSI 2.1 specification. This work has not been completed yet.
QoS | OpenSplice | Other vendor |
---|---|---|
USER_DATA | Compliant | Compliant |
TOPIC_DATA | Compliant | Compliant |
GROUP_DATA | Compliant | Compliant |
DURABILITY | Compliant, but see Issues rooted in the standard below | |
DURABILITY_SERVICE | Compliant | Compliant |
PRESENTATION | Compliant | Compliant, but the access scope GROUP extensions are not yet defined in the standard. |
DEADLINE | Compliant | Compliant |
LATENCY_BUDGET | Compliant | Compliant |
OWNERSHIP | Compliant | Shared ownership: fully supported; exclusive ownership: partially supported, in that a higher-strength writer can take ownership but failover to a lower-strength one may not occur. |
OWNERSHIP_STRENGTH | Compliant | Compliant |
LIVELINESS | Compliant | All entities are treated as if the liveliness kind is AUTOMATIC. For OpenSplice participants the lease duration is fixed at 11s; for readers and writers it is infinite. Lease durations of remote participants, readers and writers are honoured correctly. |
TIME_BASED_FILTER | Compliant, except that there is no filtering to limit the rate at which samples are delivered to the reader. | |
PARTITION | Compliant | Compliant |
RELIABILITY | Compliant | Compliant |
TRANSPORT_PRIORITY | Compliant | Compliant |
LIFESPAN | Compliant | Compliant |
DESTINATION_ORDER | Compliant | Compliant |
HISTORY | Compliant, except that the writer history for a DataWriter of transient-local durability is always maintained as if the history setting is KEEP_LAST with depth 1 | |
RESOURCE_LIMITS | Compliant | Compliant |
ENTITY_FACTORY | Compliant | Compliant |
WRITER_DATA_LIFECYCLE | Compliant | Compliant |
READER_DATA_LIFECYCLE | Compliant | Compliant |
Issues rooted in the standard
- The specification only deals with volatile and transient-local data, and leaves the behaviour for transient and persistent data undefined. Many OpenSplice applications follow the recommendation to use transient data rather than transient-local data, and indeed, OpenSplice implements transient-local as transient. This evidently creates a complex situation for a DDSI implementation. The following two tables aim to provide an overview of the expected behaviour when both sides are using OpenSplice, and when only one side is.
OpenSplice writer:
Writer QoS | Reader QoS | Behaviour |
---|---|---|
all | volatile | as expected |
transient-local | transient-local | DDSI2 will internally manage a writer history cache containing the historical data for a history setting of KEEP_LAST with depth 1 (note that this is the default for writers). The data will be advertised in accordance with the specification and new readers receive the old data upon request. An OpenSplice reader will also receive the data maintained by the OpenSplice durability service. |
transient | transient-local | A remote reader on OpenSplice will receive transient data from the OpenSplice durability service, but a remote reader on another vendor’s implementation will not. |
transient | transient | same as the previous case |
persistent | all | deviations from the expected behaviour are the same as for transient |
Non-OpenSplice writer, OpenSplice reader:
Writer QoS | Reader QoS | Behaviour |
---|---|---|
all | volatile | as expected |
transient-local | transient-local | The reader will request historical data from the writer, and will in addition receive whatever data is stored by the OpenSplice durability service. |
transient | transient-local | The reader may or may not receive transient data from the remote system, depending on the remote implementation. It will receive data from the OpenSplice durability service. The durability service will commence storing data when the first reader or writer for that topic/partition combination is created by any OpenSplice participant (i.e., it is immaterial on which node). |
transient | transient | same as the previous case |
persistent | all | deviations from the expected behaviour are the same as for transient |
Once the specification is extended to cover transient data, the situation will become much more straightforward. In the meantime it may be possible to make more configurations work as expected; the specification process is actively exploring the alternatives.
- No verification of topic consistency between OpenSplice and other vendors’ implementations. The specification leaves this undefined. For OpenSplice-to-OpenSplice communication, the kernel will detect inconsistencies.
- DDSI2 uses a shared set of discovered participants, readers and writers on a single node. Consequently, a new OpenSplice participant learns of the readers and writers of remote participants and starts communicating with them, even before the remote participant has had a chance of discovering this new participant. This behaviour is believed to be allowed by the specification, but one other implementation has at one point been observed to malfunction because of this.
- The specification of the format of a KeyHash is ambiguous, in that one can argue whether or not padding should be used within a KeyHash to align the fields to their natural boundaries. The DDSI2 service currently does not insert padding, as this has the benefit of allowing more complex keys to be packed into the fixed-length key hash. It may be that this is not the intended interpretation.
Notes on DDSI2E support for SSM
- Configurable mapping of partition/topic combinations to SSM, using the network partition mechanism. Until now, the writers would simply publish to the addresses advertised by the readers, and the network partitions could be used to advertise an alternative (ASM) multicast address. In this prototype, each writer will look at the address set of its network partition, and if that set happens to include SSM addresses, the writer will include an arbitrarily selected SSM address in its discovery data. The presence of an SSM address in the writer’s discovery data is taken by the readers as an indication that the writer is willing to serve data using SSM at that address. Analogously, each reader will also look at the address set of its network partition, and again, if it includes an SSM address, the reader will advertise that it favours SSM over ASM. (Note that it favours SSM, not that it requires it.) If an SSM-favouring reader discovers an SSM-capable writer, DDSI2E on the reader side will join the writer’s advertised SSM group. An SSM-capable writer will preferentially use unicast, but if it decides to use multicasting, it will include its SSM address if there is an SSM-favouring reader. If there is a mixture of SSM-favouring and non-SSM-favouring readers, and the SSM-favouring readers have also joined one or more of the (ASM) multicast groups, they will currently receive the data over both channels. The mappings need not be the same on all machines. Note that SSM addresses are in the 232.x.y.z range for IPv4 and the ff3x::4000:1 to ff3x::ffff:ffff range for IPv6, according to RFC 4607 (but really ff3x::8000:0 onwards, as the lower half of the range is reserved for allocation by IANA). This is also the criterion used by DDSI2E to determine whether or not to use SSM.
- Possibility of running without ASM but with SSM. There are really three categories of data in DDSI:
- application data
- participant discovery (SPDP) data
- endpoint discovery (SEDP) data
Application data is primarily covered by point 1 above. The SPDP data has to use ASM or unicast: it is used to discover who is out there, so SSM is an obvious impossibility. This has not changed. SEDP (as well as any application data not mapped to a network partition) relies on the default addresses advertised in the SPDP messages. To allow the use of SSM for SEDP, this version adds the option of independently setting the SPDP address and those advertised default addresses, with the latter defaulting to the SPDP address. If this default address is an SSM address, any reader or writer relying on the default address will use SSM, as if it were mapped to a networking partition specifying the use of SSM, as in point 1. Note that there is no need for the default address to be different from any of the other addresses. Now that DDSI2E supports two types of multicast, a simple boolean switch “AllowMulticasting” is no longer sufficient. DDSI2E now interprets this as a comma-separated list of the keywords “spdp”, “asm” and “ssm”. The first enables the use of ASM for SPDP only; the others enable full use of ASM and SSM, respectively. The old “true” enables all types of multicasting, and “false” disables it altogether. A sketch of a configuration that combines these settings follows the example below.
- Example – The following basic configuration will rely on ASM for SPDP and on SSM using group address 232.3.1.3 for all SEDP and user data not in DCPS partition A; and on SSM using group address 232.3.1.4 for all user data in DCPS partition A.
<DDSI2EService name="ddsi2e">
  <Discovery>
    <DefaultMulticastAddress>232.3.1.3</DefaultMulticastAddress>
  </Discovery>
  <!-- this ensures that readers for data in DCPS partition A will favour SSM, and
       that writers for data in partition A will provide data via SSM, using
       address 232.3.1.4 -->
  <Partitioning>
    <NetworkPartitions>
      <NetworkPartition name="ssmA" address="232.3.1.4"/>
    </NetworkPartitions>
    <PartitionMappings>
      <PartitionMapping DCPSPartitionTopic="A.*" NetworkPartition="ssmA"/>
    </PartitionMappings>
  </Partitioning>
</DDSI2EService>
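Building on the “spdp”, “asm” and “ssm” keywords described above, the following is a minimal sketch, not a verified configuration, of running without ASM for data while keeping SSM: ASM is restricted to participant discovery, and an SSM group is advertised as the default address so that SEDP and any user data not mapped to a network partition also use SSM. The placement of AllowMulticasting under General and the SPDPMulticastAddress element name are assumptions to be checked against the deployment guide; only DefaultMulticastAddress and AllowMulticasting appear in these notes.

<DDSI2EService name="ddsi2e">
  <General>
    <!-- ASM for participant discovery only, plus SSM; no ASM for SEDP or data
         (element placement assumed) -->
    <AllowMulticasting>spdp,ssm</AllowMulticasting>
  </General>
  <Discovery>
    <!-- SPDP keeps using an ASM group (element name assumed) -->
    <SPDPMulticastAddress>239.255.0.1</SPDPMulticastAddress>
    <!-- SEDP and unmapped user data are advertised via this SSM group -->
    <DefaultMulticastAddress>232.3.1.3</DefaultMulticastAddress>
  </Discovery>
</DDSI2EService>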