
Fixed Bugs and Changes in OpenSplice v6.9.x

This page lists all the fixed bugs and changes in the OpenSplice 6.9.x releases.

Fixed bugs, changes to supported platforms, and new features are made available through regular OpenSplice releases.

There are two types of release: major releases and minor releases. Upgrading OpenSplice contains more information about the differences between these releases and the impact of upgrading. We advise customers to move to the most recent release in order to take advantage of these changes. This page details all the fixed bugs and changes between the different OpenSplice releases; a separate page details the new features in each release.

There are two types of changes: bug fixes and changes that do not affect the API, and bug fixes and changes that may affect the API. These are documented in separate tables.

Fixed Bugs and Changes in OpenSplice v6.9.x

OpenSplice v6.9.2p14

Report ID - Description
OSPL-13252 When a node requests groups from another node, the answer is sent to all nodes instead of the requestor only.
As part of aligning data sets between nodes, information about the available groups (i.e., partition/topic combinations) must be exchanged between the nodes. This is done by having one node send a request for groups to another node, after which the other node sends back the groups that it has. Instead of addressing the groups to the requesting node, the other node sent them back to everybody. This may cause unnecessary processing by nodes that have not requested the groups.
Solution: When a node sends a request to another node, the answer is now directed to the requesting node only.
OSPL-13197 / 00019796 The networking synchronization option sometimes fails to synchronize nodes after an asymmetrical reconnect.
A synchronization option was added to the networking service configuration to provide better recovery after an asymmetrical reconnect. However, it could occur that the synchronization failed because an old ack message could incorrectly purge the resend queue.
Solution: When the synchronization option is enabled, the networking service no longer purges the resend queue for a particular node until it has received an acknowledgement of the synchronization message.
OSPL-13101 Premature alignment that can potentially lead to incomplete alignment.
When node states have temporarily diverged and the nodes get reconnected again, their states must be merged according to the configured merge policy. Typically, this leads to requests for data from one node (say A) to the other (say B), after which B sends the data. Before these requests for data are sent, node A first retrieves which partition/topic combinations (so-called groups) exist on B, so that it can ask for data for these groups. To do that, node A first sends a request for groups to B, and B replies by sending all its groups in newGroup messages, one by one. Each message contains an indicator of how many groups there are, so A knows how many groups to expect before requests for data can be sent out for each group. When node B creates a new group, this also leads to the publication of a newGroup message, which is received by all durability services; this message has an indicator that the number of expected groups is 0. If a new group is created while groups are being exchanged, the number of expected groups can thus be reset to 0, causing the durability service on A to prematurely conclude that all groups have been received, so node A starts requesting data from B without having acquired all groups. Because A sends requests for data only for the groups it currently knows, node A will acquire data for only a subset of the groups that B knows.
Solution: When a new group is created, the number of expected groups is no longer reset to 0.
OSPL-12768 / 00019538 Alignment may stall when a new local group is created while a merge request to acquire the data for the same group is ongoing.
When a durability service learns about a partition/topic combination, it may have to acquire data for this group by sending a request for samples to its master. When a merge conflict with the fellow is being handled at the same time, this may also lead to a request for samples being sent to the same fellow. Both paths are decoupled for performance reasons, so there is a window of opportunity that may lead to two identical requests for the same data to the same fellow. Since the requests are identical, only one of them is answered. The remaining one never gets answered, which may potentially stall conflict resolution.
Solution: The requests are now distinguished so that they are never identical.

OpenSplice v6.9.2p13

Report ID - Description
OSPL-13074 / 00019983 Durability may crash when handling an asymmetrical disconnect.
When durability detects an asymmetrical disconnect, it clears the namespace administration associated with the fellow that is considered disconnected. However, there may still be a merge action in progress that is associated with that fellow and tries to access the deleted namespace object.
Solution: When using the namespace object, its reference count is incremented to ensure it can be accessed safely.
OSPL-13112 / 00019983 The RT Networking service may crash when a short disconnect occurs.
When networking receives messages, it first puts them in a queue before processing them further. For a reliable channel, the messages read from this queue are put in the out-of-order administration related to the sending node. When networking considers a sending node as not responding, it clears the administration related to that node. When reconnect is enabled and networking again receives messages from that node, it resumes reliable communication and considers the node alive again. Reliable communication is resumed from the first message received again from that node. However, when the disconnect is very short, there may still be old messages from that node present in the internal queue, which causes problems when they are put into the out-of-order administration related to that sending node.
Solution: The out-of-order administration now rejects messages that are considered old.

OpenSplice v6.9.2p12

Report ID - Description
OSPL-12967 / 00019799 Networking is not able to resolve an asymmetrical disconnect correctly.
A small memory leak of an internal network administration buffer was discovered in the original fix in V6.9.2p11.
Solution: The buffer is now correctly freed.

OpenSplice v6.9.2p11

Report ID - Description
OSPL-12253 / 00019074 Networking service trace logs of different threads are interleaved
The trace reports of the (secure) RTNetworking service on a busy system can be interleaved, i.e. two threads writing partial traces to the output file that end up on the same line. This decreases readability of the log and makes automated processing more difficult.
Solution: The issue was resolved by no longer writing partial traces to the output file. There is a possibility of order reversal between reports of different threads, though this should only be cosmetic; reports of the same thread will still be in order.
OSPL-12964 / 00019796 Durability does not reach the operational state
When a durability service starts, it checks whether all namespaces have a confirmed master before reaching the operational state. This check contained an error which could cause a durability service not to reach the operational state.
Solution: The condition used in the check has been fixed.
OSPL-12965 / 00019797 First message sent after detecting a remote node may not arrive at that node.
The durability service depends on the reliable delivery of messages provided by the networking service. When the durability service detects a new fellow (node), it sends a capability message to the newly detected fellow. However, under very high load it may occur that this first message is not delivered by the networking service. The networking service provides reliable communication to a remote node on a particular networking partition once it has received the first acknowledgement from the remote node on that partition. However, when the first messages sent after detecting the remote node do not arrive at that node for a duration longer than recovery_factor times the resolution interval, it may occur that this first message is not delivered.
Solution: The configuration option SyncMessageExchange has been added to enable sending a synchronization message to a newly detected node. By default this option is disabled, because older versions do not provide it. When enabled, a synchronization message is sent repeatedly until a corresponding acknowledgement is received or the configured timeout expires. When this option is enabled, the receiving node delays delivery of the first received message until a synchronization message is received or the configured timeout expires.
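As a rough sketch (not taken from the deployment guide), enabling the option might look like the fragment below; the element form and its exact placement within the networking service configuration are assumptions, so consult the deployment guide for the authoritative location:

<NetworkService name="networking">
  <Channels>
    <Channel name="reliable" reliable="true" default="true">
      <!-- Assumed element name/placement: enable the synchronization
           handshake towards newly detected nodes -->
      <SyncMessageExchange>true</SyncMessageExchange>
    </Channel>
  </Channels>
</NetworkService>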
OSPL-12966 / 00019798 Add an option to the networking service to include the send backlog in the throttling calculation.
The throttling calculation uses the number of receive buffers that are in use at the receiving nodes. A receiving node reports the number of used buffers to the sender in its acknowledge messages. However, under high system load, messages may be dropped in the network or in the socket receive buffers of the receiving node. At the sending node, an increase in the number of unacked messages may indicate that some network congestion is occurring. By including the number of unacked messages in the calculation of the throttling factor, a sender may react better to network congestion.
Solution: The ThrottleUnackedThreshold configuration option has been added. When set to a value higher than zero, the number of unacked messages is included in the calculation of the throttling factor: the number of unacked bytes exceeding the ThrottleUnackedThreshold is used in the calculation.
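A minimal sketch of what enabling this might look like; the placement under the channel's Sending element and the example value are assumptions:

<Channel name="reliable" reliable="true">
  <Sending>
    <!-- Assumed placement; unacked bytes above this example threshold
         contribute to the throttling factor -->
    <ThrottleUnackedThreshold>65536</ThrottleUnackedThreshold>
  </Sending>
</Channel>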
OSPL-12967 / 00019799 Networking is not able to resolve an asymmetrical disconnect correctly.
The reliable channel of the networking service considers a remote node dead when it has not received an acknowledgement in time, as controlled by the Resolution, RecoveryFactor and MaxRetry configuration parameters. When a reconnect occurs, it is possible that the remote node did not notice the disconnect and is still waiting for a particular message to arrive. However, this message may no longer be present at the sending node. This may cause the reliable backlog at the receiving node to exceed its threshold or, when this happens for more than one node at the same time, the total number of de-fragmentation buffers to exceed its threshold, resulting in termination of the networking service.
Solution: The configuration option SyncMessageExchange has been added to enable sending a reset message when a remote node reconnects. By default this option is disabled, because older versions do not provide it. When enabled, the reset message is sent repeatedly until a corresponding acknowledgement is received or the configured timeout expires. The reset message contains the sequence number of the first message that is available and the next sequence number to be sent, which allows the receiving node to reset its reliable administration.

OpenSplice v6.9.2p10

Report ID - Description
OSPL-12842 / 00019595 Durability may not notice a temporary disconnect caused by the reliable backlog threshold being exceeded at networking level.
When a receive channel is overloaded or does not get enough resources, ack messages may not be sent in time. This can result in an asymmetric disconnect at the sender side, which in turn can mean that an expected packet is never received at the receiver side, causing the reliable backlog threshold to be exceeded. The resulting short disconnect and reconnect at the receiver side may not be noticed by the spliced daemon and the durability service, so durability may not receive an expected message.
Solution: When the networking receive channel detects that the reliable backlog is exceeded, it declares the corresponding sending node dead for some time, to prevent messages still available in the receive socket from immediately reconnecting the sending node. Furthermore, the durability service now reacts not only to the disposed state of the heartbeat but also to the no-writer state, which indicates that a remote node has died and may become alive shortly thereafter.
OSPL-12852 / 00019601 Improve max-samples threshold warning reports and configurability
The max-samples (and samples-per-instance) threshold warnings, when triggered through one of the PSMs, would imply a more serious error, or in other circumstances would not be reported at all. It was also not possible to disable the threshold warnings via the configuration file.
Solution: The report mechanism was changed so the warnings are consistently reported at the appropriate verbosity. The relevant configuration parameters (//Domain/ResourceLimits) now accept a value of '0' to disable the reports.
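For illustration, disabling the reports could look roughly as follows; the WarnAt child elements are assumed from the deployment guide's naming conventions:

<Domain>
  <ResourceLimits>
    <MaxSamples>
      <WarnAt>0</WarnAt>            <!-- 0 disables the max-samples warning -->
    </MaxSamples>
    <MaxSamplesPerInstance>
      <WarnAt>0</WarnAt>            <!-- 0 disables the samples-per-instance warning -->
    </MaxSamplesPerInstance>
  </ResourceLimits>
</Domain>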

OpenSplice v6.9.2p9

Report ID - Description
OSPL-127838 / 00019602 Non-verbose error reports when key-file cannot be created.
When a shared-memory domain is started, a key-file with metadata is created in a temporary directory (i.e. OSPL_TEMP or /tmp). If this directory doesn't exist or filesystem permissions don't allow creation of the file, an error report is created without including the path.
Solution: The error report was extended to include path information.

OpenSplice v6.9.2p8

Report ID - Description
OSPL-12799 / 00019578 Possible crash in durability service during termination.
After the work on OSPL-12648 was finished we discovered that there was another path in the durability termination mechanism that could lead to a crash.
Solution: The order in which threads are terminated was rearranged to ensure this crash cannot occur anymore.

OpenSplice v6.9.2p7

Report ID - Description
OSPL-12648 / 00019383 Possible crash in durability service during termination.
A number of threads in the durability service access shared data. Depending on the context, during termination the cleanup of this data in one thread can cause another thread to crash.
Solution: The order in which threads are terminated was changed to ensure the data is cleaned up after all relevant threads have finished.
OSPL-12660 / 00019363 Files created outside the configured OSPL_TEMPDIR location.
The shared-memory monitor creates 'osplsock' files which do not adhere to the configured OSPL_TEMPDIR environment variable. Instead, the file is always created in /tmp.
Solution: The code responsible for socket file creation is changed to prepend the value of OSPL_TEMPDIR instead of '/tmp'. Note the fallback is still to use '/tmp' in case OSPL_TEMPDIR is unset.

OpenSplice v6.9.2p6

Report ID - Description
OSPL-11809 / 00018836 Possible crash after detaching domains in a multi-domain application.
A second case was discovered in the user-layer code that protects access to the kernel (see previous OSPL-11809 release note). A small window exists that allows a thread to access kernel memory while the domain is already detached.
Solution: The previous fix involved protecting access for threads leaving a protected (kernel) area. Entering the kernel also contained an issue, fixed by storing the relevant data in the user-layer so it can be accessed safely.
OSPL-12342 Crash of the spliced service when using OSPL_LOGPATH.
Possible crash of the spliced service (in a shared memory configuration) or application (in a single process configuration) when the user sets the environment variable OSPL_LOGPATH.
Solution: The crash was caused by spliced trying to free the memory of the variable returned by the getenv operation, which is neither required nor allowed. The problem is solved by removing the free from the code.
OSPL-12454 Similar conflicts are not always combined, which may potentially lead to slow or failing alignment.
The durability service is responsible for keeping states consistent. Whenever an event happens that requires the durability service to take action (e.g., a disconnect/reconnect), a so-called conflict is generated that needs to be resolved. Under certain circumstances multiple but similar conflicts can be generated. Because these conflicts are similar, it is sufficient to resolve only one of them. However, due to a bug it was possible that multiple similar conflicts were generated and resolved sequentially. In particular, it was possible that multiple conflicts were generated at a rate that effectively caused the conflict queue never to become empty. Because the durability service only advertises group completeness when the conflict queue is empty, this could effectively lead to stalling alignment.
Solution: The algorithm to decide when conflicts are similar is changed, so that similar conflicts are now being discarded.

OpenSplice v6.9.2p5

Report ID - Description
OSPL-12252 / 00019073 Out of memory due to memory leakage.
Various memory leaks in shared memory caused out-of-memory errors when calling API functions.
Solution: The memory leaks have been fixed.

OpenSplice v6.9.2p4

Report ID - Description
OSPL-12211 Durability service with legacy master selection not directly slaving to existing master.
When a durability service with the highest fellow id and an unconfirmed master starts legacy master selection, it fails to slave to the existing master until it starts majority voting.
Solution: The existing master is now selected sooner.
OSPL-12133 / 19018 After a reconnect, durability is not aligned when a fellow returns within 2*heartbeatPeriod and the legacy master selection algorithm is used.
When a fellow connects to a durability service, a connect conflict is created. When this happens within twice the heartbeatPeriod after a fellow got disconnected, the durability service has potentially lost its master. In that case the connect conflict was discarded because no master was selected, which could lead to inconsistent data states.
Solution: The connect conflict is no longer discarded when no master is selected. The conflict remains until a merge action is applied.

OpenSplice v6.9.2p3

Fixed bugs and changes not affecting the API in OpenSplice 6.9.2p3

Report ID - Description
OSPL-11873 Possible application crash when using OpenSplice as a Windows service.
In V6.9.2p2 a fix was made for the issue that an access violation could occur when OpenSplice was used as a Windows service and an application was started. Unfortunately, part of the fix was not included in V6.9.2p2. This is now corrected.
Solution: The complete fix is now included and the crash should not occur anymore.

OpenSplice v6.9.2p2

Fixed bugs and changes not affecting the API in OpenSplice 6.9.2p2

Report ID - Description
OSPL-12016 Logrotate not working correctly with OpenSplice log files.
When using logrotate with the RTNetworking service, rotating works but the truncate function does not work properly.
Solution: The problem is fixed and the RTNetworking service log can now be properly truncated.
OSPL-12007 When the IDL contains a union with an unsigned short discriminant, serialization may fail on big-endian hosts.
If the IDL contains a union with an unsigned short discriminant (and only an unsigned short), the constants for the case labels are represented in the OpenSplice metadata as a different type than the type of the discriminant itself. In the process of converting the type into instructions for the serializer VM, the case labels are read from memory as if their representation were that of the discriminant type (even though the values carry a type indication themselves, and this type indication is correct for the actual value). On a big-endian machine, it therefore reads all case labels as 0. Consequently, any case that differs from 0 is handled as case 0, leading to an invalid serialization. Users may experience this as a communication problem.
Solution: The constants for the case labels are now correctly represented in the OpenSplice metadata.
OSPL-12010 Tester, Tuner, and Jython scripting fail to write samples containing a sequence of enum.
When writing a sample with any of the above tools whose type contains a sequence of an enumeration type, any readers receive the sample with the sequence empty. This is due to an error in the writing-side serialization for sequence-of-enum types.
Solution: The serializer in the underlying API has been fixed to account for sequence-of-enum types.
OSPL-11965 Possible crash in durability termination when handling an event.
The durability service could crash during termination when handling an event in the "AdminEventDispatcher" thread, because the "conflictResolver" thread was stopped first and the events could depend on that thread's administration.
Solution: The "AdminEventDispatcher" thread is now stopped before the "conflictResolver" thread.
OSPL-11873 Possible application crash when using OpenSplice as a Windows service.
When using OpenSplice as a Windows service under a different user than the application, a possible application crash can occur due to a permission problem. An indication that this is occurring is the following message in the error log: "Failed to allocate a wait handler on process handle, System Error Code: 87".
Solution: The permission problem is fixed and the user application can now correctly communicate with the service.
OSPL-11829 RnR service property file incompatibility.
When using the RnR service on Windows, the property file that is created has Windows line endings (^M). Using this property file under Linux can cause problems.
Solution: The problem is fixed and the property file incompatibility is gone.
OSPL-11809 Possible crash in write after detach_all_domains.
It was possible that a multithreaded application would crash when one thread called detach_all_domains while another API call (for example a DataWriter write) accessed the domain concurrently: the other API call could access the domain while it was being detached, which caused the crash.
Solution: It is no longer possible to access the domain while it is detaching.
OSPL-11807 No SSL support for ddsi in Community Edition.
According to the documentation, it should be possible to configure ddsi to use SSL, even in the Community Edition, and the ddsi code base in the Community Edition contains everything needed to support SSL. The only reason SSL support was not available in the Community Edition was that the build files did not explicitly include this feature by passing the appropriate macro and including the required SSL libraries.
Solution: The build files have been modified: if SSL support is available on the build platform, SSL support is enabled by default in the Community Edition.
OSPL-11778 Merge state incorrectly updated when the last partition.topic in a namespace did not change while others did.
OSPL-10113 addressed an issue where merging with an empty set could lead to a state update even when a data set was not changed. However, it only looked at the last aligned partition.topic in the namespace for changes, instead of at all partition.topics that are part of the namespace, and thus sometimes did not update the namespace state when it had to.
Solution: All partition.topics in the namespace are now examined to determine whether the namespace state needs to be updated.
OSPL-11690 A booting node with an aligner=FALSE policy may not reach the complete state in case it has not detected all namespaces from its aligner in time.
If a node has a namespace with an aligner=FALSE policy, the node should only reach completeness when an aligner becomes available. Due to unlucky timing, it is possible that the node detects an aligner but has not yet received all of its namespaces. In that case the node does not request the completeness state of the groups (i.e., partition/topic combinations) from its aligner while traversing its initial startup procedure. Because it does not request the groups from the aligner, the node will not know that the aligner has all its groups complete, and therefore never requests data from the aligner. This causes the node to never reach the completeness state.
Solution: Group information is now always requested from aligners if not done so already.

OpenSplice v6.9.2p1

Fixed Bugs and Changes not affecting API in OpenSplice 6.9.2p1

Report ID - Description
OSPL-11708 Out-of-order delivery of an invalid message might corrupt the instance state.
In V6.9.1, a bug was introduced where out-of-order delivery of an invalid message might corrupt the instance state, resulting in undefined behavior. Out-of-order delivery occurs when a Reader uses BY_SOURCE_TIMESTAMP ordering and a message with a source timestamp older than the currently stored message is delivered.
Solution: The ordering algorithm has been modified so that this scenario no longer corrupts the instance state.
OSPL-9054 Leakage of JNI local refs when using the Java API.
When using the Java API and taking a lot of samples at once, the number of JNI local refs could exceed the JNI local ref limit.
Solution: The leakage is fixed and will not occur anymore.
OSPL-11671 Missing initial dynamic network configuration state.
When using the RT Networking service in combination with dynamic network configuration, it could happen that the initial publication of the dynamic network configuration state did not occur.
Solution: The problem is fixed and the initial dynamic network configuration state is now always published.
OSPL-11718 The dcpsPublicationListener is not freed correctly when durability terminates.
When client durability is enabled, the durability service creates a reader that listens to DCPSPublication messages. When durability terminated, this reader was not cleaned up correctly, which could lead to a memory leak. This problem can only surface when client durability is enabled.
Solution: The reader is now cleaned up correctly.
OSPL-11550 Inefficient alignment in case of master conflicts for the same namespace with different fellows.
When nodes become temporarily disconnected, they may choose different masters to align from. When reconnected again, this leads to a situation where there are multiple masters in the system. To resolve this situation, a master conflict is scheduled that needs to be resolved. Resolving a master conflict leads to choosing a single master again, and in many cases leads to alignment between nodes. Previously, a master conflict was generated per namespace per fellow (i.e., a remote durability service). In case there are many fellows this leads to many conflicts, and hence many alignments. It is not necessary to generate a conflict per fellow to recover from such a situation.
Solution: Whereas in the past two master conflicts for the same namespace with different fellows led to different conflicts that each needed to be resolved, they now lead to a duplicate conflict that is dropped. This decreases the number of master conflicts in case there are many fellows, and hence may decrease the number of alignment actions that occur.
OSPL-11531 Similar conflicts can be resolved multiple times.
The durability service monitors the state of fellow durability services to detect any discrepancies between them. If there is a discrepancy, a conflict is generated to resolve it. Before a conflict is added to the queue of pending conflicts, a check is done to see whether a similar conflict is not already queued; if it is, the newly generated conflict is not added to the queue. What was not checked is whether the conflict that is currently being resolved is similar to the one that is generated. Consequently, if a conflict (say conflict 1) is being resolved and another, similar conflict (say conflict 2) is generated to resolve the same discrepancy, then conflict 2 could still be added to the queue of pending conflicts. This could cause the same conflict to be resolved multiple times.
Solution: Before a conflict is added to the queue, it is now also checked whether the conflict currently being resolved is similar.
OSPL-11606 Sample validation by DataWriter may cause compilation issues when enabled.
By default, the ISOCPP2 DataWriter does not validate sample input against, for example, the bounds specified in IDL. In order to enable bounds checking, the macro OSPL_BOUNDS_CHECKING should be set when compiling the idlpp output. However, when setting this macro it could turn out that some of the generated code did not compile.
Solution: Bounds validation has been corrected, and is now also enabled by default, because the importance of bounds checking far outweighs the tiny bit of extra overhead it introduces.
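For illustration, the macro is simply passed to the C++ compiler when building the idlpp-generated sources, along these lines (placeholders in brackets):

g++ -DOSPL_BOUNDS_CHECKING [other-compile-options] [idlpp-generated-sources]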

OpenSplice v6.9.2

Fixed Bugs and Changes not affecting API in OpenSplice 6.9.2

Report ID - Description
OSPL-11753 Dispose all data not processed by remote nodes
When doing a dispose_all_data call on a topic, local instances are disposed and a DCPSCandMCommand sample is published on the network, for other nodes to dispose instances of that topic as well. The sample is processed by the builtin subscriber of spliced, which in turn disposes the relevant instances. A bug in spliced, related to setting the event-mask, caused it not to wake up when new data was available, so no instances were disposed on remote nodes.
Solution: The bug was fixed by properly setting the data-available event-mask.
OSPL-11500 The mode to store data in human-readable format in the KV store was undocumented, and trying to use it would lead to an error.
The KV store is an efficient implementation of a persistent store that by default stores data as blobs that are difficult to interpret by humans. The KV store can be configured to store the data in human-readable format, but unfortunately this mode was not documented. Because only valid configurations can be used to start OpenSplice and this option was not documented, it was considered invalid. Therefore, it was not possible for a user to configure the KV store in human-readable format.
Solution: The option to configure the KV store has now been documented and does not lead to an error any more. See //OpenSplice/DurabilityService/Persistent/KeyValueStore[@type] for more information.
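A minimal sketch of where the attribute lives; the type value shown is only an assumed example, so check the referenced documentation for the value that selects the human-readable format:

<DurabilityService name="durability">
  <Persistent>
    <StoreDirectory>/path/to/store</StoreDirectory>
    <KeyValueStore type="sqlite"/>  <!-- set type to the documented human-readable variant -->
  </Persistent>
</DurabilityService>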
OSPL-11398 idlpp generated code for C# has several issues with respect to handling IDL collections and unions.
The idlpp backend for C# generates code that has several issues when dealing with IDL collections and unions. Examples of things that would fail to compile or fail to work were:

Sequences of arrays and arrays of sequences.
Sequences of boolean and sequences of char in unions.
Sequence of unions.
Inner types (structs/unions inside of other structs/unions).
Sequence of enumerations in unions.

Solution: The C# backend of idlpp has been modified to generate code that handles these cases correctly.
OSPL-11533 C99: dds_instance_get_key seg faults with generic copy-out method
When using the C99 dds_instance_get_key function on a generic data writer, a seg fault occurs in the generic copy-out function due to a null pointer.
Solution: Added the missing copy code to DDS_DataWriter_get_key_value so that it now works when using generic copy routines. DDS_DataWriter_get_key_value is called by the C99 dds_instance_get_key, which caused the seg fault in the copy-out routine.
OSPL-11774 Added a verbosity tag to the native NetworkingService configuration.
Previously, the NetworkingService only supported tracing categories, where for each category a trace level 0..6 could be specified, while all other services provided a tracing option where a single trace level 0..6 could be specified.
Solution: The new NetworkingService option is an additional way to set trace levels besides the existing tracing category configuration option. The category options specify the trace level per category, whereas the new verbosity option sets the trace level for all categories.
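A sketch of the idea; the element name and its placement under Tracing are assumptions based on the description above:

<NetworkService name="networking">
  <Tracing enabled="true">
    <Categories>
      <Default>3</Default>          <!-- existing per-category trace levels -->
    </Categories>
    <Verbosity>4</Verbosity>        <!-- assumed name for the new option: one level for all categories -->
  </Tracing>
</NetworkService>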
OSPL-11799 Dbmsconnect incompatibility with MS SQL 2017.
When using the Dbmsconnect service replication functionality with MS SQL 2017, triggers do not function as expected.
Solution: The fault in the replication functionality in dbmsconnect is now fixed.
OSPL-11593 / 18657 Potential race condition between threads creating/deleting DomainParticipants.
When an application has multiple threads that are creating and deleting participants into the same Domain in parallel, a race condition may occur between the thread deleting the last participant to that Domain (and thus implicitly deleting the Domain itself) and a thread that is creating a new participant to that Domain (and thus implicitly creating the Domain that is in the process of being deleted).
Solution: The race condition has been removed by synchronizing threads that create and destroy DomainParticipants.
OSPL-11707 / 18758 Representation of IDL constants not correct in many language bindings of idlpp.
The representation of IDL constants in many of the language bindings of idlpp was incorrect. For example, negative numbers in C/C++ could result in compilation warnings, and in Java in compilation errors. Also, the representation of an enumerated constant would result in compilation errors in ISOCPP2, C# and Java.
Solution: Representation for constants has been corrected in all the above languages.
OSPL-11124 / 18494 ISOCPP2 Topic proxies do not correctly handle their reference to the DomainParticipant.
When creating topic proxies of type dds::topic::AnyTopic or of type dds::topic::TopicDescription using the dds::topic::discover_all() function, then releasing the participant before releasing the proxies themselves results in a NULL pointer dereference and a subsequent crash.
Solution: All Topic proxies now correctly manage their reference to the participant so that they keep the participant alive for as long as the proxy is alive itself.
OSPL-11289 / 18567 Idlpp generates ISOCPP2 code that may not compile for an Array with elements that are a typedef to some other type.
When using the ISOCPP2 backend of idlpp to compile an IDL model that has an array with elements that are a typedef to some other types, the generated C++ code will not always compile with a C++ compiler.
Solution: The ISOCPP2 backend has been modified to generate C++ code that does compile correctly for these types of IDL constructions.

OpenSplice v6.9.1p1

Fixed Bugs and Changes not affecting API in OpenSplice 6.9.1p1

Report ID - Description
OSPL-11643 Failure to retrieve the participant key from the builtin topic data of publishers and subscribers in isocpp2
When the participant_key() is requested, the template TPublicationBuiltinTopicData returns the key() of the delegate instead of the participant_key() of the delegate. Consequently, the wrong key is returned. The same issue existed for TSubscriptionBuiltinTopicData.
Solution: The participant_key() of the delegate is returned instead of the key().
OSPL-11624 Perfectly valid combination of DataStates incorrectly rejected.
When an application indicates that it wants to read data that is either ALIVE or NOT_ALIVE_NO_WRITERS, it should create a DataState object that has both states set by invoking .with(InstanceState.ALIVE).with(InstanceState.NOT_ALIVE_NO_WRITERS). However, if an application defines a state like this, the combination is incorrectly rejected by throwing an IllegalArgumentException with the message "Invalid InstanceState".
Solution: The combination of an ALIVE state with either a NOT_ALIVE_NO_WRITERS_STATE or NOT_ALIVE_DISPOSED_STATE is no longer rejected.
OSPL-11601 Compile warnings in Corba-Java code generated by idlpp
When compiling classes generated by idlpp in Corba-Java mode, with a recent compiler, warnings occur because certain classes, inheriting from a Serializable base-class, lack the required serialVersionUID member. Note that it is auto-generated if missing, so does not cause any runtime issues. It does trigger compile warnings if compiled with -Xlint:all.
Solution: The serialVersionUID was added to relevant idlpp templates so that generated code can be compiled warning-free.
OSPL-11600 Signal handler may crash when running into a deregistered ExitRequestCallback.
The signal handler used within OpenSplice allows services/applications to register one or more ExitRequestCallbacks that are executed when an ExitRequest is received. When an exit request is received after one or more ExitRequestCallbacks have been deregistered, the signal handler may crash. This happens, for example, when a service needs more time to terminate gracefully than spliced has been configured to wait for: when the configured time expires without graceful termination, the service receives a SIGTERM from spliced after it has already deregistered its ExitRequestCallback, and subsequently crashes.
Solution: The algorithm to remove deregistered ExitRequestCallbacks has been corrected to no longer crash in these circumstances.

OpenSplice v6.9.1

Fixed bugs and changes not affecting the API in OpenSplice 6.9.1

Report ID - Description
OSPL-11629 Missing historical data when the alignee is ON_REQUEST.
On wait_for_historical_data, the reader generates a HISTORICAL_REQUEST event; when the durability service receives this event, it sends out a historical data request. The durability service never received the event due to an error in the internal waitset implementation: it was not thread safe.
Solution: The internal waitset has been made thread safe.
OSPL-11595 Wrong master selected for federations with the same master priority.
Master selection is done based on three variables, in priority order: master priority, store quality and systemId. It was possible that with federations having the same master priority and no store, the federation with the lowest systemId was selected as master, because the store quality was set to a random value instead of zero.
Solution: The store quality is now set to zero when no store is used.
OSPL-11236 Instance purged before auto-purge delay is expired.
When using an autopurge_nowriter_samples_delay or autopurge_disposed_samples_delay in a DataReader QoS, an instance can be purged before the delay has expired. The time comparison to determine whether a delay has expired uses the monotonic clock implementation. This means an invalid time-stamp can be returned if the "current" monotonic time is less than the autopurge delay. The monotonic time represents time since an unspecified starting point, which is often the time the system was started, so the issue occurs when the system uptime is less than the autopurge delay.
Solution: The issue was fixed by checking for an invalid timestamp before using its value to calculate whether the instance should be purged.
OSPL-11559 Missing release note in V6.9.0 release
In our V6.9.0 release we fixed an issue but forgot to add a release note. The release note for ticket OSPL-11502 has now been added to the V6.9.0 release notes.
OSPL-11503 Potential Deadlock releasing Python DDS entities
The DDS Python API method that releases DDS entities (dds.Entity.close) would occasionally hang the main Python thread. This could only occur if a child entity registered a 'listener', and that listener gets triggered as part of closing the entity.
Solution: The deadlock has been removed.
OSPL-11268 IDLPP Python language binding ignored Constant declarations
In release 6.9.0, the Python binding for IDLPP ignored Constant statements.
Solution: In release 6.9.1, IDLPP now generates appropriate Python declarations.
OSPL-11130 Python API: unable to create data readers and writers directly from a participant
In release 6.9.0, the Python API did not support creating data readers and writers directly from a domain participant. Instead, the user had to create a publisher (data writers) or subscriber (data readers), first.
Solution: The dds.DomainParticipant now has methods create_datareader and create_datawriter.
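A minimal sketch of the new shortcut; the ddsutil helper names used to obtain the topic are assumed from the binding's dynamic class generation described elsewhere in these notes, and the IDL file and type names are hypothetical:

import dds
import ddsutil

dp = dds.DomainParticipant()
# Hypothetical IDL file and type name, used to obtain a topic object
gen_info = ddsutil.get_dds_classes_from_idl('HelloWorld.idl', 'HelloWorldData::Msg')
topic = gen_info.register_topic(dp, 'HelloWorldTopic')
writer = dp.create_datawriter(topic)  # no intermediate Publisher required anymore
reader = dp.create_datareader(topic)  # no intermediate Subscriber required anymore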
OSPL-11264 Python API: Simplified method to find and use existing topics
In release 6.9.0, multiple steps were required to find an existing topic and set it up so that a Python program could read and write data from it.
Solution: A new method, ddsutil.find_and_register_topic has been created. It is a one-step solution to finding and locally registering an existing topic.
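A sketch under the assumption that the method takes the participant and the topic name and returns the registered local topic (the exact signature and return value are not spelled out above):

import dds
import ddsutil

dp = dds.DomainParticipant()
# One step instead of find_topic / register_found_topic_as_local /
# get_dds_classes_for_found_topic
local_topic = ddsutil.find_and_register_topic(dp, 'OsplTestTopic')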
OSPL-11269 Improved type checking in Python code generated by IDLPP
In release 6.9.0, IDLPP for the Python language generated code that had little type checking. It was possible to set field values to inappropriate data types and/or value ranges.
Solution: Type checking in generated Python code has been improved. Python exceptions are now thrown if inappropriate values are set.
OSPL-11271 Full support for listeners in DCPS API for Python
The 6.9.0 release of the DCPS API for Python included support for the on_data_available listener only.
Solution: All listener methods have been implemented.

OpenSplice v6.9.0

Fixed Bugs and Changes not affecting API in OpenSplice 6.9.0

Report ID - Description
OSPL-11245 License checks are inconsistent.
Licensing of components was not consistently checked, which could result in improperly counted licenses.
Solution: License checking has been improved.
OSPL-11160 QoS mismatch between OpenSplice and Lite when using the C99 API.
A Writer did not match a Reader (and vice versa) when the Topic QoS was configured as "RELIABLE" and "TRANSIENT_LOCAL". This caused late joiners not to receive any samples, even though the Topic Durability QoS was set to "TRANSIENT_LOCAL" and the Topic Reliability QoS was set to "RELIABLE" on both sides.
Solution: The QoS mismatch is fixed and OpenSplice and Lite can now communicate correctly using the C99 API.
OSPL-11199 Idlpp spins indefinitely when compiling an IDL data struct with indirect recursion.
When you try to compile an IDL data model that has indirect recursion (i.e. a datatype has a reference to another datatype that eventually refers back to the original datatype), the IDL compiler starts to spin indefinitely, trying to walk through the recursive cycle over and over again.
Solution: The algorithm used to handle recursive datatypes has now been modified to also support indirect recursion in a correct way.
OSPL-11157 Installer asks for license file even if the user declines to supply one.
The installation process asked for a license file even when the user answered N to providing one.
Solution: The installation process will no longer ask for an existing license file if the user declines to supply one.
OSPL-11028 Python language support for IDLPP
The Python language binding shipped in Vortex OpenSplice 6.8.3 did not support compilation of IDL into Python classes. Instead, the binding provided a method for dynamically creating Python classes, given an IDL file. While dynamic generation of Python classes is functionally equivalent to having IDLPP create Python code, source-code aware editors can provide better content assistance while editing if they have access to source code.
Solution: IDLPP now supports a Python language binding:
idlpp -l python [other-idlpp-options] idl-file
OSPL-11026 Using topics defined in other DDS applications
In Vortex OpenSplice 6.8.3, the Python binding for DDS did not allow a Python application to access a topic without having access to the IDL defining that topic.
Solution: The Python binding now supports a mechanism for registering a topic found via DomainParticipant.find_topic as a local topic. A local topic is a topic for which locally defined Python classes exist. The process for creating a local topic from a found topic is illustrated in the following example:
import dds
import ddsutil

dp = dds.DomainParticipant()
found_topic = dp.find_topic('OsplTestTopic') # defined by Tester
local_topic = ddsutil.register_found_topic_as_local(found_topic)
gen_info = ddsutil.get_dds_classes_for_found_topic(found_topic)
OsplTestTopic = gen_info.get_class(found_topic.type_name)
# proceed to create publishers, subscribers, readers & writers by referencing local_topic
OSPL-11135 Inconsistent treatment of character data in Python binding.
In the Vortex OpenSplice 6.8.3 beta of the Python binding, Python properties corresponding to IDL fields of type 'char' and 'string' were treated inconsistently. Sometimes they would accept, expect or return a Python bytes value; at other times, a Python str (string) would be used.
Solution: Treatment of IDL string and char values has been standardized as mapping to Python str values. You should always use a Python str when writing such properties, and always expect a str to be returned by such properties. For arrays or sequences of IDL char, the Python equivalent is a list of str, where each Python string is exactly one (1) character long.
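As an illustrative sketch, with Msg standing in for some idlpp-generated class that has IDL fields char initial, string name and char tag[3] (the class and field names are hypothetical):

sample = Msg()
sample.initial = 'A'            # IDL char: a Python str of exactly one character
sample.name = 'example'         # IDL string: an ordinary Python str
sample.tag = ['x', 'y', 'z']    # array of IDL char: a list of one-character str values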
OSPL-11238 QoS parameter mandatory on some Python APIs
The beta version of the Python DCPS API (Vortex OpenSplice 6.8.3) included several methods where quality of service (QoS) parameters were mandatory. This was inconsistent with other methods, where you could rely on appropriate defaults being generated.
Solution: All QoS parameters in the Python DCPS API have been made optional, and, if not provided, then an appropriate default is used.
OSPL-11248 Python API has no way to explicitly delete entities.
The beta version of the Python DCPS API (Vortex OpenSplice 6.8.3) did not provide a mechanism for explicitly deleting entities. Instead, all entities were released at the end of the Python session.
Solution: A close method has been added to all entity classes (DomainParticipant, Publisher, Subscriber, Topic, DataReader and DataWriter) that explicitly releases the entity and all its child entities.
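A short sketch of explicit cleanup; the create_publisher call is assumed from the surrounding notes:

import dds

dp = dds.DomainParticipant()
pub = dp.create_publisher()
# ... create writers and publish data ...
pub.close()   # releases the publisher and its child writers
dp.close()    # releases the participant and all remaining child entities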
OSPL-11248 Python support for DCPS built-in topics
The beta version of the Python DCPS API (Vortex OpenSplice 6.8.3) did not include support for built-in DCPS topics.
Solution: The Vortex OpenSplice 6.9.0 release includes support for the built-in topics. Because the DCPS topics are pre-registered in every domain, you may find it convenient to use 'over-the-wire' topic discovery to use these topics. See the documentation for ddsutil.find_topic, ddsutil.register_found_topic_as_local and ddsutil.get_dds_classes_for_found_topic, as well as the Python DCPS API Guide.
