New versions of OpenSplice are released on a regular basis. This page lists all the known issues for OpenSplice V6.10.x.
You may also want to read the following:
Known Issues in OpenSplice V6.10.x
Report ID. | Description |
---|---|
OSPL-13076 | Query condition on union. Queries on unions are not supported. A possible workaround is to query the discriminant of the union (since that field is always available), but you cannot address a branch of the union, since its availability depends on the discriminant of the sample being examined, and this information is not available when the query is compiled (which happens offline). |
OSPL-13016 | Instance liveliness state is not always correct for late joiners. The liveliness state of instances may be incorrect for late joiners when live writers exist for an instance without having data for that instance. This can occur when multiple publishers exist and one writer overwrites data from another writer, or when a writer uses the lifespan policy and the data has expired. In these cases, the liveliness of the writer will not be detected until the writer updates the instance; once updated, the liveliness will be correct again. |
OSPL-12891 | Warnings when communicating between Vortex OpenSplice and Vortex Lite. When communicating between Vortex OpenSplice and Vortex Lite while using the Durability service, two warnings will appear in the ospl-info log file: * Detected Unmatching QoS Policy: 'Liveliness' for Topic d_historicalData * Detected Unmatching QoS Policy: 'Liveliness' for Topic d_historicalDataRequest These warnings can be ignored, as the readers/writers for these topics function correctly. Fixing these warnings would introduce a QoS incompatibility in already deployed environments, so we have chosen not to fix them. |
OSPL-12739 | get_discovered_participants is incompatible with application use of the builtin subscriber and potentially leaks memory. There are problems with DomainParticipant get_discovered_participants: * It uses the built-in subscriber's data reader for DomainParticipant, but if the application uses this reader for its own purposes and performs takes on it, get_discovered_participants will only return handles for the participants left in the reader by the application. * It returns dead participants, which is not really what one wants; because the disposed instances are not garbage collected, simply using get_discovered_participants causes a memory leak. Workaround: build directly on data readers for the built-in topics that get created inside OpenSplice at most once in the lifetime of a node and its domain participant, and read from these. In OpenSplice there is nothing special about the "built-in" subscriber, or about data readers for a built-in topic, so this is completely supported. It is possible to avoid having to garbage collect participants/readers/writers that once existed but have since been removed by setting the "autopurge_disposed_samples_delay" QoS to 0 on these readers. A code example on how to do this can be found here |
OSPL-11144 | SignalHandler has a small chance of deadlocking. The signalHandler released in V6.8.3 has a small chance of deadlocking when the thread handling an ExitRequest is busy registering a new handler. This is because the function registering a handler uses a mutex to protect the handler administration, while the signalHandler needs the same mutex when iterating through the handlers. Right now the chance of the signalHandler deadlocking as a consequence is pretty small, since most handlers are set right at the start of the application, which is not typically a time in which ExitRequest signals are sent to such an application. |
OSPL-11136 | DDS block compile errors with Simulink Coder and VS 2015 A Simulink model compiled with Simulink Coder and Visual Studio 2015 will fail with a compilation error: fatal error C1083: Cannot open include file: 'dds.h': No such file or directory This is due to an underlying change in how NMAKE treats quoted environment variables. Workaround: 1) Start MATLAB. 2) From the Home menu, choose Add-ons > Manage Add-ons. 3) Select Open Folder for Vortex_DDS_Block_Set (R2017a), or choose Open Folder from the Vortex_DDS_Block_Set options menu (three vertical dots) (R2017b). 4) Open the file rtwmakecfg.m. 5) Change line 11 to read: osplhome = getenv('OSPL_HOME'); 6) Save the rtwmakecfg.m file and rebuild your Simulink model. |
dds184 | Query parser doesn't support escape characters The internal OpenSplice DDS query parser does not support escape characters. This means that specific tokens cannot be used in query expressions (for instance the SQL keywords 'select', 'where', etc.). Impact at API level: * Topics with an SQL keyword as name cannot be created * QueryCondition expressions cannot refer to data fields with an SQL keyword as name * ContentFilteredTopic expressions cannot refer to data fields with an SQL keyword as name |
4508 dds206 | TypeSupport with invalid type name causes crash during register_type When a type support object is created with a type name that is not known in the meta database, a subsequent call to the register_type function crashes. |
dds492 | idlpp cannot handle same struct in a struct or forward declarations to structs The following (faulty) IDL generates a 'floating point exception', whereas idlpp should reject such constructs: struct TestStruct; struct TestStruct{ long x; TestStruct someEnum; string val; }; The following IDL also fails (the forward declaration of TestStruct is not correctly processed): struct TestStruct; struct TestStruct1{ TestStruct y; }; struct TestStruct{ long x; }; with the error: ***DDS parse error TestStruct undefined at line: 4. The following IDL construct is not allowed either, but the IDL preprocessor does not give a clear error: struct TestStruct; struct TestStruct1{ TestStruct y; }; struct TestStruct{ TestStruct1 x; }; |
4821 dds494 | SQL RelOp 'like' not supported Using the SQL relational operator 'like' is not supported. |
dds1117 | Implicit unregister messages can corrupt copy-out functions In all language bindings there are methods that only use the key fields of a sample, for example the register, unregister and dispose methods. However, currently the complete sample (including the non-key fields) needs to adhere to the IDL-to-language mapping rules, as all fields are validated. This means that when a sample contains garbage data in its non-key fields, the sample can be rejected and the application might even crash in the case of dangling pointers (segmentation fault). The workaround is to ensure that no values are initialised to NULL, no values contain dangling pointers, all unions are explicitly initialised to a valid value, and any enumeration value remains within its bounds. |
dds1696 | Limitations for output directories for ospl_projgen on Integrity ospl_projgen will generate projects which will build incorrectly if it is supplied an output directory ( -o option ) in which the final part of the path matches the name of one of the address spaces being generated. e.g. ospl_projgen ... -t mmstat -o path/mmstat These projects appear to build correctly however the final image will be incorrect. Other names to avoid currently are inetserver, ivfs_server, ResourceStore, spliced, networking, durability, pong, ping1, ping2, ping3, ping4, ping5, ping6, shmdump, Chatter, Chatter_Quit, MessageBoard, UserLoad |
dds1711 | Warnings when compiling with the Studio12 compiler There are still numerous warnings when using the Studio12 compiler. These can be ignored and will be tidied in future releases. |
dds2142 | Default buffer size used by networking may cause an error to be logged on Solaris 9. On Solaris 9 an error may appear in ospl-error.log when the networking service is started: "setsockopt returned errno 132 (No buffer space available)". This is caused by udp_max_buf being too small. To find out what the system has it set to, run: /usr/sbin/ndd -get /dev/udp udp_max_buf and to set it larger, run: /usr/sbin/ndd -set /dev/udp udp_max_buf <new-size> |
dds3276 | Tester - Reconnection to shared memory OpenSplice domain on Windows fails On Windows, when trying to reconnect to a running domain of OpenSplice DDS that utilises shared memory the reconnection will fail. Workaround: Restart OpenSplice Tester. |
dds2260 / OSPL-259 / OSPL-6304 | idlpp cannot handle recursive sequences The idlpp tool is not able to cope with recursive sequences, as in: module example { typedef sequence<DataType> NameList; struct DataType { sequence<NameList> nameLists; }; }; |
OSPL-973 | Partitions with wild-cards don't work properly in all cases The PartitionQosPolicy for Publisher and Subscriber entities can contain two types of values: an absolute value that specifies a partition, or a partition expression, i.e. a name containing the wildcard symbols '?' and/or '*'. A partition expression is used locally by the entity to discover matching absolute partitions and build up connections. Entities react on the creation of new partitions, and those that match the partition expression are connected. Unfortunately, information about newly created remote partitions is not distributed at this time. This means no matching can be performed to determine whether a remote partition must be instantiated locally. As a result, Subscribers and Publishers that use wild-cards in a partition expression won't connect to partitions that are not explicitly created in the local application (when running in single-process mode) or local node (when running in federated mode). As a workaround, all partitions that need to match must be explicitly mentioned in the PartitionQosPolicy. |
OSPL-2542 | 64 bit stack space issues with the JVM Newer versions of JDK (at least 1.6u43 and 1.6u45) run out of stack space on 64 bit platforms. Using a larger default StackSize would impact all non-Java applications too, and is therefore undesirable. Try increasing StackSize to 128000 bytes if you're experiencing problems with using listeners from Java on 64 bit platforms. |
OSPL-2696 | Merge policy behaviour Merging of different data-sets after a re-connect only works when the disconnect takes less than the service_cleanup_delay value of the Topic(s). Otherwise it is not possible for the middleware to determine whether instances that are available on one side and not on the other have been disposed or created during the disconnect. If a re-connect takes place after a period larger than the configured service_cleanup_delay, data-sets on both sides may be different after the merge-policy has been applied. One should carefully consider the merge-policy configuration for all federations in the system as a whole, as not all combinations make sense. Consider the example of a two-node system. The following configurations semantically make no sense: * Configuring REPLACE as policy on both sides. * Combining REPLACE as policy on one side and MERGE on the other side. * Combining REPLACE as policy on one side and DELETE on the other side. * Combining DELETE as policy on one side and MERGE on the other side. The wait_for_historical_data() call does not block while a merge is performed due to the configured merge-policy. This means it is currently not possible to block an application until the merge has completed. |
OSPL-4891 | RMI Java/C++ incompatibility RMI Java and RMI C++ will not communicate with each other due to a mismatch in internal topic names. |
OSPL-5885 | DDSI message verification Verification of incoming messages is quite strict, but enum values embedded in the data are not checked against the set of valid values. |
OSPL-6080 | Crash during termination by a signal There is a small chance that a process crashes when it is terminated by a signal. In this situation the termination sequence first disables the API, so that the application can no longer access entities, and then frees all resources. The problem is that the termination sequence should wait until all ongoing operations that started accessing entities before the API was disabled have finished before freeing resources; otherwise they may access freed memory and cause a crash. This problem can only occur when the termination sequence starts during entity create and enable operations. |
OSPL-6152 | Tuner doesn't accept the name of the domain anymore (for connecting) Unlike 6.4, the current 6.5 version no longer allows the name of the domain to be specified as a means to connect to that domain, i.e. using "ospl_shmem_ddsi_statistics_rnr" as the URI rather than the (working) integer value (0) or the xml-config-file URI. |
OSPL-6233 | DCPS API unregister_instance with timestamp before most recent register fails A call to unregister_instance (on an existing instance) with a timestamp prior to the most recent registration for that writer is handled incorrectly: the group instance correctly detects that the unregister_instance operation should be ignored, but the writer nonetheless incorrectly removes the instance from its own administration. |
OSPL-6901 | Behavior of built-in topic part of ISOCPP2 DCPS API not working with GCC <= 4.3 Builds generated with GCC version <= 4.3 will not be able to use the built-in topic part of the ISOCPP2 DCPS API, due to issues with dynamic casting in the combination of the API and the compiler. |
OSPL-6974 | Group coherency during durability alignment When, in a running system, the end of a group coherent update takes place at the same time that a late-joining node is aligning historical data, there is a chance that the group coherent update is partially lost by the late-joining node. The problem is caused by the sending node, which does not yet treat a group coherent update as an atomic change: in this situation the sending node aligns part of the data as a coherent update and part of the data as completed. The receiving node will not be able to detect completeness of the whole and will eventually discard the part of the data that was sent as a coherent update. |
OSPL-7244 | Tuner does not support writing samples containing multi-dimensional collections Currently, the Tuner tool does not support editing multi-dimensional sequences (in IDL, a sequence whose elements are themselves sequences). |
OSPL-7299 | DDSI2(E) and RTNetworking ssl/crypto versions are incompatible/not available on some Linux platforms DDSI2(E) and RTNetworking services may report "error while loading shared libraries: libssl.so.10: cannot open shared object file: No such file or directory" on some platforms, even when ssl is installed properly, due to differences in ssl setups between Linux distributions. A workaround is to create symbolic links in /lib for libssl.so.10 and libcrypto.so.10 that point to the libssl and libcrypto libraries on your system. |
OSPL-7382 | Incorrect instance state after group coherent update and deletion of writer If a group-scope coherent writer publishes a coherent set and is then deleted while a group-scope coherent reader has locked its state by calling the begin_access operation, it is expected that the coherent set will become available as soon as the reader calls end_access, and that the state of the reader's instances will become NO_WRITERS (assuming that the writer was the only writer). In the current implementation it is possible that the update to the NO_WRITERS state is not detected and that the reader's instances remain alive. |
OSPL-7387 | Coherent updates that hit the DataReader's max_instances or max_samples_per_instance resource limits are always aborted and can leak memory. When a coherent update hits a reader's max_instances or max_samples_per_instance limit, it is aborted. Aborting a transaction when running into these resource limits is only required when the reader cannot resolve the resource shortage itself, i.e. when releasing resources depends solely on the arrival of data. However, analysing the actual situation and determining what to do is too costly in the current implementation, so for now the update is always aborted. Aborting can also leak some resources within the transaction administration, because some dependencies may still be unknown at the time of abortion; this is considered acceptable for as long as hitting resource limits occurs only occasionally. It is advised to avoid hitting these resource limits, either by not setting them or by ensuring that normal operation never reaches them. |
OSPL-7707 | Coherent updates not working properly in combination with QoS changes. When modifying the QoS-ses of existing publishers, writers, subscribers or readers that are involved in an ongoing coherent update, readers may not receive this coherent update in all cases. Users are advised to refrain from changing the QoS-ses of existing entities. If different QoS-ses are required, the involved entity should be deleted and re-created instead. |
OSPL-8008 | C++ global const struct copy failure on g++ 4.1.1 When using the DDS C++ pre-defined const structures (such as DDS::DURATION_INFINITE, DDS::TIMESTAMP_CURRENT, etc.) with g++ 4.1.1, assignments can fail. The failure only occurs when assigning one to a global const variable, which is then assigned to a variable within a function. DDS::Duration_t globalStruct = DDS::DURATION_INFINITE; const DDS::Duration_t globalStructC = DDS::DURATION_INFINITE; void main(void) { const DDS::Duration_t localStructC = DDS::DURATION_INFINITE; DDS::Duration_t copy_localStructC = localStructC; DDS::Duration_t copy_globalStructC = globalStructC; DDS::Duration_t copy_globalStruct = globalStruct; } The content of copy_globalStructC does not equal DDS::DURATION_INFINITE but is zero. The other structures don't have this problem. The easiest workaround is to initialise globalStructC differently: const DDS::Duration_t globalStructC = {DURATION_INFINITE_SEC, DURATION_INFINITE_NSEC}; |
OSPL-8507 | Transactions with explicit registration messages never become complete. When explicitly registering one or more instances during a coherent update by calling the register_instance() or register_instance_w_timestamp() operations on one or more datawriters, the coherent update will never become complete for datareaders that do not participate in the same federation. The workaround for this is to never explicitly register instances in a coherent update from within application code, but instead rely on implicit registration of instances by the middleware. |
OSPL-8665 | DDS wait_for_historical_data_w_condition operations cannot deal with partial coherent sets The wait_for_historical_data_w_condition() operation cannot deal with alignment of partial coherent sets, which basically means that the operation cannot be used in environments where coherent updates are used. Solving this issue requires an alternative implementation of the condition, because the current implementation creates a query by splitting the condition into a list of instance (key) queries and sample (non-key) queries, but transactions have no knowledge of instances and samples; for transactions the query can only rely on the messages, so a different kind of implementation is required. A workaround is to stick to wait_for_historical_data() and have the application filter out the desired data. |
OSPL-8673 | Possible resource-claim and memory leak when receiving a transaction twice When a federation starts and receives part of a transaction via the normal path, and durability subsequently aligns the complete transaction, the part of the transaction that is received twice may leak memory (for as long as the reader/group lives) and also 'leaks' the resource claims made by those samples, potentially causing delivery of new samples to be denied because of ResourceLimits that have been set (only if they are used). If a second transaction is completely received after the first transaction, this is not a problem. |
OSPL-8768 | RMI Java multithread service priority not working on PERC When using multi-thread server policy for RMI Java on PERC, then the requests are not handled in the expected order. For instance, when you have different priorities, then the higher priority requests should be handled before the lower priority ones. This doesn't happen. The problem is that, for request handling order, RMI Java uses the native java.util.concurrent.PriorityBlockingQueue. This does not work properly on PERC. |
OSPL-9113 | Non-coherent reader may not be aligned with data from an unfinished transaction Under certain circumstances a non-coherent reader may not receive historical data from an unfinished transaction. When a non-coherent reader is created and there is an unfinished transaction present for which no writer exists anymore (so the transaction will never become complete), this non-coherent reader may not receive the historical data from that transaction. |
OSPL-9595 | Potential deadlock when deleting publisher/subscriber with open begin_access call When a publisher or subscriber is deleted from the common factory participant between begin_access and end_access, a deadlock can occur when the participant receives an asynchronous internal event. It is advised not to delete entities from the participant while access is started. |
OSPL-9358 | Mastership handover not supported when configuring different master priorities. With the master_priority setting it is possible for a late-joining fellow with a higher priority to become master. To recognise this situation, the original node should give up its mastership as soon as it hears about the presence of a node with a higher priority. The conditions to recognise such a situation are not yet implemented. |
OSPL-9612 | Incomplete solution for OSPL-9433 The solution implemented for OSPL-9433 is not complete. There is no support for using RTNetworking with more than one channel with that solution. Furthermore, there is a small window when a node (re)connects at the time durability finishes merging; in that case it is possible that the purge-suppression is not effectuated properly. |
OSPL-9882 | Linux: MATLAB/Simulink hangs when connecting to shared memory domain On Linux, a MATLAB script or Simulink model connecting to a Vortex OpenSplice domain via shared memory will hang. Resolution: MATLAB, like Java applications, requires that the environment variable LD_PRELOAD be set to reference the active Java installation's libjsig.so library. The MATLAB user interface uses Java, and thus requires the same signal-handling strategy as Java applications connecting to Vortex OpenSplice. The precise syntax for setting the LD_PRELOAD environment variable depends on the shell being used. For Oracle JVMs, LD_PRELOAD should contain this value: $JAVA_HOME/jre/lib/amd64/libjsig.so |
OSPL-9919 | No QoS XML file validation in MATLAB integration The user can set the QoS block parameter by selecting a QoS XML file for any DDS block type, but there is no validation to ensure that the file contains a correct entry for that block type, so users can specify an invalid QoS file for a block type. For example, on a DataReader block it is possible to select a QoS profile with no policies for datareader_qos. No warning is given when an invalid QoS block parameter is set, but it can fail at run time. Manually verify that the QoS XML file contains the correct QoS policies for the block type. |
OSPL-10006 | MATLAB aborts on exit after connecting to DDS. On exit, MATLAB will report a segmentation violation if all of the following conditions are satisfied: MATLAB is run from a Linux system, the OSPL_URI environment variable refers to a Single Process configuration of OpenSplice, and the MATLAB instance has connected to OpenSplice via the Vortex DDS Block Set for Simulink. The segmentation fault is reported because OpenSplice installs an exit handler that gets unloaded from memory before it is called. On Linux systems, the exit handler does nothing. Other MATLAB threads appear to continue to shut down normally. Impact: there is no impact on system integrity or MATLAB, but each MATLAB exit can produce a file in the user's home directory starting with 'matlab_crash_dump'. These files can safely be deleted to reclaim disk space. |
OSPL-10018 | MATLAB: Shared Memory Database Address on Windows needs to be changed from default On a Windows 64-bit system with an OpenSplice deployment configured for shared memory, MATLAB cannot connect to the OpenSplice domain if the Shared Memory Database Address is set to its default value of 0x40000000. The error log (ospl-error.log) will show entries such as: Report : Can not Map View Of file: Attempt to access invalid address. Internals : OS Abstraction/code/os_sharedmem.c/1764/0/1487951812.565129500 As a workaround, use the configuration editor to change the default database address: use the 'Domain' tab and select the 'Database' element in the tree; if necessary, right-click the Database element to add an 'Address' element, then change the address. In general, a larger number is less likely to be problematic. On a test machine, appending two zeros to the default address allowed for successful connections. |
OSPL-10075 | Tuner pre 6.6.0 is not forward compatible with cmsoap of 6.6.0 and newer When trying to connect a Tuner with version <6.6.0 to a cmsoap service with version >=6.6.0, the connection will fail. The cmsoap service will trace errors like Creation of qos failed and Unexpected opening tag. |
OSPL-10242 | DDSI2 support for Cloud/Fog fault-tolerance Cloud/Fog nodes only forward discovery information for (proxy) endpoints, but DDSI2 internally requires the existence of (proxy) participants as owners of these endpoints. Because of this, DDSI2 infers the existence of such participants, creating them just-in-time when the first endpoint is discovered and deleting them automatically when the last endpoint of that participant is deleted. When a Cloud/Fog node disappears, these proxy participants become detached and start a lease of Discovery/DSGracePeriod. If another Cloud/Fog node takes over before the lease ends, the lease is reset and the proxy participant is attached to the new Cloud/Fog node; conversely, if the lease ends before this happens, the proxy participant is deleted along with its endpoints. In the particular case where the Cloud/Fog node taking over has no knowledge of some of these endpoints, e.g. because an endpoint was deleted by its application while the entire cluster of Cloud/Fog nodes was being restarted, the proxy participant will be attached to the new Cloud/Fog node, but there will never be a notification that the proxy endpoint corresponding to the deleted endpoint should also be deleted. It is therefore strongly advised to either: (1) have a truly fault-tolerant setup of Cloud/Fog nodes; or (2) set DSGracePeriod to 0 and never restart a Cloud/Fog cluster within the lease duration of the services. |
OSPL-10396 | IDL containing a nested typedef'd sequence is not handled for the idlpp language targets c and c99. Having a sequence parameterized by a typedef'd sequence is not supported. In IDL this would look like: typedef sequence<long> walk; struct Adventurer { long id; walk favourite_walk; sequence<walk> recent_walks; }; idlpp outputs an error such as "idl_seqLoopCopy: Unexpected type". A workaround is to fully expand nested sequence definitions and typedef them, like: typedef sequence<long> walk; typedef sequence<walk> walk_seq; struct Adventurer { long id; walk favourite_walk; walk_seq recent_walks; }; |
OSPL-10408 | Tuner support for group coherence - writer cannot be created with QoS history KEEP_LAST If the user attempts to create a writer for a publisher with group coherence settings, writer creation will fail if the writer's history QoS is KEEP_LAST. The error message does not give a clear indication as to why the writer creation is failing. The user must manually set the WriterQoS history policy to KEEP_ALL. |
OSPL-10418 | Simulink Coder - MS Visual Studio Solution file builds fail with 'cannot open file dcpsc99.obj' Simulink Coder builds of models that use the "grt.tlc - Create Visual C/C++ Solution File for Simulink Coder" target will fail on build with the following error: error LNK1104: cannot open file 'dcpsc99.obj' A workaround is to use a "grt.tlc - Generic Real-Time Target" instead. |
OSPL-10434 | Simulink Coder does not support spaces in the OSPL_HOME value On the Windows platform, C compilation or linking errors will result when Simulink Coder builds a model using the Vortex DDS Block Set and the OSPL_HOME environment variable's value contains a space. MATLAB and Simulink Coder do not support spaces in paths when performing C compilation and linking on Windows. The workaround is to use Windows 'short filenames'. For example, instead of having OSPL_HOME contain Program Files, the path could be set to contain the Windows short-path equivalent, which would be something like PROGRA~1. The release.bat script already does this, but users should take care to do the same if setting OSPL_HOME manually. If the Vortex DDS Block Set detects a space in OSPL_HOME during a Simulink Coder build, a warning will be issued. |
OSPL-10485 | Connecting to DDS from within MATLAB or Simulink can result in errors logged to ospl-error.log and warnings to ospl-info.log, and in the durability service not starting. This is a known issue with MATLAB. MATLAB ships with a private version of libstdc++.so.6 which may be older than the one provided by the host Linux operating system. The following MathWorks support document describes how to resolve the problem: https://www.mathworks.com/matlabcentral/answers/329796-issue-with-libstdc-so-6 |
OSPL-10615 | Running Launcher in Debian 9 environment fails with long installation directory In the Debian 9 environment, the Vortex OpenSplice Launcher tool will fail to open if the Vortex OpenSplice installation directory path is too long. (Exception snippet: /usr/lib/X11/locale/libpackager.so not found) Workaround 1: Install Vortex OpenSplice as superuser. Workaround 2: Shorten the Vortex OpenSplice installation directory path. |
OSPL-10678 | The generated copy routines for C# unions do not support cross-compilation The code generated by idlpp to support C# unions is host-platform dependent: it is either 32-bit or 64-bit, depending on which version of OpenSplice is installed on the host platform. Thus cross-compilation of the generated C# union code is not possible when host and target do not match. |
OSPL-12380 | IDL include statement using a Linux-style relative path is not stripped correctly on Windows machines When using an IDL file with an include statement containing a relative path, such as: #include "dds/Component1/Foo.idl" then, depending on the compiler options applied, idlpp should either strip the relative path from the generated output or hold on to it. However, when stripping is chosen, the path is not successfully stripped on Windows, because the stripping function only tokenizes on the Windows separator ('\') and ignores the Linux separator ('/'). As a result, Windows does not recognise the prefix in the include statement as a path, and therefore does not strip it correctly. |
OSPL-13037 | DDS Security can become insecure when using the Durability Service DDS Security has no knowledge of the Durability Service. This means that if the Durability Service topics are not secure, data of transient secure topics can be sent unsecured over the wire through the Durability topics. More information and workarounds can be found in the DDS Security documentation. |
TSTTOOL-266 | Tester cannot edit recursive message fields in protocol buffer samples. When editing a sample belonging to a topic type defined by protocol buffer, if the defining message type contains a field that is the same message type as its parent, the fields in question don't appear in the data model. This renders them non-editable in the Sample Edit window, as well as when specifying their field names in scripting commands. |
TSTTOOL-181 | Scripting grammar does not allow multidimensional collections as FieldNames. When accessing user data fields in a sample read in via a script, the scripting language does not allow more than one collection index in the field name, as defined by this rule (taken from the scripting BNF found in the Tester user guide, appendix A): FieldName ::= <identifier> ( "[" <index> "]" )? ( "." FieldName )? The grammar specifies that the collection index can be present 0 or 1 times. So if one tries to create a parameter to a send or check instruction for a field called "array2D[0][0]", script compilation fails. |
TSTTOOL-186 | Tester erroneously adds an extra collection index to UserData when populating the edit sample table with existing sample data containing sequences that contain arrays. When a parent sequence contains an array, or contains a struct that in turn contains an array further down the chain, the internal data model thinks there is valid data at one sequence index more than there actually is. For example, a live sample in the system contains the fields: sequence[0].array[0] = 0 sequence[0].array[1] = 1 sequence[1].array[0] = 10 sequence[1].array[1] = 11 But when that sample is selected for editing, the sample userdata is populated with the following fields defined: sequence[0].array[0] = 0 sequence[0].array[1] = 1 sequence[1].array[0] = 10 sequence[1].array[1] = 11 sequence[2].array[0] = 0 sequence[2].array[1] = 0 The parent sequence has the next sequence member already defined when it should not. |
TSTTOOL-187 | Confusing/erroneous table readouts of multidimensional collection user data in the sample editor. When using the sample edit window to edit multidimensional collections (e.g. an integer matrix), the sample edit model does not properly account for the ordering of the collection indices when assigning field values to field names. For example, a 2x3 matrix defined like this in application code: int[][] array2D = new int [2][3]; array2D[0][0] = 0; array2D[0][1] = 1; array2D[0][2] = 2; array2D[1][0] = 3; array2D[1][1] = 4; array2D[1][2] = 5; would look like this in the Tester sample edit window table: array2D[0][0]: 0 array2D[0][1]: 3 array2D[0][2]: array2D[1][0]: 1 array2D[1][1]: 4 array2D[1][2]: |
TSTTOOL-341 | Mismatch in handling of bounded character sequences between script send and script check. In a scenario script, given a topic that has a bounded sequence of characters "b_seq" in its type, the following script code would fail the check: send test_SimpleCollectionTopic ( b_seq => abc ); check test_SimpleCollectionTopic ( b_seq => abc ); Passing non-indexed parameters (i.e. treating the character sequence as a regular string in the script) for bounded character sequences is accepted for send, but not for check. |
TSTTOOL-343 | Tester's statistics browser can't display statistics information. Navigating to the statistics tab and attempting to view statistics information for DDS entities does not currently work. The Tester log file reports that the entities are not available. |
TSTTOOL-355 | Tester scenario scripts continue running if Tester is disconnected from OpenSplice. If Tester is disconnected from a domain while a scenario script is running, the scenario is not interrupted and continues to its natural completion. If the script contains commands to read/write samples, the script will report a failure and will spam NullPointerException reports to the OSPLTEST log for each attempt to communicate with OSPL until the script runs its course. Workaround: After domain disconnection, once the script execution ends (either naturally or by explicit user stop), the script can be run again successfully after domain re-connection. Be aware, though, that previously written transient or persistent samples will be read in again on reconnect and placed into the sample table again. If needed, the sample table can be cleared before executing the scenario again. |
Operating System/Platform Related Issues
- aarch64 / 64-bit ARM – Does not support LevelDB key-value store for durability due to architecture not supported by LevelDB-1.9. SQLite plugin is supported and LevelDB support will be restored in a future release after upgrading to a more recent LevelDB version.
- LynxOS 5 – Does not support multiple networking services using the same interface; this causes an error within the logs.
- VxWorks 6.8 – When creating the target kernel, an extra module is required:
- POSIX scheduling policies SCHED_FIFO/SCHED_RR/SCHED_OTHER support in RTPs (INCLUDE_PX_SCHED_DEF_POLICIES)
- VxWorks 6.7 – Usage of SIOCGIFCONF in the ioctl call from an RTP kernel causes kernel task tNet0 to crash.
- Fix : Upgrade 6.7 with Service Pack 1 (6.7.1), which includes the fix for WIND00162016
- VxWorks 6.5 – When launching the Spliced using WindRiver Workbench 2.6 as described in the OpenSplice Getting Started guide, there is an intermittent problem where the Spliced or the other OpenSplice services may not spawn. This problem does not occur when Spliced is deployed using the console command prompt.
- Workaround : rtp exec -a -p 100 -u 65536 -e "PATH=<path to OpenSplice services>" <path>/spliced.vxe file://<path>/ospl.xml &
- VxWorks 6.5 – There is an issue with the standard installation of VxWorks 6.5 in that the netmask passed from the VxWorks BootLoader may not take effect. This can cause problems with OpenSplice networking services apparently not communicating as expected.
- Fix: This can be rectified with a patch from WindRiver: VxWorks 6.5 Point Patch for Defect WIND00102686 revC.
- Workaround: It is possible to hardcode the IP address and netmask into the kernel to work around this. Enable INCLUDE_IFCONFIG and set IFCONFIG_ENTRY_1 to "gei0 192.168.1.1 netmask 255.255.0.0" (including the quotes), for example.
- VxWorks 5.5 – When multicast is enabled, a message of the format below will be written to the error log file. This is under investigation and is caused by IP_MULTICAST_TTL being set on a socket. The system default for IP_MULTICAST_TTL is always used and the value set in the OpenSplice configuration file is ignored.
- Report : ERROR
Date : THU JAN 01 00:00:00 1970
Description : setsockopt returned errno 22 (errno = 0x16)
Node : vxTarget
Process : networking <534898608 521237760>
Thread : networking (534917136 521237760)
Internals : V5.4.2p1/networking: setting multicast timetolive option – Please see release notes, known issue/nw_socketMulticast.c/201/0/249999990
- Report : ERROR