Known Issues in OpenSplice V6.8

New versions of OpenSplice are released on a regular basis. This page lists all the known issues for OpenSplice V6.8.

Fixed bugs and changes for OpenSplice V6.8 can be found on a separate page.



Report ID / Description
dds184
Query parser doesn't support escape characters

The internal OpenSplice DDS query parser does not support escape characters. This means
that certain tokens, such as the SQL keywords 'select', 'where', etc., cannot be used in
query expressions; an example follows the impact list below.

Impact at API level:
  • Topics whose name is an SQL keyword cannot be created

  • QueryCondition expressions cannot refer to data fields whose name is an SQL keyword

  • ContentFilteredTopic expressions cannot refer to data fields whose name is an SQL keyword


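As an illustration, a minimal hedged sketch in classic DCPS C++ (the filtered-topic name, the field name 'select', and the participant/topic variables are assumed here, not taken from the product documentation):

// A ContentFilteredTopic whose expression uses the SQL keyword 'select' as a
// field name cannot be parsed, so creation presumably fails and a nil
// reference is returned.
DDS::StringSeq params;
DDS::ContentFilteredTopic_var cft =
    participant->create_contentfilteredtopic(
        "FilteredExample",     // name of the filtered topic
        topic,                 // related topic, assumed to exist
        "select > 10",         // 'select' is an SQL keyword: the parser rejects it
        params);
if (cft.in() == NULL) {
    // creation failed; renaming the field in the IDL definition avoids the issue
}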
dds206
typeSupport with invalid type name causes crash during register_type

When a type support object is created with a type name that is not known in the
meta database, the subsequent register_type call crashes.

dds492
idlpp cannot handle same struct in a struct or forward declarations to structs

The following (faulty) IDL code generates a 'floating point exception'; instead, idlpp
should reject such constructs.

struct TestStruct;

struct TestStruct {
    long x;
    TestStruct someEnum;
    string val;
};

The following IDL also fails (the forward declaration of TestStruct is not correctly
processed):

struct TestStruct;

struct TestStruct1 {
    TestStruct y;
};

struct TestStruct {
    long x;
};

It fails with the error: ***DDS parse error TestStruct undefined at line: 4. The
following IDL construct is also not allowed; however, the IDL preprocessor does not give
a clear error:

struct TestStruct;

struct TestStruct1 {
    TestStruct y;
};

struct TestStruct {
    TestStruct1 x;
};


dds494
SQL RelOp like not supported

Using the SQL relational operator 'like' is not supported.
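For example, a minimal hedged sketch (the reader variable and the field name userID are assumed): creating a QueryCondition with a 'like' clause is rejected.

// The query parser does not accept the 'like' operator, so
// create_querycondition presumably returns a nil condition.
DDS::StringSeq params;
DDS::QueryCondition_var qc =
    reader->create_querycondition(
        DDS::ANY_SAMPLE_STATE, DDS::ANY_VIEW_STATE, DDS::ANY_INSTANCE_STATE,
        "userID LIKE '%admin%'",   // 'like' is not supported
        params);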

dds1117
Implicit unregister messages can corrupt copy-out functions

All language bindings offer methods that only use the key fields of a sample, for
example the register, unregister and dispose methods. Currently, however, the complete
sample (including the non-key fields) needs to adhere to the IDL-to-language mapping
rules, because all fields are validated. This means that when a sample contains garbage
data in its non-key fields, the sample may be rejected and the application might even
crash in the case of dangling pointers (segmentation fault).

The workaround is to ensure that no fields are initialised to NULL, no fields contain
dangling pointers, all unions are explicitly initialised to a valid value, and all
enumeration values stay within their bounds; a sketch follows below.

dds1696
Limitations for output directories for ospl_projgen on Integrity

ospl_projgen will generate projects that build incorrectly if it is supplied an output
directory (-o option) in which the final part of the path matches the name of one of the
address spaces being generated.

e.g. ospl_projgen ... -t mmstat -o path/mmstat

These projects appear to build correctly; however, the final image will be incorrect.

Other names to avoid currently are: inetserver, ivfs_server, ResourceStore, spliced,
networking, durability, pong, ping1, ping2, ping3, ping4, ping5, ping6, shmdump, Chatter,
Chatter_Quit, MessageBoard, UserLoad.

dds1711
Warnings when compiling with the Studio12 compiler

There are still numerous warnings when using the Studio12 compiler. These can be ignored and will
be tidied in future releases.

dds2142
Default buffer size used by networking may cause an error to be logged on Solaris9.

On Solaris 9 an error may appear in ospl-error.log when the networking service is started:
"setsockopt returned errno 132 (No buffer space available)". This is caused by the
system's udp_max_buf setting being too small. To find out the current value, run
/usr/sbin/ndd -get /dev/udp udp_max_buf; to increase it, run
/usr/sbin/ndd -set /dev/udp udp_max_buf with the desired value.

dds3276
Tester - Reconnection to shared memory OpenSplice domain on Windows fails

On Windows, when trying to reconnect to a running OpenSplice DDS domain that
utilises shared memory, the reconnection will fail.

Workaround: Restart OpenSplice Tester.


dds2260 / OSPL-259 / OSPL-6304
idlpp cannot handle recursive sequences

The idlpp tool is not able to cope with recursive sequences:
module example {
    typedef sequence NameList;

    struct DataType {
        sequence nameLists;
    };
};


OSPL-973
Partitions with wild-cards don't work properly in all cases

The PartitionQosPolicy for Publisher and Subscriber entities can contain two types of
values: an absolute value that specifies a partition, or a partition expression, i.e. a
name containing the wildcard symbols '?' and/or '*'. A partition expression is used
locally by the Entity to discover matching absolute partitions and build up connections.
Entities react to the creation of new partitions, and those that match the partition
expression are connected. Unfortunately, information about newly created remote
partitions is not distributed at this time. This means no matching can be performed to
determine whether the remote partition must be instantiated locally. As a result,
Subscribers and Publishers that use wildcards in a partition expression won't connect to
partitions that are not explicitly created in the local application (when running in
single-process mode) or local node (when running in federated mode). As a workaround,
all partitions that need to match must be mentioned explicitly in the PartitionQosPolicy.

OSPL-2542
64 bit stack space issues with the JVM

Newer versions of the JDK (at least 1.6u43 and 1.6u45) run out of stack space on 64-bit
platforms. Using a larger default StackSize would impact all non-Java applications too,
and is therefore undesirable. Try increasing StackSize to 128000 bytes if you are
experiencing problems with using listeners from Java on 64-bit platforms.

OSPL-2696
Merge policy behaviour
Merging of different data-sets after a re-connect only works when the disconnect takes
less than the service_cleanup_delay value of the Topic(s). Otherwise it is not possible
for the middleware to determine whether instances that are available on one side and not
on the other have been disposed or created during the disconnect. If a re-connect takes
place after a period larger than the configured service_cleanup_delay, the data-sets on
both sides may differ after the merge-policy has been applied.

One should carefully consider the merge-policy configuration for all federations in the
system as a whole, as not all combinations make sense. Consider the example of a two-node
system. The following configurations make no sense semantically:

  • Configuring REPLACE as policy on both sides.


  • Combining REPLACE as policy on one side and MERGE on the other side.


  • Combining REPLACE as policy on one side and DELETE on the other side.


  • Combining DELETE as policy on one side and MERGE on the other side.


The wait_for_historical_data() call does not block while performing a merge due to
the configured merge-policy. This means it is currently not possible to block an application
until the merge has completed.

OSPL-4891
RMI Java/C++ incompatibility
RMI Java and RMI C++ will not communicate with each other due to a mismatch in
internal topic names.

OSPL-5885
DDSI message verification
Verification of incoming messages is quite strict, but enum values
embedded in the data are not checked against the set of valid values.

OSPL-6080
Crash during termination by a signal
There is a small chance that a process crashes when it is terminated by a signal. In
this situation the termination sequence first disables the API, so that the application
can no longer access entities, and then frees all resources. The problem is that the
termination sequence should wait until all ongoing operations that started accessing
entities before the API was disabled have finished before freeing resources; otherwise
those operations may access freed memory and cause a crash. This problem can only occur
when the termination sequence starts during entity create and enable operations.

OSPL-6152
Tuner no longer accepts the name of the domain (for connecting)
Unlike in 6.4, since 6.5 the name of the domain can no longer be specified as a means
to connect to that domain, i.e. using "ospl_shmem_ddsi_statistics_rnr" as the URI rather
than the (working) integer domain id (0) or the XML configuration file URI.

OSPL-6233
DCPS API unregister_instance with a timestamp before the most recent register fails
A call to unregister_instance (on an existing instance) with a timestamp prior to the
most recent registration for that writer is handled incorrectly: the group instance
correctly detects that the unregister_instance operation should be ignored, but the
writer nonetheless incorrectly removes the instance from its own administration.
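A minimal hedged sketch of the pattern that triggers this (the type Msg, its key field id, and the writer variable are assumed names):

// Register at time t1, then unregister with an older timestamp t0. The group
// instance correctly ignores the unregister, but the writer still drops the
// instance from its own administration.
DDS::Time_t t1 = { 100, 0 };
DDS::Time_t t0 = { 50, 0 };     // earlier than the registration timestamp
Msg sample;
sample.id = 1;
DDS::InstanceHandle_t h = writer->register_instance_w_timestamp(sample, t1);
writer->unregister_instance_w_timestamp(sample, h, t0);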

OSPL-6901
Behavior of built-in topic part of ISOCPP2 DCPS API not working with GCC <= 4.3
Builds that have been generated with GCC version <= 4.3 will not be able to use the
built-in topic part of the ISOCPP2 DCPS API, due to issues with dynamic casting in the
combination of the API and the compiler.

OSPL-6974
Group coherency during durability alignment
When, in a running system, the end of a group coherent update takes place at the same
time that a late-joining node is aligning historical data, there is a chance that the
group coherent update is partially lost by the late-joining node.

The problem is caused by the sending node not yet treating a group coherent update as an
atomic change. In this situation the sending node aligns part of the data as a coherent
update and part of it as completed data; the receiving node is then unable to detect the
completeness of the whole and eventually discards the part of the data that was sent as
a coherent update.

OSPL-7244
Tuner does not support writing samples containing multi-dimensional collections
Currently, the Tuner tool does not support editing multi-dimensional
sequences (in IDL, sequence<sequence>, or
some_type[x][y]). When editing such data fields in the Tuner writer table,
the fields will be uneditable. This also affects editing Google Protocol
Buffer samples that contain a field defined as repeated bytes, as that is
represented as a sequence of a sequence of octets in the data model.

OSPL-7299
DDSI2(E) and RTNetworking ssl/crypto versions are incompatible/not available on some Linux platforms
DDSI2(E) and RTNetworking services may report "error while loading shared libraries:
libssl.so.10: cannot open shared object file: No such file or directory" on some
platforms, even when ssl is installed properly, due to a difference in ssl setups
between Linux distributions.


A workaround for this is creating symbolic links in /lib for
libssl.so.10 and libcrypto.so.10 that point to your libssl and
libcrypto libraries on your system.



OSPL-7382
Incorrect instance state after group coherence update and deletion of writer
If a group-scope coherent writer publishes a coherent set and is then deleted while a
group-scope coherent reader has locked its state by calling the begin_access operation,
it is expected that the coherent set becomes available as soon as the reader calls the
end_access operation, and that the state of the reader's instances becomes NO_WRITERS
(assuming that the writer was the only writer). In the current implementation it is
possible that the update to the NO_WRITERS state is not detected and that the instance
state of the reader's instances remains alive.

OSPL-7387
Coherent updates that hit the DataReader's max_instances or
max_samples_per_instance resource limits are always aborted and can
leak memory.

When a coherent update hits a Reader's max_instances or max_samples_per_instance limit,
it is aborted. Aborting a transaction when running into these resource limits is only
required when the Reader cannot resolve the resource shortage itself, i.e. when releasing
resources depends solely on the arrival of data. However, analysing the actual situation
and determining what to do is too costly in the current implementation, so for now the
transaction is always aborted. Aborting can also leak some resources within the
transaction administration, because some dependencies may still be unknown at the moment
of abortion; this is considered acceptable for the time being, for as long as hitting
resource limits only occurs occasionally.

It is advised to avoid hitting these resource limits, either by not setting them or by
ensuring that normal operation never reaches them (see the sketch below).
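A minimal hedged sketch of leaving the reader's resource limits unlimited (the subscriber variable is assumed; LENGTH_UNLIMITED is the default for these policies):

// Leave max_instances and max_samples_per_instance unlimited so that coherent
// updates are not aborted by these resource limits.
DDS::DataReaderQos rqos;
subscriber->get_default_datareader_qos(rqos);
rqos.resource_limits.max_instances            = DDS::LENGTH_UNLIMITED;
rqos.resource_limits.max_samples_per_instance = DDS::LENGTH_UNLIMITED;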

OSPL-7707
Coherent updates not working properly in combination with QoS changes.
When modifying the QoS of existing publishers, writers, subscribers or readers that are
involved in an ongoing coherent update, readers may not receive this coherent update in
all cases. Users are advised to refrain from changing the QoS of existing entities. If
a different QoS is required, the involved entity should be deleted and re-created
instead, as sketched below.
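A minimal hedged sketch of the recommended approach (publisher, topic, writer, new_qos and the generated MsgDataWriter type are assumed names):

// Instead of changing the QoS of a writer that participates in a coherent
// update, e.g. writer->set_qos(new_qos), delete the writer and re-create it
// with the desired QoS.
publisher->delete_datawriter(writer);
DDS::DataWriter_var dw =
    publisher->create_datawriter(topic, new_qos, NULL, DDS::STATUS_MASK_NONE);
MsgDataWriter_var writer2 = MsgDataWriter::_narrow(dw.in());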

OSPL-8008
C++ global const struct copy failure on g++ 4.1.1
When using the DDS C++ pre-defined const structures (like
DDS::DURATION_INFINITE, DDS::TIMESTAMP_CURRENT, etc.) with g++ 4.1.1,
assignments can fail. The assignment fails only when the constant is assigned to a
global const variable, which is then assigned to a variable within a function.

DDS::Duration_t globalStruct = DDS::DURATION_INFINITE;
const DDS::Duration_t globalStructC = DDS::DURATION_INFINITE;

int main()
{
    const DDS::Duration_t localStructC = DDS::DURATION_INFINITE;
    DDS::Duration_t copy_localStructC = localStructC;
    DDS::Duration_t copy_globalStructC = globalStructC;
    DDS::Duration_t copy_globalStruct = globalStruct;
    return 0;
}

The content of copy_globalStructC does not equal DDS::DURATION_INFINITE but is zero.
The other structures do not have this problem.

The easiest workaround is to initialise globalStructC differently:

const DDS::Duration_t globalStructC = {DURATION_INFINITE_SEC, DURATION_INFINITE_NSEC};


OSPL-8465
GPB: embedding inner messages with keys multiple times fails.
If you have keys in an embedded message and then try to use that
message more than once in your top-level message, only the first
occurrence will become part of the key.


As a workaround, duplicate the embedded message, rename it and
use that one as type for the second attribute in the top-level
message.

OSPL-8507
Transactions with explicit registration messages never become complete.
When explicitly registering one or more instances during a
coherent update by calling the register_instance() or
register_instance_w_timestamp() operations on one or more
datawriters, the coherent update will never become complete
for datareaders that do not participate in the same federation.
The workaround for this is to never explicitly register instances
in a coherent update from within application code, but instead to rely
on implicit registration of instances by the middleware, as sketched below.
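A minimal hedged sketch of the workaround (publisher, writer, and the type Msg are assumed names): rely on implicit registration by simply writing the sample inside the coherent update.

// Avoid writer->register_instance(sample) inside a coherent update; write()
// registers the instance implicitly, so the update can complete on remote
// federations.
publisher->begin_coherent_changes();
Msg sample;
sample.id = 7;
sample.payload = "value";
writer->write(sample, DDS::HANDLE_NIL);
publisher->end_coherent_changes();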

OSPL-8665
DDS wait_for_historical_data_w_condition operations cannot deal
with partial coherent sets

The wait_for_historical_data_w_condition() operation cannot deal
with the alignment of partial coherent sets, which basically means that
the operation cannot be used in environments where coherent
updates are used. Solving this issue requires an alternative
implementation of the condition: the current implementation
creates a query by splitting the condition into a list of instance
(key) queries and sample (non-key) queries, but transactions have no
knowledge of instances and samples; for those, the query can only
rely on the messages, so a different kind of implementation is
required. A workaround is to stick to wait_for_historical_data()
and have the application filter out the desired data, as sketched below.
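A minimal hedged sketch of the workaround (Msg, MsgSeq, the generated MsgDataReader and the field id are assumed names):

// Wait for all historical data and filter in the application instead of
// passing a condition to the middleware.
DDS::Duration_t max_wait = { 30, 0 };
reader->wait_for_historical_data(max_wait);

MsgSeq samples;
DDS::SampleInfoSeq infos;
reader->read(samples, infos, DDS::LENGTH_UNLIMITED,
             DDS::ANY_SAMPLE_STATE, DDS::ANY_VIEW_STATE, DDS::ANY_INSTANCE_STATE);
for (DDS::ULong i = 0; i < samples.length(); i++) {
    if (infos[i].valid_data && samples[i].id == 42) {   // application-level filter
        /* process samples[i] */
    }
}
reader->return_loan(samples, infos);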

OSPL-8673
Possible release claim and memory leak when receiving transaction
twice

When a federation starts and receives part of a transaction via
the normal path, and durability aligns the complete transaction
afterwards, the part of the transaction that is received twice
may leak memory (for as long as the reader/group lives) and also 'leaks'
the resource claims the samples have made, potentially causing
delivery of new samples to be denied because of ResourceLimits
that have been set (only if they are used). If a second transaction
is received completely after the first transaction, this is not a
problem.

OSPL-8768
RMI Java multithread service priority not working on PERC
When using the multi-threaded server policy for RMI Java on PERC, requests are not
handled in the expected order. For instance, with different priorities, the
higher-priority requests should be handled before the lower-priority ones; this does
not happen.

The problem is that, for request handling order, RMI Java uses the standard
java.util.concurrent.PriorityBlockingQueue class, which does not work properly on
PERC.

OSPL-9113
Non-coherent reader may not be aligned with data from an unfinished transaction
Under certain circumstances a non-coherent reader may not receive historical
data from an unfinished transaction. When a non-coherent reader is created and
there is an unfinished transaction present for which no writer exists anymore
(so the transaction will never become complete), this non-coherent reader
may not receive the historical data from that transaction.

OSPL-9595
Potential deadlock when deleting publisher/subscriber with open
begin_access call

When a publisher or subscriber is deleted from the common factory participant after
begin_access and before end_access, a deadlock can occur when the participant receives
an asynchronous internal event.

It is advised not to delete entities from the participant while access is started, as
sketched below.
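A minimal hedged sketch (subscriber and reader are assumed names): close the access window before deleting entities from the same participant.

// End the coherent/ordered access before deleting entities that hang off the
// same participant; deleting inside the access window risks the deadlock above.
subscriber->begin_access();
/* ... read the coherent data ... */
subscriber->end_access();
subscriber->delete_datareader(reader);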

OSPL-9358
Mastership handover not supported when configuring different master priorities.

With the master_priority setting it is possible for a late-joining
fellow with a higher priority to become master. To recognise this
situation the original node should give up its mastership as soon as
it hears about the presence of a node with a higher priority. The
conditions to recognise such a situation are not yet implemented.

OSPL-9612
Incomplete solution for OSPL-9433
The solution implemented for OSPL-9433 is not complete. There is no support for using
RTNetworking with more than one channel with that solution. Furthermore, there is a
small window when a node (re)connects at the time durability finishes merging; in that
case it is possible that the purge suppression is not applied properly.



OSPL-9882
Linux: MATLAB/Simulink hangs when connecting to shared memory domain
On Linux, a MATLAB script or Simulink model connecting to a Vortex OpenSplice domain via shared memory will hang.

Resolution: MATLAB, like Java applications, requires that the environment variable LD_PRELOAD be set to reference the active Java installation's libjsig.so library. The MATLAB user interface uses Java, and thus requires the same signal-handling strategy as Java applications connecting to Vortex OpenSplice. The precise syntax for setting the LD_PRELOAD environment variable depends on the shell being used. For Oracle JVMs, LD_PRELOAD should contain this value:


$JAVA_HOME/jre/lib/amd64/libjsig.so

OSPL-9919
No QoS XML file validation in MATLAB integration
Currently users are allowed to specify an invalid QoS file for a block type. The user can set the QoS block
parameter by selecting QoS XML files for all DDS block types. There is no validation to ensure that the
entry is correct for the DDS block type. For example, on a DataReader block, it is possible to select a QoS
profile with no policies for datareader_qos. There is no warning when an invalid QoS block parameter is set,
but it can fail at run time. Manually verify that the QoS XML file contains the correct QoS policies for the block type.

OSPL-10006
MATLAB aborts on Exit after connecting to DDS.

On exit, MATLAB will report a segmentation violation if all of the following conditions are satisfied:
  • MATLAB is run from a Linux system,

  • the OSPL_URI environment variable refers to a Single Process configuration of OpenSplice,

  • the MATLAB instance has connected to OpenSplice via the Vortex DDS Block Set for Simulink.

The segmentation fault is reported because OpenSplice installs an exit handler that gets unloaded from memory before it is called. On Linux systems, the exit handler does nothing. Other MATLAB threads appear to continue to shut down normally.

Impact:
  • There is no impact on system integrity or MATLAB.

  • Each MATLAB exit can produce a file in the user's home directory starting with 'matlab_crash_dump'. These files can safely be deleted to reclaim disk space.


OSPL-10018
MATLAB: Shared Memory Database Address on Windows needs to be changed from default

On a 64-bit Windows system where OpenSplice is configured with Shared Memory, MATLAB cannot connect to the OpenSplice domain if the Shared Memory Database Address is set to its default value of 0x40000000. The error log (ospl-error.log) will show entries such as:



Report : Can not Map View Of file: Attempt to access invalid address.

Internals : OS Abstraction/code/os_sharedmem.c/1764/0/1487951812.565129500



As a workaround, use the configuration editor to change the default database address: open the 'Domain' tab and select the 'Database' element in the tree. If necessary, right-click the Database element to add an 'Address' element, then change the address. In general, a larger number is less likely to be problematic; on a test machine, appending two zeros to the default address allowed successful connections.

OSPL-10075
Tuner pre 6.6.0 is not forward compatible with cmsoap of 6.6.0 and newer

When trying to connect a Tuner older than 6.6.0 to a cmsoap service of version 6.6.0 or
newer, the connection will fail. The cmsoap service will trace errors like "Creation of
qos failed" and "Unexpected opening tag".

OSPL-10242
DDSI2 support for Cloud/Fog fault-tolerance

Cloud/Fog only forward discovery information for (proxy) endpoints,
but DDSI2 internally requires the existence of (proxy) participants
as owners of these endpoints. Because of this, DDSI2 infers the
existence of such participants, creating them just-in-time when the
first endpoint is discovered, and deleting them automatically when
the last endpoint of that participant is deleted.
When a Cloud/Fog node disappears, these proxy participants become
detached and start a lease of Discovery/DSGracePeriod. If another
Cloud/Fog node takes over before the lease ends, the lease is reset
and the proxy participant attached to the new Cloud/Fog node;
conversely, if the lease ends before this happens, the proxy
participant is deleted along with its endpoints.
In the particular case where the Cloud/Fog node taking over has no
knowledge of some of these endpoints, e.g. because an endpoint was
deleted by its application while the entire cluster of Cloud/Fog
nodes was being restarted, the proxy participant will be attached to
the new Cloud/Fog node, but there will never be a notification that
the proxy endpoint corresponding to the deleted endpoint should also
be deleted. Thus, it is strongly advised to either: (1) have a true
fault-tolerant setup of Cloud/Fog nodes; or (2) set DSGracePeriod to
0 and never restart a Cloud/Fog cluster within the lease duration of
the services.

OSPL-10396
IDL containing nested typedef'd sequence is not handled for IDLPP
language targets c and c99.

Having a sequence parameterized by a typedef'd sequence is not supported.
In IDL this would look like

typedef sequence walk;

struct Adventurer {
    long id;
    walk favourite_walk;
    sequence recent_walks;
};


IDLPP outputs an error like "idl_seqLoopCopy: Unexpected type".
A workaround is to fully expand nested sequence definitions and typedef them, for example:

typedef sequence walk;
typedef sequence<sequence, 50> walk_seq;

struct Adventurer {
    long id;
    walk favourite_walk;
    walk_seq recent_walks;
};


OSPL-10408
Tuner support for group coherence - writer cannot be created with
QoS history KEEPLAST

If the user attempts to create a writer for a publisher with group coherence settings,
the writer creation will fail if the history QoS for the writer is KEEP_LAST. The error
message does not give a clear indication as to why the writer creation is failing. The
user must manually set the WriterQoS History policy to KEEP_ALL.

OSPL-10418
Simulink Coder - MS Visual Studio Solution file builds fail with
'cannot open file dcpsc99.obj'

Simulink Coder builds of models that use the "grt.tlc - Create Visual
C/C++ Solution File for Simulink Coder" target will fail on build with
the following error:


error LNK1104: cannot open file 'dcpsc99.obj'


A workaround is to use a "grt.tlc - Generic Real-Time Target" instead.

TSTTOOL-266
Tester cannot edit recursive message fields in protocol buffer samples.
When editing a sample belonging to a topic type defined by protocol
buffer, if the defining message type contains a field that is the same
message type as its parent, the fields in question don't appear in the
data model. This renders them non-editable in the Sample Edit window,
as well as when specifying their field names in scripting commands.

TSTTOOL-181
Scripting grammar does not allow multidimensional collections as FieldNames.
When accessing user data fields in a sample read in via a script, the scripting
language does not allow more than one collection index in the field name, as defined
by this rule (taken from the scripting BNF found in the Tester user guide, appendix A):

FieldName ::= ( "[" "]" )? ( ( "." FieldName ) )?


The grammar specifies that the collection index can be present 0 or 1 time. So if one
tries to create a parameter to a send or check instruction for a field called
"array2D[0][0]", script compilation fails.

TSTTOOL-186
Tester erroneously adds an extra collection index to UserData when populating the
edit sample table with existing sample data that contains sequences of arrays.

In the case where a parent sequence contains an array, or contains a struct that
also contains an array further down the chain, the internal data model assumes
there is valid data at a parent sequence index one greater than actually exists.
For example, a live sample in the system contains the fields:

sequence[0].array[0] = 0
sequence[0].array[1] = 1
sequence[1].array[0] = 10
sequence[1].array[1] = 11

But when selecting that sample for editing, the sample user data is populated with the
following fields:

sequence[0].array[0] = 0
sequence[0].array[1] = 1
sequence[1].array[0] = 10
sequence[1].array[1] = 11
sequence[2].array[0] = 0
sequence[2].array[1] = 0

The parent sequence has the next sequence member already defined when it should not.

TSTTOOL-187
Confusing/erroneous table readouts of multidimensional collection user data in the
sample editor.

When using the sample edit window to edit multidimensional collections (e.g. an
integer matrix), the sample edit model does not properly account for the ordering
of the collection indices when assigning field values to field names. For
example, a 2x3 matrix defined like this in application code:

int[][] array2D = new int [2][3];
array2D[0][0] = 0;
array2D[0][1] = 1;
array2D[0][2] = 2;
array2D[1][0] = 3;
array2D[1][1] = 4;
array2D[1][2] = 5;

would look like this in the Tester sample edit window table:

array2D[0][0]: 0
array2D[0][1]: 3
array2D[0][2]:
array2D[1][0]: 1
array2D[1][1]: 4
array2D[1][2]:


TSTTOOL-341
Mismatch in handling of bounded character sequences between script send and script
check.

In a scenario script, given a topic that has a bounded sequence of characters
in its type, "b_seq", the following script code would fail the check:

send test_SimpleCollectionTopic (
b_seq => abc
);

check test_SimpleCollectionTopic (
b_seq=> abc
);

Passing in non-indexed parameters (i.e. treating the character sequence as a regular
string in the script) for bounded character sequences is accepted for send, but not for check.

TSTTOOL-343
Tester's statistics browser can't display statistics information.
Navigating to the statistics tab and attempting to view statistics information for DDS
entities does not currently work. The Tester log file reports that the entities are not available.

TSTTOOL-355
Tester scenario scripts continue running if Tester is
disconnected from OpenSplice.

If Tester is disconnected from a domain while a scenario script
is running, the scenario is not interrupted and continues to its
natural completion. If the script contains commands to read/write
samples, the script will report a failure and will spam
NullPointerException reports to the OSPLTEST log for each attempt
to communicate with OSPL until the script runs its course.

Workaround: after the domain disconnection, once the script
execution has ended (either naturally or by explicit stoppage by
the user), the script can be run again successfully after
re-connecting to the domain. Be aware though that previously written
transient or persistent samples will be read in again on reconnect
and will appear in the sample table again. If needed, the sample
table can be cleared before executing the scenario again.
