ABSTRACT
This paper presents the results of an investigation into requirements for existing software and hardware solutions for open digital communication architectures that support weapon subsystem integration. The fundamental requirement of such a communication architecture is to achieve the lowest latency possible at a reasonable cost point with respect to the mission objective of the weapon. The latency requirements for the open architecture software and hardware were derived through control system and stability margin analyses. Studies were performed on the throughput and latency of existing communication transport methods. The two architectures tested in this study are Data Distribution Service (DDS) and Modular Open Network Architecture (MONARCH). This paper defines the levels of latency that can be achieved with current technology and how this capability may translate to future weapons. Requirements for future communication solutions are also discussed.
Keywords: Open Architecture, Systems Analysis, Stability, Latency Analysis
INTRODUCTION
The techniques documented herein outline the methods implemented to test the throughput and latency of multiple existing open architecture communication frameworks. To assess the latency imposed on the communication path by the architecture, a study was performed in which different packet sizes were transferred at different throughputs. The measured throughput was then used to determine the overall system throughput and latency of the architecture under test. The effect of latency on the stability of a weapon can be analyzed through control system and stability analyses, and these methods can be applied to a myriad of subsystems of any weapon. Prior to the study of these two communication architectures, an investigation into the tolerable delays within a missile simulation was performed in order to set a baseline requirement for system communication latency. This was accomplished using frequency and time domain methods to perform stability analyses on a 3-DOF missile simulation with communication latency injected artificially between missile subsystems. Figure 1 depicts the overall simulation and closed loop control system scheme implemented in the simulation used for analysis. After applying the control system methods described, Table 1 provides the preliminary requirements derived from these analyses for a produce-and-consume, plug-and-play communication architecture for an air-to-ground weapon. These efforts will evolve into the capability to test potential future communication architectures while performing stability analyses on systems in a hardware-in-the-loop (HWIL) configuration utilizing a communication architecture such as Data Distribution Service (DDS) or Modular Open Network Architecture (MONARCH).
Figure 1: Block diagram of closed loop simulation with communication architecture latencies
The requirements for architectural latency were determined through the use of both time domain and frequency domain analyses. In order to understand these requirements, it is necessary to describe the setup and a few of the defining variables behind the experiment. Initial assessments were performed for weapons intended for ground engagements. The airframes were linearly modeled using techniques from Missile Configuration Design by S. S. Chin [3]. The airframes selected for study had aerodynamic responsiveness ranges indicative of a variety of air-to-ground weapons. The study concentrated on the 3-DOF elevation channel, as this is where the primary endgame maneuvers of the engagement occur with respect to ground targets. A variety of seekers, actuators, and other weapon components were modeled and used within the analyses to represent state-of-the-art performance characteristics. The control loops were configured as shown in Figure 1 for the analyses, and the latency for each node was derived and is listed in Table 1.
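As an illustration of the frequency-domain reasoning behind such latency requirements (a standard result from classical control theory [4]; the numbers below are chosen for illustration and are not values from this study): a pure transport delay τ leaves the loop gain unchanged but erodes phase, so the largest tolerable delay follows directly from the phase margin at the gain-crossover frequency:

\[
\left| e^{-j\omega\tau} \right| = 1, \qquad \angle\, e^{-j\omega\tau} = -\omega\tau
\quad\Longrightarrow\quad
\tau_{\max} = \frac{\mathrm{PM}}{\omega_{gc}}.
\]

For example, a loop with a 30° (0.524 rad) phase margin and a gain-crossover frequency of 50 rad/s could tolerate at most roughly 10.5 ms of injected communication delay before exhausting its stability margin.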
Latency Tests
In order to measure the latency and throughput of the architectures, 120 tests were performed in which different message sizes were sent at different attempted rates in order to load the architecture. The following message sizes and rates, whose 10 × 12 combinations make up the 120 test cases, were used in the study (a sketch of the resulting test matrix follows the list):
Payload size (bytes): 128, 256, 512, 1024, 2048, 4096, 8192, 16384, 32768, and 63000
Message rate (messages/sec): 250, 500, 750, 1000, 1500, 2000, 2500, 3000, 4000, 5000, 7500, and 10000
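A minimal sketch of the resulting test matrix; the `run` loop and its output are illustrative, not the authors' test harness:

```c
/* Hypothetical enumeration of the 10 x 12 = 120 (size, rate) test cases. */
#include <stdio.h>

static const int payload_sizes[] = {128, 256, 512, 1024, 2048, 4096,
                                    8192, 16384, 32768, 63000};
static const int msg_rates[] = {250, 500, 750, 1000, 1500, 2000,
                                2500, 3000, 4000, 5000, 7500, 10000};

int main(void) {
    int test_id = 0;
    for (size_t i = 0; i < sizeof payload_sizes / sizeof *payload_sizes; ++i) {
        for (size_t j = 0; j < sizeof msg_rates / sizeof *msg_rates; ++j) {
            /* Each (size, rate) pair is one of the 120 test cases. */
            printf("test %3d: %5d bytes at %5d msg/s\n",
                   ++test_id, payload_sizes[i], msg_rates[j]);
        }
    }
    return 0;
}
```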
The latency was then computed as the difference between the send time attached to each message and the receive time captured at the subscriber; a minimal timestamping sketch follows this paragraph. The latency measurements therefore include the time it takes to create and send the message from the test machine, the transmission time to the receiving machine, and the time to process the message on the receiving end.
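A minimal sketch of this end-to-end timestamping approach, assuming the publisher and subscriber clocks are synchronized (e.g., via NTP or PTP); the struct and function names are hypothetical, not part of either framework's API:

```c
/* Hypothetical end-to-end latency measurement: the publisher stamps the
 * payload with its wall-clock send time; the subscriber subtracts that
 * stamp from its own receive time. Assumes synchronized clocks. */
#include <stdint.h>
#include <time.h>

typedef struct {
    int64_t send_ns;     /* CLOCK_REALTIME at publish, in nanoseconds */
    uint8_t data[];      /* remainder of the payload */
} msg_header_t;

static int64_t now_ns(void) {
    struct timespec ts;
    clock_gettime(CLOCK_REALTIME, &ts);
    return (int64_t)ts.tv_sec * 1000000000LL + ts.tv_nsec;
}

/* Publisher side: stamp just before handing the message to the framework. */
static void stamp(msg_header_t *m) { m->send_ns = now_ns(); }

/* Subscriber side: the result includes publish, transmission, and receive
 * processing time, exactly as described above. */
static int64_t latency_ns(const msg_header_t *m) { return now_ns() - m->send_ns; }
```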
DDS's measured throughput was equal to the attempted throughput nearly 100% of the time, whereas MONARCH's measured throughput equaled the attempted throughput only 75% of the time on average. This shortfall is mainly due to undelivered messages when sending at 300 Mbps and above. Although DDS requires the user to configure the throughput rate using the nanosleep function, the testing team was able to reach a near 1:1 ratio of attempted to measured throughput; a sketch of such nanosleep-based pacing follows.
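A minimal pacing sketch, with a `publish_message()` stand-in for the framework's send call (the function names are hypothetical, not either framework's API):

```c
/* Hypothetical nanosleep-based pacing loop: to attempt a given message
 * rate, sleep for the remainder of each send period after publishing. */
#include <time.h>

extern void publish_message(const void *buf, int len);  /* illustrative send hook */

static void paced_send(const void *buf, int len, int rate_hz, int count) {
    const long period_ns = 1000000000L / rate_hz;        /* send period */
    struct timespec gap = {period_ns / 1000000000L, period_ns % 1000000000L};
    for (int i = 0; i < count; ++i) {
        publish_message(buf, len);
        /* Sleeping the full period ignores the publish time itself, so the
         * achieved rate runs slightly below the attempted rate; a real
         * harness would subtract the measured publish time. */
        nanosleep(&gap, NULL);
    }
}
```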
For DDS, message loss was very uncommon: the testing team noted message loss in only approximately 3.3% of tests. When messages were lost, they amounted to at most 0.07% of the messages sent during the test phase. This will be compared to the results for MONARCH in a later section.
DDS delivered messages with latencies less than 10 milliseconds in every test case, as shown in Figure 7. Just as with MONARCH, there is a clear correlation between message size/throughput and latency. DDS's latency is also quite close to the transmission latency, though not identical to it. With respect to weaponry and the use of open architectures, this bodes well for DDS as a potential solution. Over the 120 test cases, DDS demonstrated lower latency 97% of the time. DDS was also superior in terms of not losing samples: over the entire test, DDS delivered 99.992% of its samples, with all of its messages being delivered in under 10 milliseconds.
To determine how consistent the frameworks' latencies were, the log outputs of the 1000 messages/sec, 1024-byte test were analyzed. This test was selected because each framework achieved a very accurate send rate for the case and no samples were lost. DDS demonstrates a fairly consistent latency, with very few samples falling outside the 80-120 µs range. For that test, the average latency for DDS was 105.2 microseconds with a standard deviation of 6.47 microseconds.
The throughput of a framework is limited by two factors: available bandwidth and message rate. Because the bandwidth was known to be limited to 1 Gbps, the other limiting factor, message rate, was tested. This test measured exactly how long each framework took to create and send a message of varying sizes, which we refer to as the publish time. The publish time was used to compute the maximum theoretical send rate, and from it the maximum theoretical throughput at which the publisher could send data of the specified byte size; this computation is sketched below. The publish time for DDS was observed to be roughly 50 microseconds until the message size reached 10,000 bytes, beyond which the publish time increased.
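A sketch of this computation (the relation itself is straightforward arithmetic; the function and struct names are illustrative):

```c
/* From a measured publish time, derive the maximum theoretical send rate
 * and throughput for a given payload size, as described above. */
typedef struct {
    double max_rate_hz;           /* messages per second */
    double max_throughput_mbps;   /* megabits per second */
} publish_limit_t;

static publish_limit_t publish_limit(double publish_time_s, int payload_bytes) {
    publish_limit_t lim;
    lim.max_rate_hz = 1.0 / publish_time_s;   /* back-to-back sends */
    lim.max_throughput_mbps = lim.max_rate_hz * payload_bytes * 8.0 / 1e6;
    return lim;
}
/* Example: a 50 us publish time and a 1024-byte payload give a theoretical
 * maximum of 20,000 msg/s, or about 164 Mbps. */
```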
As expected, Figure 7 shows that as the message size increases, the maximum send rate decreases. Ultimately, this dictates the amount of data that can be pushed across each architecture.
MONARCH's throughput limit of roughly 300 Mbps is evident in this test. Other aspects of the framework were detrimentally affected when it was pushed to a throughput greater than approximately 250 Mbps, including large latencies, dropped messages, and messages delivered out of order. This is demonstrated in Figure 9, which shows the percentage of samples considered invalid because they were either not delivered or arrived with a latency greater than 10 milliseconds relative to the send time; this validity rule is sketched below. Messages delivered with latency greater than 10 ms were regarded as lost because some GBU-X subsystems need message latency at the microsecond level (or less), so messages arriving with 10 ms of latency are useless to the subscriber.
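A small sketch of this validity rule (the cutoff is the 10 ms threshold stated above; the names are illustrative):

```c
/* Classify a sample as invalid if it was never delivered or if its
 * measured latency exceeded the 10 ms cutoff described above. */
#include <stdbool.h>
#include <stdint.h>

#define LATENCY_CUTOFF_NS 10000000LL   /* 10 ms in nanoseconds */

static bool sample_invalid(bool delivered, int64_t latency_ns) {
    return !delivered || latency_ns > LATENCY_CUTOFF_NS;
}
```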
Just like message delivery, message latency with MONARCH is affected by the throughput limit. When the throughput limit (300 Mbps) was reached, a large spike in average latency was observed. Figure 10 shows the relationship of payload size and throughput to latency. It is worth noting that the 4000-byte message was the first message size to achieve a throughput greater than 250 Mbps. Another important aspect of the graph is that the highest-latency data point typically corresponds to a higher message rate, since a higher rate produces increased throughput, which in turn affects latency. Also included in the graph is the transmission latency, which is how long a message of the specified byte size takes to travel from one machine to the other at 1 Gbps; the relation is given below. The measured latency includes the time it takes to create and send the message off the publishing machine, the transmission latency, and the time it takes for the subscriber to process the incoming message.
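For reference, the transmission latency at line rate follows directly from the payload size (neglecting protocol and framing overhead; the worked number is illustrative, not a figure from the paper):

\[
t_{\mathrm{trans}} = \frac{8N}{R},
\]

where N is the payload size in bytes and R the link rate in bits per second; a 4096-byte message at 1 Gbps thus takes about 8(4096)/10^9 ≈ 32.8 µs on the wire.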
When the throughput was kept below 250 Mbps, MONARCH averaged latencies 3.5 times longer than the equivalent DDS latencies. Above MONARCH's apparent threshold of 350 Mbps, performance showed a clear degradation with respect to latency, which reached well over 1 second for some messages larger than 1000 bytes.
This test (the 1000 messages/sec, 1024-byte case) was selected because each framework achieved a very accurate send rate, no samples were lost, and it was within MONARCH's throughput range. For that test, the average latency for MONARCH was 211.5 microseconds with a standard deviation of 19.2 microseconds. This is well within the threshold for the architectural requirements as defined in Table 1.
Figure 12: Analysis of time to publish the message and the size of the message for MONARCH
MONARCH has a publish time half that of DDS for messages smaller than 750 bytes. However, when the message size exceeds 1000 bytes, MONARCH's publish time exceeds DDS's.
As expected, Figure 13 shows that as the message size increases, the maximum send rate decreases. Ultimately, this dictates the amount of data that can be pushed across each architecture. This decrease in send rate with message size is much steeper for MONARCH than the corresponding DDS curve in Figure 7.
REFERENCES
[1] Agassounon, William, "MONARCH vs. DDS Comparative Performance Evaluation," Textron Systems, Massachusetts (2016).
[2] Benedick, Fred, "Plug and Play Architecture for Modular Weapons," WINTEC, Inc., Florida (2015).
[3] Chin, S. S., Missile Configuration Design, University Microfilms, Xerox Co., Michigan (1971).
[4] Dorf, Richard C., and Bishop, Robert H., Modern Control Systems, Pearson Education, Inc., New Jersey (2011).