Consequently, STORM routinely creates gigabytes of output from a single replication. When multiple replications are required, even more data is generated. Moreover, this output is typically not in a form that can immediately be used for analysis. Thus, some type of post-processing, e.g., filtering and transformation, is required to produce a reduced set of data that is suitable for subsequent analysis.
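As a rough illustration of the post-processing described above, and only that (the file name, column names, and event types below are assumptions, not STORM's actual output format), a filter-and-transform pass might look like this in Python:

```python
import pandas as pd

# Hypothetical raw event log exported from a single replication.
# File name, column names, and event types are illustrative, not STORM's real schema.
raw = pd.read_csv("replication_001_events.csv")

# Filter: keep only the event classes the analyst cares about.
key_events = raw[raw["event_type"].isin(["ENGAGEMENT", "KILL", "SENSOR_DETECT"])]

# Transform: aggregate to one row per (side, event type, simulation day).
# The result is far smaller than the raw stream but still answers campaign-level questions.
reduced = (
    key_events.assign(sim_day=key_events["sim_time_hours"] // 24)
    .groupby(["side", "event_type", "sim_day"])
    .size()
    .reset_index(name="count")
)

reduced.to_parquet("replication_001_reduced.parquet")
```

Doing this once per replication leaves a small table per run that can be concatenated and compared across the whole set of replications.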
Sometimes the most difficult aspect of gaining insights from a high-dimensional set of output is ‘putting it all together’ to form a coherent narrative that describes the following (a minimal event-record sketch follows the list):
(1) which major entities and platforms initiated key actions,
(2) what happened or failed to happen,
(3) when and where key combat events occurred, and, probably the most difficult to ascertain,
(4) why major events or outcomes occurred or didn’t occur.
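One way to make those four questions concrete, purely as a sketch (the field names here are invented for illustration and are not anything STORM emits), is to treat each key event as a record that carries the who/what/when/where explicitly and leaves a slot for the hard-won ‘why’:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class KeyEvent:
    """One row of an analyst-facing narrative table (illustrative only)."""
    initiator: str                  # (1) which entity or platform initiated the action
    action: str                     # (2) what happened, or was attempted and failed
    sim_time_hours: float           # (3) when it occurred, in simulation time
    location: tuple[float, float]   # (3) where: latitude, longitude
    outcome: str                    # success / failure
    cause: Optional[str] = None     # (4) why, usually filled in last after tracing upstream events

# A record an analyst might assemble while tracing a single engagement.
evt = KeyEvent(
    initiator="BLUE_SAG_1",
    action="ASCM salvo against RED_DDG_3",
    sim_time_hours=37.5,
    location=(24.1, 121.9),
    outcome="failed",
    cause=None,  # e.g., targeting data may have been stale at launch time
)
```

Answering (4) typically means joining a record like this back against the upstream sensor and command events that preceded it, which is why the ‘why’ field is the last to be filled in.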
From the gigabytes of output that STORM produces, a ‘feature extraction’ step is first needed to determine which features of the data are most relevant and meaningful to the campaign analyst. Once the key data is pulled from the raw data stream, we experiment with new methods for visualizing and analyzing the simulation output. This supports verification and validation in that it can identify both ‘bugs’ in the simulation code and unintended defects in the many combat plans that analysts must create during scenario development.
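As a sketch of how that kind of screening can double as verification and validation support (the thresholds, file names, and columns are assumptions carried over from the earlier sketch, not STORM's actual tooling), one can compare each replication against the cross-replication norm and flag outliers for a human to examine:

```python
import pandas as pd

# Reduced per-replication tables produced by a filtering step like the one sketched earlier
# (file names and column names are hypothetical).
runs = []
for i in range(1, 31):
    df = pd.read_parquet(f"replication_{i:03d}_reduced.parquet")
    df["replication"] = i
    runs.append(df)
combined = pd.concat(runs, ignore_index=True)

# Feature of interest: daily kill counts per side.
kills = combined[combined["event_type"] == "KILL"]
stats = kills.groupby(["side", "sim_day"])["count"].agg(["mean", "std"]).reset_index()

# Flag replication-days more than three standard deviations from the cross-replication mean.
merged = kills.merge(stats, on=["side", "sim_day"])
merged["anomalous"] = (merged["count"] - merged["mean"]).abs() > 3 * merged["std"]

# Anything flagged here goes to an analyst, who decides whether it reflects a simulation bug,
# a defect in a combat plan, or a genuinely interesting result.
print(merged.loc[merged["anomalous"], ["replication", "side", "sim_day", "count", "mean", "std"]])
```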
RE: https://informs-sim.org/wsc14papers/includes/files/440.pdf
DECLAS 👇🏻👇🏻👇🏻👇🏻👇🏻
You may enjoy this little tidbit from Jane's International Defense Review from 1991 (https://youtu.be/_tel657GtXA), a defense contractor trade magazine - unveiling the hardware side to STORM: an "invisible" UFO consisting of thousands (16,000+) of sensors (not just cameras) and an LED/LCD "skin" that projects the background onto the opposite sides of the craft, all the while using a 1.6 GHz signal that's been present any time a UAP has been observed. It functions as a data hub for the BEAST system (Battle Engagement Area Simulation Tracker)... 1991, people (!!!) I'm sure this tech is WAY more advanced now... and STORM is most likely receiving data from this type of hardware in cases where an actual conflict is happening. I don't think that STORM is merely for "theoretical" gaming out of possible scenarios alone...
RE: JANES
If you made it this far, here's a modern case study and video from JANES
Q: Can you give us any examples of Thales defence clouds currently in operation?
A: Nexium Defence Cloud (interesting name... fkers), our modular infrastructure solution that is adapted to the security and resilience needs of the defence sector, is now being integrated into accredited systems at ‘secret’ levels and beyond in France and within NATO.