2007 Paper No. 7399
Florida Institute of Technology
SIMnetrix Solutions, LLC.
Suicide bombers have become increasingly deadly, and there is an urgent need to develop innovative methods to prevent or mitigate the casualties and aftermath of such a catastrophic event. Performing simulations with varying crowd formations and densities is one approach to better understanding the effects of such an attack. This paper explores and estimates the effects of suicide bombers across multiple crowd formations and their respective densities through a virtual simulation. The ultimate goal of our empirical analysis was to determine the optimal crowd formation as it relates to a reduction in the deaths and/or injuries of individuals in the crowd. The modeled crowd formations were based on real-world environments and consisted of a cafeteria, concert hall, mosque, street, hotel, bus, airport, and university campus. Specific simulation inputs are the number of individuals in the vicinity, the walking speed of the attacker, the time associated with the trigger, the setting (crowd formation), and the total weight of TNT. Results indicated that the worst crowd formation is a circular one (e.g., concerts), with a 51% death rate and a 42% injury rate, reaching a 93% effectiveness measure. Vertical rows (e.g., mosques) were found to be the best crowd formation for reducing the effectiveness of an attack, with a 20% death rate and a 43% injury rate, reaching a 63% effectiveness measure. Line-of-sight with the attacker, rushing toward the exit, and stampeding were found to be the most lethal choices both during the attack and post-explosion. These findings, although preliminary, may have implications for emergency response and counterterrorism. There are a number of physical and social variables we plan to integrate into this simulation in the future, including physical objects (e.g., landscape, furniture) and psychological variables (e.g., crowd behaviors).
There are numerous applications for this simulation, ranging from special event planning to emergency response.
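The effectiveness measure reported above appears to be the simple sum of the death and injury rates (51% + 42% = 93% for the circular formation). A minimal sketch of that tally follows; the function name and the additive interpretation of effectiveness are our assumptions, not the paper's definitions:

```python
def attack_effectiveness(deaths, injuries, crowd_size):
    """Return (death_rate, injury_rate, effectiveness) as fractions of
    the crowd. Effectiveness is taken here as death rate plus injury
    rate, matching the figures reported in the abstract; this
    interpretation is an assumption, not the authors' stated formula."""
    death_rate = deaths / crowd_size
    injury_rate = injuries / crowd_size
    return death_rate, injury_rate, death_rate + injury_rate

# Circular formation from the abstract: 51 dead and 42 injured per 100.
death, injury, effectiveness = attack_effectiveness(51, 42, 100)
```

Applying the same tally to the vertical-row figures (20 dead, 43 injured per 100) reproduces the reported 63% effectiveness.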
2007 Paper No. 7229
Texas A&M Engineering Program
College Station, TX
With the introduction into the OneSAF system of the Contemporary Operating Environment (COE) Opposing Force (OPFOR), as reflected in Iraq, Afghanistan, and elsewhere, a different and lethal set of tactics, forces, and equipment has been developed to represent the current ground truth faced by the Armed Forces of the United States and its allies. The COE OPFOR comprises the collective set of organizations (combatant, noncombatant, corporate, non-government, government, and international) existing in and acting on the environment in the Blue Force (BLUFOR) area of operations, as representative of current military operations. They can be categorized as conventional forces (Regular Armed Forces) or irregular forces (Paramilitary, Guerrilla, Terrorist, Militia, and Combatant and Non-combatant Civilians on the Battlefield). A critical component for the accurate portrayal of these organizations in OneSAF is the representation of the command and control means by which the components of the COE OPFOR will synchronize and direct their activities. The COE OPFOR will use components of the Civilian Information Infrastructure (CII) as a principal or alternate Battle Command System and Information Operations mechanism. These CII means are collectively termed Alternative Communications Means (ACM), as they represent a departure from the use of combat net radios for battle command. Irregular COE OPFOR forces will use ACM as both their primary battle command system and information operations mechanism. Conventional COE OPFOR forces will use ACM as a parallel battle command system and as the primary information operations mechanism, since they anticipate their tactical communications will be disrupted or destroyed over time and know BLUFOR is reluctant to disrupt the CII.
This paper describes the identification and decomposition of these ACM, the description of their performance, how they can be used by the COE OPFOR and how they can be integrated into the OneSAF, and other simulations.
2007 Paper No. 7333
Dept. of Systems Engineering, United States Military Academy
West Point, NY
Paul W. Richmond, Ph.D., P.E.
U.S. Army Engineer Research and Development Center
Insurgents have effectively employed asymmetric tactics, such as suicide vehicle borne improvised explosive devices (SVBIEDs), against counterinsurgent (COIN) forces conducting Stability, Security, Transition, and Reconstruction (SSTR) Operations. The political, cultural, and physical settings in which they implement these tactics are not as readily constrainable as they are in full combat operations. These factors, overlaid on an urban backdrop, add to the complexity and challenges of detecting and defeating this threat. This paper discusses our current set of experiments, results, and insights gained regarding the effects of traffic control point (TCP) strategies on SVBIED mission outcome. Agent based modeling and simulation environments were used in this work for exploratory modeling across a wide range of parameters. The intent is to apply these insights in the future to develop focused experiments in more physics-based, traditional simulation environments for a tiered analysis capability. The current research extends our previous work by incorporating denser and more complex urban settings, traffic, multiple targets, and area coverage strategies that can affect SVBIED behavior based on awareness of TCPs. Our goal is ultimately to generate insights that will assist counterinsurgent forces in developing strategies that are robust against a range of SVBIED behaviors.
2007 Paper No. 7046
US Army Research Development and Engineering Command, Simulation and Training Technology Center
Embedded training is a key requirement for many future and current force systems, making it a very important capability for Army transformation. Despite its importance, few demonstrations or tests have been conducted on which to base embedded training systems implementation. For the past five years, the US Army Research Development and Engineering Command (RDECOM) Simulation and Training Technology Center (STTC) has researched embedded training solutions applicable to individual Soldiers and small teams. To assess the utility of these solutions under field operating conditions, STTC sought and found a meaningful culminating event in the Army’s premier live discovery experiment, the Air Assault Expeditionary Force (AAEF) experiment. Three dismounted embedded training prototypes were selected for use in AAEF. The first was an immersive, virtual, untethered, Soldier-worn system, interoperable with other Army simulation systems. The second was a tablet computer-based system that provided leader mission planning and walkthrough. Both systems displayed a high fidelity virtual terrain database of the McKenna training area at Ft. Benning, where most of the AAEF experiment was conducted. The third application was a first-person shooter game engine modified to operate on the Soldier-worn prototype and supporting workstations.
During the experiment the Soldiers used these systems for mission planning, mission rehearsal, and after action review of the rehearsal before carrying out live AAEF missions. Generally, the Soldiers’ reactions toward the systems were positive, and the systems were seen to have potential for future development. The resultant feedback from this experiment can direct Army research and implementation of embedded training. This paper will discuss AAEF, the embedded training systems used there, and the manner in which these systems were used. It will provide anecdotal and questionnaire-based Soldier feedback on their impressions of the training technologies…
2007 Paper No. 7320
Continuum Dynamics, Inc.
CAE USA, Inc.
Modeling and simulation developments have resulted in high fidelity pilot-in-the-loop flight simulators providing realistic training environments. Modeling challenges continue to exist, in particular for accurate simulation of the near-ship environment critical to landing a helicopter onto the flight deck of a moving ship under various wind conditions. Providing an effective simulated environment requires modeling of the highly unsteady airwake resulting from bluff-body aerodynamic interactions of the ship superstructure and hangar near the flight deck and in close proximity to the ship as it passes through the airstream. This paper describes the development of a U.S. Navy rotary wing flight simulation with turbulence effects, including high-fidelity representation of the ship airwake environment. The spatially- and time-varying flow field around the ship is determined off-line using a hybrid, inviscid CFD methodology that is well-suited for representing the turbulent environment several ship lengths downwind from the flight deck with moderate computational requirements. Results from this off-line analysis are formulated into a ship airwake database for multiple landing platforms and wind-over-deck conditions suitable for real-time pilot-in-the-loop virtual simulation. The paper describes the development of the simulation flight dynamics model, development and validation of the CFD-based ship airwake flow fields, and integration of the ship airwake database within the aerodynamic model. Implementation issues associated with integrating the ship airwake database into the flight dynamics model, particularly real-time execution and memory management, are identified, and the approach to overcoming these issues is described.
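As a rough illustration of the kind of lookup such a precomputed airwake database implies, the sketch below bilinearly interpolates a gridded velocity component at a query point. This is purely a hypothetical example: the paper's database is spatially three-dimensional, time-varying, and indexed by wind-over-deck condition, and its actual layout is not described here.

```python
def sample_airwake(grid, origin, spacing, x, y):
    """Bilinearly interpolate a gridded velocity component at (x, y).
    grid[i][j] holds the sample at (origin[0] + i*spacing,
    origin[1] + j*spacing); the caller must query inside the grid."""
    fx = (x - origin[0]) / spacing
    fy = (y - origin[1]) / spacing
    i, j = int(fx), int(fy)          # lower-left cell corner
    tx, ty = fx - i, fy - j          # fractional position in the cell
    v00, v10 = grid[i][j], grid[i + 1][j]
    v01, v11 = grid[i][j + 1], grid[i + 1][j + 1]
    return ((v00 * (1 - tx) + v10 * tx) * (1 - ty) +
            (v01 * (1 - tx) + v11 * tx) * ty)
```

In a real-time simulation such a sampler would be called every frame at the rotor and fuselage aerodynamic stations, which is why memory layout and paging of the database matter.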
2007 Paper No. 7352
Institute for Defense Analyses
Have you ever given a tank entity the command to follow a road and then thought you were simulating a “Dancing With The Stars” episode? Have you ever asked an Internet utility to provide a travel route and then found the result unintuitive and longer than expected? In each case, problems in the digital representation of the road networks can be to blame. The tank entity might actually be following a road that includes severe kinks and kickbacks. The route planner might be defeated by breaks in the road network. Much of the digital data used to create simulation representations of the physical environment comes from the National Geospatial-Intelligence Agency (NGA). While the NGA has a large holding of internally-produced geospatial data, the agency’s current strategy includes substantial data production under contract as well as a large cooperative effort with other nations under the Multinational Geospatial Co-production Program (MGCP). The development, codification, and enforcement of detailed quality standards have emerged as key to this acquisition strategy. The MGCP countries have jointly produced detailed requirements for the relationships between and quality characteristics of feature data elements; however, these specifications have been produced for human consumption. In some cases, the documentation lacks the specificity necessary to support algorithm development to enforce the standards. This paper describes the type of quality standards that are to be applied in the future production of geospatial feature data and illustrates a process to transform semantic descriptions into specific guidance suitable for software implementation. The process includes experimentation to determine appropriate geometric reasoning strategies that will permit identification of substandard data while minimizing false positive notifications. The paper describes a typical problem, the experiment designed to address the problem, and the results of conducting the experiment.
The paper concludes with observations on the potential impact of these geospatial data…
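One example of the geometric reasoning strategies this abstract describes is flagging road vertices whose turn angle exceeds a threshold, a plausible check for the "severe kinks and kickbacks" mentioned above. Both the check and the 75-degree threshold are illustrative assumptions on our part, not the MGCP specification:

```python
import math

def kink_vertices(polyline, max_turn_deg=75.0):
    """Return indices of interior vertices where the road heading
    changes by more than max_turn_deg degrees (candidate kinks or
    kickbacks). The 75-degree default is an illustrative choice."""
    kinks = []
    for i in range(1, len(polyline) - 1):
        (x0, y0), (x1, y1), (x2, y2) = polyline[i - 1:i + 2]
        heading_in = math.atan2(y1 - y0, x1 - x0)    # into the vertex
        heading_out = math.atan2(y2 - y1, x2 - x1)   # out of the vertex
        turn = abs(math.degrees(heading_out - heading_in)) % 360.0
        turn = min(turn, 360.0 - turn)               # fold into [0, 180]
        if turn > max_turn_deg:
            kinks.append(i)
    return kinks

# A road segment that nearly doubles back on itself at vertex 1.
suspect_road = [(0.0, 0.0), (10.0, 0.0), (1.0, 1.0)]
```

Tuning such a threshold against known-good data is exactly the kind of experimentation needed to suppress false positives on legitimately sharp, but valid, switchbacks.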
2007 Paper No. 7314
University of Central Florida
In many simulation systems, dead reckoning is used to minimize network bandwidth utilization. The Distributed Interactive Simulation (DIS) standard is one example of a protocol that uses dead reckoning. Many game engines also use the technique. Until a few years ago, graphics hardware used a fixed pipeline. In recent years PC video cards have been built with a programmable architecture. Collectively, the programmable pipeline is referred to as the Graphics Processing Unit (GPU). As GPU programming has progressed, a research field has grown around applying non-graphical algorithms to the GPU. Image processing, numerical equations, and illumination computation are some examples of what is called General Purpose GPU programming. We performed a computational study of dead reckoning comparing the GPU with the Central Processing Unit (CPU). We tested various quantities of simulated entities using a variety of CPUs and GPUs. GPUs can potentially dead reckon millions of entities in a single pass, but suffer the requirement of data readback from the video card, which is often slower than “outbound” data transfer. The study is presented, followed by an analysis of the results.
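For readers unfamiliar with the technique, the core of DIS-style dead reckoning is a short extrapolation plus a drift-threshold test: each host extrapolates remote entities between updates, and a sender issues a new entity-state message only when its true state drifts too far from what the remote hosts would extrapolate. A minimal CPU-side sketch (the 1.0 m threshold is an illustrative value, not from the standard):

```python
import math

def dead_reckon(pos, vel, acc, dt):
    """Second-order extrapolation, as in the DIS dead reckoning
    models: p(t+dt) = p + v*dt + 0.5*a*dt^2, applied per axis."""
    return tuple(p + v * dt + 0.5 * a * dt * dt
                 for p, v, a in zip(pos, vel, acc))

def needs_update(true_pos, extrapolated_pos, threshold=1.0):
    """Send a fresh entity-state update only when local truth has
    drifted beyond the threshold from the shared extrapolation."""
    return math.dist(true_pos, extrapolated_pos) > threshold
```

The per-entity independence of this computation is what makes it an attractive candidate for a GPU pass, with the readback cost noted above as the offsetting penalty.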
2007 Paper No. 7437
Information Sciences Institute, Univ. of So. Calif.
Marina del Rey, California
The simulation community has often been hampered by constraints in computing: not enough resolution, not enough entities, not enough behavioral variants. Higher performance computers can ameliorate those constraints. The use of Linux Clusters is one path to higher performance; the use of Graphics Processing Units (GPU) as accelerators is another. Merging the two paths holds even more promise. The authors were the principal architects of a successful proposal to the High Performance Computing Modernization Program (HPCMP) for a new 512 CPU (1024 core), GPU-enhanced Linux Cluster for the Joint Forces Command’s Joint Experimentation Directorate (J9). In this paper, the basic theories underlying the use of GPUs as accelerators for intelligent agent, entity-level simulations are laid out, the previous research is surveyed and the ongoing efforts are outlined. The simulation needs of J9, the direction from HPCMP and the careful analysis of the intersection of these are explicitly discussed. The configuration of the cluster and the assumptions that led to the conclusion that GPUs might increase performance by a factor of two are carefully documented. The processes that led to that configuration, as delivered to JFCOM, will be specified and alternatives that were considered will be analyzed. Planning and implementation strategies are reviewed and justified. The presentation will then report in detail about the execution of the actual installation and implementation of the JSAF simulation on the cluster in August 2007. Issues, problems and solutions will all be reported objectively, as guides to the simulation community and as confirmation or rejection of early assumptions. Lessons learned and recommendations will be set out. Original performance projections will be compared to actual benchmarking results using LINPACK and simulation performance. Early observed operational capabilities of interest are proffered in detail herein.
2007 Paper No. 7267
Systems Engineering & Assessment Ltd
Bristol, United Kingdom
UK MOD DE&S-Sea Systems Directorate
Bristol, United Kingdom
The simulation of aircraft launch and recovery operations from naval vessels provides a unique set of challenges, requiring realistic modelling of the interactions between the air vehicle, the ship platform, and the environment. The aim of the UK Ship/Air Interface Framework (SAIF) programme is to use the industry standard High Level Architecture (HLA) to provide a realistic real-time simulation of the dynamic interface between the ship and the air vehicle. The initial phase of the project has developed a Ship/Helicopter Operating Limit (SHOL) prediction capability, utilising a networked version of the Merlin helicopter flight simulator at the Royal Naval Air Station (RNAS) Culdrose, UK. By developing an accurate and validated simulation capability, the results of simulation and flight test trials may be combined to maximise the aircraft’s operating envelope. The SAIF architecture is highly flexible, and can be adapted to support the modelling of both fixed and rotary wing launch and recovery operations, including Maritime Unmanned Air Vehicle (MUAV) concepts. This paper summarises the development, test and validation of the SAIF architecture, and highlights where the programme is aiming to make further fidelity improvements. Of particular importance is the highly complex real-time modelling of the airwake field around the ship, which can directly affect the level of pilot workload required to safely operate the air vehicle.
2007 Paper No. 7378
Science Applications International Corporation
Blue Sky Computer Systems, Inc.
The Battle Lab Collaborative Simulation Environment (BLCSE) federation is the Army Training and Doctrine Command’s (TRADOC) largest federation serving the Army’s analytical community. BLCSE has a large, complex, federation-of-federations architecture consisting of 29 different constructive and virtual simulations at 14 geographically distributed sites. The current BLCSE technology environment uses the Distributed Interactive Simulation (DIS) protocol as its primary inter-federate communications protocol. DIS interoperability standards were developed in the late 1980s to support the linkage of simulations exchanging low entity-count data, principally entity-state messages between virtual training devices (e.g., SimNet devices). Active entity counts within BLCSE federations have been steadily increasing as federations grow to support more comprehensive analyses. BLCSE has reached a point where DIS protocol communications cannot reliably manage the federation message load without an externally managed message distribution scheme. The effects of DIS message saturation, either on the network or at the application itself, are lost messages or incorrectly sequenced messages. Both problems lead to entity state anomalies and lowered data reliability. In view of these challenges, the Army Capabilities Integration Center’s (ARCIC) Simulations Division Director approved a Simulations Division initiative, in May 2005, to transition the BLCSE federation from DIS (IEEE 1278) to High Level Architecture (HLA, IEEE 1516) interoperability standards. However, TRADOC plays an important role in the Army’s Cross Command Collaboration Effort (3CE) organization, which has adopted the Department of Defense (DoD) HLA NG 1.3 standard. In order to provide interoperability with the 3CE federation, BLCSE had to implement the NG 1.3 protocol as an intermediate solution.
After a year and a half of effort, 20 BLCSE federates are able to communicate in the HLA 1.3 environment. To complete the project’s goal…
2007 Paper No. 7216
Raytheon Virtual Technology Corporation
Achieving simulation interoperability between autonomous federations is always a challenging problem. Despite the fact that different federations might accomplish seemingly similar tasks, they frequently implement solutions using drastically different approaches. A recent federation bridge development project implemented a unique approach to federation interoperability between differing Run-Time Infrastructure (RTI) solutions, Federation Object Models (FOMs), and federation level protocols. The ability to provide interoperability between two High Level Architecture (HLA) federations in a single software process using different versions of the RTI allows for an interoperability solution that requires no implementation changes to either federation while demonstrating the collective benefits of combining the two federations. Doing so poses a unique challenge, however, as one normally cannot compile and link an application in this way. This challenge can be overcome using a specialized proxy that enables different versions of the RTI to coexist simultaneously in a single software process. This paper details the technological approach of using such a proxy for a federation bridge, including its applicability, architecture, and performance characteristics. The approach is proven via the successful implementation of a federation bridge that enables interoperability between two federations using the DMSO 1.3 NG v4 and Raytheon VTC NG Pro v2.0.4 RTIs. Examples of using the techniques presented in this paper in other situations are also given, as well as alternative approaches.
2007 Paper No. 7421
The MITRE Corporation USJFCOM J7
AEgis Technologies Group Inc. USJFCOM J7
As simulation users adopted the High Level Architecture (HLA) to promote interoperability, composability, and reusability, Federation Object Model (FOM) development and use necessarily grew apace. HLA federations have in many cases delivered on these promised “ilities”, yet a simulation fortunate enough to be a member of multiple federations often does not realize these same benefits. Membership in multiple federations requires that the individual federate interoperate with multiple FOMs. This in turn usually equates to the federate developing multiple interfaces with limited opportunity for reuse. The Modeling and Simulation (M&S) Community has recognized this issue and sought its redress through composable object model approaches such as the Base Object Model (BOM) technology. This paper reports on work accomplished under the auspices of United States Joint Forces Command (USJFCOM) to decompose the FOMs used by the Joint Warfighting Center (JWFC), identify and eliminate redundant elements, and develop a composite Joint FOM. The effort is intended as a “proof-of-principle” on the basis of which USJFCOM might solicit broader community support in developing an object model library and process for composing FOMs for use by the Joint and Multinational M&S community.
2007 Paper No. 7259
Alion Science and Technology
In 2006, the United States Joint Forces Command (US JFCOM) Joint Innovation and Experimentation J9 Directorate conducted the Urban Resolve 2015 (UR 2015) Experiment. UR 2015 was designed to examine specific solutions to the challenges that will likely confront U.S. military forces in the future urban environment. This “human in the loop” experiment provided training for senior military personnel in decision-making processes by stimulating real-world Command, Control, Communication, Computer, and Intelligence (C4I) systems using an array of simulation technologies. The experiment involved more than 1,000 people at 19 different sites across the United States. It featured extensive use of modeling and simulation (approximately 30 individual simulations including Joint Semi-Automated Forces (JSAF) and OneSAF Testbed (OTBSAF)) running on over 450 computers to create a robust virtual environment that replicated what the urban environment may be like in the future after a major crisis has occurred. This paper will begin by providing background information on the numerous sites and applications that had to come together to create the UR 2015 federation. Additionally, it will examine the tasks required to integrate these sites and analyze not only the successes, but just as importantly the problem areas encountered. This paper will conclude with guidelines and recommendations for streamlining complex integration efforts when incorporating numerous, diverse simulations distributed over a large number of participating sites.
2007 Paper No. 7266
The development of high quality initialization data supporting correlated environmental representations, force structure, and targeting is one of many challenges that joint training transformation must resolve. There are several impediments limiting resolution of this initialization data challenge. For one, the scarcity of source data of sufficient quality and resolution to support system initialization drives data producers toward antiquated processes that are manpower intensive, error prone, and cost-prohibitive to most programs. Compounding this limitation is the plethora of legacy systems with custom data requirements that impede interoperability and consume resources which might otherwise accelerate standards convergence. Transformational systems to correct these problems will not be available for almost a decade, and thus legacy systems will remain for the foreseeable future. There are a number of initiatives holding promise, but convergence has been painfully slow.
To break this cycle, the Joint Rapid Scenario Generation (JRSG) team formulated a series of spiral developments and technology demonstrations now called the Joint Training Data Services (JTDS). The first two JTDS spirals sought to harmonize a wide range of existing and emerging initialization systems focused on terrain/geospatial data and force structure data. In 2007, the spiral demonstrations integrate force structure and terrain data and begin to address correlated targeting data. JFCOM is partnering with the National Geospatial-Intelligence Agency (NGA) and plans to include the Defense Information Systems Agency (DISA) and other interested agencies in this partnership in order to extend and leverage these JTDS spirals into a pilot project on the Global Information Grid (GIG). This pilot would seek to provide initialization data to systems supporting the Range of Military Operations. Many of the processes that are currently performed manually would be automated. Perhaps most importantly, this pilot project will foster stronger relationships between the Command, Control, and Intelligence (C2I), Modeling and Simulation (M&S), and other communities of interest in order to accelerate convergence.
2007 Paper No. 7165
AT&T Labs Research
Florham Park, NJ
Scientific Research Corporation
Both training and testing require accurate simulation of direct fire engagements, such as rifle shots or tank main guns. Today’s systems use lasers to transfer shot information from shooter to targets. The limitations of lasers are well known, and these limitations detract from training and testing realism. The U.S. Army’s One Tactical Engagement Simulation System (One TESS) program is working to improve this state of affairs by augmenting or replacing lasers with “electronic bullets”, information packets transferred by wireless networking between shooter and targets. Such packets contain sensor information, including the shooter’s position and weapon orientation at the time of the shot, allowing geometric pairing calculations to determine who would be hit by the shot. However, while pure geo-pairing is the future goal, sensors are not yet accurate enough to support pure geo-pairing that is more accurate than laser-based systems. Hybrid approaches combine lasers with e-bullets in an attempt to improve laser results by fusing imperfect e-bullet-conveyed sensor information with laser packet information. The goal of this study is to compare several of these candidates, both to determine which approach is most accurate today and to estimate when sensors will be accurate enough for pure geo-pairing to replace laser-based solutions. Our conclusions are based upon an extensive simulation study of several extant approaches.
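A simplified, two-dimensional illustration of the geometric pairing idea: project the target onto the shot line and test the perpendicular miss distance. The actual One TESS calculations are three-dimensional and must fuse imperfect sensor data with ballistics, so this sketch, with its hypothetical radius and range values, conveys only the concept:

```python
import math

def geo_pair(shooter, azimuth_deg, target, target_radius=0.5, max_range=500.0):
    """Return True if a shot fired from `shooter` (x, y) along
    `azimuth_deg` passes within `target_radius` meters of `target`.
    Radius and max range are illustrative, not One TESS values."""
    dx = math.cos(math.radians(azimuth_deg))   # unit shot direction
    dy = math.sin(math.radians(azimuth_deg))
    tx = target[0] - shooter[0]
    ty = target[1] - shooter[1]
    along = tx * dx + ty * dy                  # distance along the shot line
    if along < 0 or along > max_range:
        return False                           # behind shooter or out of range
    miss = abs(tx * dy - ty * dx)              # perpendicular miss distance
    return miss <= target_radius
```

The study's central question can be restated in these terms: how large do the position and orientation errors feeding `shooter` and `azimuth_deg` get before this calculation becomes less reliable than a laser hit?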
2007 Paper No. 7122
Southwest Research Institute
San Antonio, Texas
Tactical data links are critical to network centric battlefield planning and execution. There are many legacy data links that must be optimized and integrated into modern battle spaces. Implementations of data links have been platform-centric with limited regard to how other military assets could use or process the data to be transmitted. There have been some attempts to catalog each platform’s implementation, but little has been done with the data to support automated planning and evaluation of data link performance or levels of interoperability. This paper describes an investigation into alternative methods for simulation of data links to support planning, design, and implementation of tactical data links. Data link simulations created to date have focused on performance and interoperability at the physical layer while modeling data and information flow at a statistical level only, relying on reference implementations of military standards. The methods investigated and presented in this paper seek to use existing physical layer data link models while using actual documented platform implementation data to develop accurate aircraft communication and information exchange models. These accurate aircraft data link implementation models, when coupled with equally accurate aircraft motion and behavior models, will allow true interoperability and information flow analysis without prolonged post-integration flight testing. The approach has considerable potential impacts in the areas of platform integration, training simulations and joint interoperability testing.
2007 Paper No. 7323
NASA/Glenn Research Center (GRC)
Since the Vision for Space Exploration (VSE) announcement, NASA has been developing a communications infrastructure that combines existing terrestrial techniques with newer concepts and capabilities. The overall goal is to develop a flexible, modular, and extensible architecture that leverages and enhances terrestrial networking technologies that can either be directly applied or modified for the space regime. In addition, where existing technologies leave gaps, new technologies must be developed; an example is dynamic routing that accounts for constrained power and bandwidth environments. Using these enhanced technologies, NASA can develop nodes that provide capabilities such as routing, store and forward, and access-on-demand. But with the development of the new infrastructure, challenges and obstacles will arise. The current communications infrastructure has been developed on a mission-by-mission basis rather than with an end-to-end approach; this has led to a larger ground infrastructure but has not encouraged communications between space-based assets. This alone provides one of the key challenges that NASA must address. With the development of the new Crew Exploration Vehicle (CEV), NASA has the opportunity to provide an integration path for the new vehicles and provide standards for their development. Some of the newer capabilities these vehicles could include are routing, security, and Software Defined Radios (SDRs). To meet these needs, the NASA/Glenn Research Center’s (GRC) Network Emulation Laboratory (NEL) has been using both simulation and emulation to study and evaluate these architectures. These techniques provide options to NASA that directly impact architecture development.
This paper identifies components of the infrastructure that play a pivotal role in the new NASA architecture, develops a scheme using simulation and emulation for testing these architectures and demonstrates how NASA can strengthen the new infrastructure by implementing these concepts.
2007 Paper No. 7226
Naval Air Warfare Center Training Systems Division
Existing bridging technologies such as Live Radio Bridges (LRB) and Virtual Tactical Bridges (VTB) successfully exchange transmissions between live and virtual communications assets. However, these technologies require a dedicated operational radio to serve as a relay for each circuit bridged. The one-to-one relationship between an operational relay and bridged circuit, in conjunction with the associated costs and restricted availability of operational radios, continues to constrain exercise planners. A two-year research effort, conducted by the Concept Development and Integration Laboratory (CDIL) at the Naval Air Warfare Center Training Systems Division (NAWCTSD) in Orlando, Florida, has resulted in the development of advanced capacity prediction methodologies coupled to a prototype Integrated Live to Virtual Communications Server (ILVCS). The ILVCS serves to reduce the operational resources required to bridge live and virtual communications during a Live, Virtual, Constructive (LVC) training event by utilizing a single relay for multiple bridged circuits. This paper will discuss the systems used to address issues such as latency, degradation and loss while allowing for real time control and switching of communications resources. Topics discussed will include techniques for achieving acceptable latency in live to virtual communications, hardware requirements for transceiver switch timing and radio frequency (RF) monitoring, and software requirements for real time control and management of the operational resources required to bridge live and virtual communications.
2007 Paper No. 7050
Salt Lake City, Utah
Virtual environment databases have traditionally included local areas of terrain, textured with either geo-specific photographic imagery or with geo-typical repeating imagery. In recent years, however, continuous whole-earth terrain skinning algorithms have replaced the limited local-area terrain models. These algorithms have elevated the need for corresponding continuous whole-earth texturing mechanisms. While continuous whole-earth image datasets are available at 10-15 m resolution, they are costly, storage intensive, and too coarse for a wide variety of training tasks. Synthesizing higher-resolution imagery offers an attractive alternative, both in terms of cost and training utility. A technique for run-time synthesis of whole-earth high resolution terrain imagery is described. Attention is paid to minimizing unnatural repetition and other artifacts. This technique includes run-time nested blending of multiple high resolution photographic insets. The correlation of synthetic terrain texture with 3D feature decoration is also discussed.
2007 Paper No. 7116
Applied Research Associates
When designing a synthetic environment terrain database format, developers face a tradeoff between physical storage, runtime performance, and data accuracy. The context of the simulation and particularly its specialized requirements heavily influence how the tradeoffs are made. One of the largest historical driving factors in how this balance has been struck has been the “domain” context. The virtual and constructive training domains drove most of the modern terrain format development. However, the requirements for live training are often significantly different. For example, the OneTESS player units allow minimal storage, require a small memory footprint, and necessitate a high degree of ground truth accuracy. Existing terrain formats fail to meet these requirements. OneTESS requires terrain resolution far beyond anything handled by previous “high end” simulations. However, OneTESS requires far fewer terrain services than traditional virtual and constructive systems. This duality makes OneTESS’s extreme representation requirements attainable - the tradeoffs between time, space, and accuracy are balanced in the context of a single, high-importance function. Furthermore, OneTESS must execute on a handheld player unit possessing highly limited resources and performance capability compared to current desktop workstations. In this paper, we discuss the OneTESS terrain requirements and the rationale for needing its own representation. We introduce a new terrain format specifically targeting the OneTESS live training and test domains. We describe its design and implementation and report the preliminary performance benchmarks of terrain services developed for this new terrain format. We conclude with ongoing efforts and future directions.
2007 Paper No. 7113
The primary objectives of the Naval Aviation Simulation Master Plan Portable Source Initiative (NPSI) are to increase visual database reuse, promote standardization, and lower life cycle acquisition costs for new system acquisitions, legacy platform trainer procurements, and major trainer visual upgrades. The NPSI datasets capture the prepared/corrected/refined visual source data in standard formats for reuse by other platforms. The NPSI datasets include imagery, elevation data, feature data, 3-D models, and metadata. The datasets are stored in the NPSI Archive, which currently contains three NPSI datasets along with additional imagery layers. In addition, there are several procurements underway that will deliver enhanced or new NPSI datasets. The intent of this paper is to propose quality assurance testing procedures and standards for examining NPSI datasets for placement into the archive. The quality assurance suite of tests will involve the various layers and the metadata that combine to make an NPSI Dataset. The testing will be used to evaluate datasets for compliance, to determine how the data will be archived, and to provide information to evaluate the data for future reuse. NPSI datasets, and the results of quality assurance testing, will be made available to contractors at Request for Proposal (RFP) to allow the contractor to better evaluate the NPSI Dataset against program requirements, make a realistic determination of data quality and potential for reuse, and assess the additional effort required for each future program.
2007 Paper No. 7246
Swedish Armed Forces
This paper presents the what and why of a development that could build a global-reach, cost-effective training capability for force transformation into global crisis response using a Persistent Partner Simulation Network. The purpose of the new Persistent Partner Simulation Network (P2SN) would be to provide capabilities to P2SN partners in support of education and training. P2SN will also establish capability standards “in the spirit of the U.S. Joint National Training Capability”. Both of these new concepts build on the Partnership for Peace (PfP) Simulation Network established in 1999 and on the lessons identified and learned in a number of related multinational events. Using a building-block approach, the end state of the developed P2SN training and simulation establishments in NATO/PfP will be represented in an event-driven P2SN capability, including an established set of operational requirements and an established set of system specifications.
The existing PfP simulation network is a set of protocols, standards, and processes needed to create the infrastructure and technical elements required to support a distributed simulation exercise. The protocols and standards enable Partner nations to create the hardware and software suites needed to participate in or lead exercise events, while the processes enable those Partners to quickly establish the required organization and communications network. The PfP simulation network continues to identify the nodes within Partner and NATO nations that have the requisite systems that enable their participation in a distributed simulation exercise. This information is then used as a fundamental building block of an exercise. The primary potential benefits of P2SN identified are: • It contributes to partners supporting real-world coalitions. • It exposes partners to the Joint National Training Capability and to the NATO Education Training Network standards. • It improves interoperability in the education and training arena, which is needed to have a positive impact on forming coalitions for real-world operations. • Partnership sharing remains within the framework of NATO/PfP. • Building national…
2007 Paper No. 7157
CAE USA, Inc.
Renaissance Sciences Corporation
As one of the tasking orders on the U.S. Army’s SE Core Database Virtual Environment Development program, the Common Virtual Components (CVCs) were envisioned as extensions to the database storage and production facilities of the program. As extensions, CVCs will provide added functionality as models that are both easy to use and integrate and pre-validated. The Common Sensor Model (CSM) CVC has created a new software module that fits this mold directly, as it provides proven yet modular sensor effects simulation for virtually any image generator (IG) built on an OpenGL 2.0 platform. CSM was designed to be a drop-in module that combines the power of modern commercial-off-the-shelf (COTS) graphics processing unit (GPU) architectures with best-of-breed government-off-the-shelf (GOTS) sensor modeling approaches pioneered under the Night Vision and Electronic Sensors Directorate (NVESD) Night Vision Image Generator (NVIG), Air Force Research Laboratory (AFRL) SensorHost, and AFRL InfraRed Target Scene Simulation (IRTSS) programs. IGs can easily control CSM through a lightweight thread-safe C++ application programming interface (API). Design objectives focused on a modular architecture that would be non-invasive to its host application’s scene rendering yet facilitate future incorporation of additional math models and new sensor types. These design objectives were realized, in large part, by utilizing a floating point frame buffer object (FBO) to cleanly separate the rendering of quantitative radiance scenes from the rendering of sensor effects. This paper will provide an overview of the design and inner workings of the CSM codebase and will conclude with an example integration.
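The two-pass structure the abstract describes (a quantitative radiance scene rendered into a floating-point buffer, then a separate sensor-effects pass) can be illustrated with a minimal NumPy sketch. This is not the CSM code or its OpenGL/FBO implementation; the gain, offset, and noise parameters below are illustrative assumptions.

```python
import numpy as np

def render_radiance(width, height):
    """Stage 1 stand-in: produce a floating-point radiance image.
    (In the real CSM this is the scene drawn into a float FBO.)"""
    rng = np.random.default_rng(0)
    return rng.uniform(0.0, 10.0, size=(height, width)).astype(np.float32)

def apply_sensor_effects(radiance, gain=0.1, offset=0.0, noise_sigma=0.01):
    """Stage 2 stand-in: map radiance to sensor output with gain/offset
    and additive noise, then clamp to a displayable range."""
    rng = np.random.default_rng(1)
    signal = gain * radiance + offset
    signal = signal + rng.normal(0.0, noise_sigma, size=signal.shape)
    return np.clip(signal, 0.0, 1.0)

# The radiance buffer stays untouched; effects are applied in a second pass.
radiance = render_radiance(64, 64)
frame = apply_sensor_effects(radiance)
```

Keeping the radiance data separate from the effects pass, as sketched here, is what lets new sensor math models be swapped in without touching the host IG's scene rendering.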
2007 Paper No. 7258
Stottler Henke Associates, Inc.
San Mateo, CA
To create the most effective possible simulations, domain experts must be able to author, monitor, and modify the behavior of simulated agents. Current computational models of autonomous agent behavior are not adequate in this regard. Simple hard-coded models still predominate in many areas, while the most capable and realistic behavior modeling architectures – such as SOAR and ACT-R – are also generally the most difficult to work with, requiring trained programmers to develop and update behavior models. We contend that to enable domain experts without programming expertise to author sophisticated agent behaviors, there are two main challenges that must be addressed: condition authoring and behavior analysis. Complex conditions – such as the preconditions for a step in a plan – are a necessary part of almost any behavior model, but specifying these conditions is not easy. Text-based authoring is an efficient way to enter the information, but the required syntax can be overwhelming to the non-programmer. Visual authoring methods, by contrast, are better able to guide non-programmers through the authoring process but tend to be much more time-consuming and laborious. The second major challenge is enabling non-programmers to analyze the runtime behavior of the models they create. Behavior models of any significant complexity require multiple “test and fix” iterations to uncover authoring mistakes. Modeling tools must therefore provide data visualizations that permit the non-programmer to see both global structure and specific details in the large volume of data generated by test runs of the behavior model. In addition, authoring tools must easily allow the creation of unit-test-like scenarios. We have spent the last three years developing an adversary behavior modeling tool for the Air Force, during which time we have attempted to address both of these challenges. We will present lessons learned and suggested best practices as well as areas for future work.
2007 Paper No. 7163
Rockwell Collins, Inc.
Salt Lake City, UT
Advances in display technology have provided a wide selection of display devices with an equal variation in display cost and performance. Large field-of-view display systems typically incorporate multiple display devices. Fast jet trainers, for example, might use ten or more projectors. Selecting the best projector and designing the system configuration while meeting end user requirements requires the ability to predict system performance. The system configuration includes eye point location, screen type and location, projector locations, and lens characteristics. The most important performance characteristics are field-of-view (FOV), resolution, brightness, and contrast. These all interact such that a change which improves one parameter almost always reduces performance elsewhere. This makes display system design an iterative process. Therefore, it is imperative that the system designer have tools which accurately and rapidly predict final system performance. Projector manufacturers and system integrators have developed tools for this purpose. Part one of this paper discusses the mathematics used to predict FOV, resolution, brightness, and contrast. FOV can be determined from system geometry and lens characteristics using simple vector analysis. Resolution is determined from FOV, pixel format, and system MTF. Brightness depends on screen coverage, screen gain, and projector light output. In the past contrast was predicted based on measurements of previous similar systems because the mathematical models are computationally intensive. The power of today’s PCs makes it possible to predict contrast, but the inability to model all aspects of the final system limits accuracy. Part two provides a survey of tools used by projector manufacturers and systems integrators, including the tools used at Rockwell Collins. Typically these tools do more than predict the parameters discussed above, and extra features will be discussed.
Finally, some examples of how these tools are used are given.
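As a rough illustration of the kind of calculations part one describes, the sketch below derives FOV from system geometry, average angular resolution from FOV and pixel format, and screen brightness from projector light output, screen gain, and screen area. The formulas are deliberately simplified assumptions (flat screen viewed on-axis, no lens or blending losses, MTF ignored), not the prediction tools used at Rockwell Collins.

```python
import math

def channel_fov_deg(screen_width_m, screen_height_m, eye_distance_m):
    """Horizontal and vertical FOV of a flat screen from the design eye point."""
    h = 2 * math.degrees(math.atan(screen_width_m / (2 * eye_distance_m)))
    v = 2 * math.degrees(math.atan(screen_height_m / (2 * eye_distance_m)))
    return h, v

def resolution_arcmin_per_pixel(fov_deg, pixels):
    """Average angular resolution; a full analysis also folds in system MTF."""
    return fov_deg * 60.0 / pixels

def brightness_ft_lamberts(projector_lumens, screen_gain, screen_area_ft2):
    """Luminance of a uniformly illuminated gain screen (losses ignored)."""
    return projector_lumens * screen_gain / screen_area_ft2

# Hypothetical channel: 3 m x 2.25 m screen at 3 m, 1920-pixel-wide projector.
h, v = channel_fov_deg(3.0, 2.25, 3.0)
res = resolution_arcmin_per_pixel(h, 1920)
lum = brightness_ft_lamberts(2500, 1.0, 72.0)
```

Even this toy version shows the interaction the abstract notes: widening FOV (larger screen or shorter eye distance) directly coarsens arcminutes per pixel and, for a fixed-lumen projector, spreads light over more area and dims the image.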
2007 Paper No. 7187
The Boeing Company
St. Louis, MO
Because scripting languages provide great flexibility, programmers have begun to use them more frequently within software programs. In the context of training systems, the ability to tailor the software to the needs of the user rather than relying on a static implementation allows for creation of software that facilitates a very agile training curriculum that is easily adaptable to meet the needs of students. As these scripting languages are used more frequently in time-critical applications, such as real-time training devices, it is important to assess their effects on the overall speed and performance of the software. In this paper, I will highlight areas in which scripting languages can assist in providing software that easily adapts to a dynamically changing training environment. I will also discuss strategies for embedding these scripting languages while avoiding negative impacts on real-time performance. Finally, I will analyze and report on the performance of these scripting languages in existing training environments.
2007 Paper No. 7403
ASC Capabilities Integration Directorate
Air Force Institute of Technology
ARFL Collaborative Simulation Technology
The field of distributed virtual simulation has typically been associated with training human operators. While training is still a principal design goal, large-scale distributed virtual simulations are increasingly being used to analyze assets within the simulation itself. In other words, the trend is to use distributed virtual simulations for the purpose of solving more analytic simulation problems. This paradigm shift requires more formal methods to ensure that requirements from both human participants and analytic models are being satisfied.
Considerable research has been done to capture human interaction requirements which determine the virtual environment that needs to be created, but little research has been done to characterize distributed virtual simulations in general, especially when analytic model requirements need to be considered. This paper will present a framework to characterize distributed virtual simulations in terms of a temporal data consistency model so that the performance and scalability of system designs can be estimated. It also presents initial performance and scalability results for a DIS-based simulation system.
2007 Paper No. 7235
Science Applications International Corporation
With the advent of the OneSAF Objective System (OOS) and its model composability, the opportunity again presents itself to create models in OOS – and in many other object-oriented simulation applications – that correctly observe object-oriented programming principles and at the same time offer utility in cross-domain applications. Typically, models are written to support a specific application domain’s use. For example, a model of a Bradley Fighting Vehicle (BFV) might have three variants: one for training (low fidelity, Lanchestrian engagement adjudication), one for analysis (medium fidelity, deterministic engagement adjudication), and one for R&D (engineering-level fidelity, physics-based engagement adjudication). Combining the concepts of code reuse through inheritance and polymorphism (implemented as “composability” in OOS) with the other capabilities of object-oriented software development, it is possible to create a single model of a BFV that can operate in at least two of those application domains (training and analysis), and potentially all three, without recoding for a specific application. This cross-domain model would have public attributes (referred to as a Simulation Object Model in the parlance of IEEE 1516 High Level Architecture standards) that could be selectively accessed to support the required use. This paper discusses the object-oriented software technology that enables this approach, provides specific examples of code that represent the approach, and presents the functional trade-offs that this approach entails.
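The cross-domain pattern described above can be sketched in object-oriented terms: a single vehicle model delegates engagement adjudication to an interchangeable component, so fidelity is selected by composition rather than by recoding. The sketch below is a hypothetical Python illustration; the class and method names are invented and are not OOS code.

```python
from abc import ABC, abstractmethod

class EngagementModel(ABC):
    """Interchangeable adjudication component; one per application domain."""
    @abstractmethod
    def adjudicate(self, shooter, target):
        ...

class LanchestrianModel(EngagementModel):
    """Training domain: low-fidelity attrition-rate adjudication."""
    def adjudicate(self, shooter, target):
        return "attrition-rate outcome"

class DeterministicModel(EngagementModel):
    """Analysis domain: medium-fidelity deterministic adjudication."""
    def adjudicate(self, shooter, target):
        return "table-lookup outcome"

class BradleyFightingVehicle:
    """One BFV model; the composed-in engagement model sets the fidelity."""
    def __init__(self, engagement_model: EngagementModel):
        self.engagement_model = engagement_model
    def engage(self, target):
        return self.engagement_model.adjudicate(self, target)

# Same vehicle class serves two domains without recoding.
trainer_bfv = BradleyFightingVehicle(LanchestrianModel())
analysis_bfv = BradleyFightingVehicle(DeterministicModel())
```

Under this composition, a physics-based R&D adjudicator would simply be a third `EngagementModel` subclass, which is the essence of the reuse-through-polymorphism argument the paper makes.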
2007 Paper No. 7243
Dignitas Technologies, LLC
The Synthetic Environment (SE) Core program is developing a virtual simulation architecture and common virtual components to improve reuse, interoperability, and efficiency across virtual simulations. As these long-term objectives are being worked, SE Core is addressing the immediate integration of a common semi-automated forces (SAF) system, OneSAF, into two pre-existing virtual programs: Close Combat Tactical Trainer (CCTT) and Aviation Combined Arms Tactical Trainer (AVCATT). This paper discusses the most complex aspect of OneSAF integration into CCTT and AVCATT, namely replacement of current terrain databases and terrain services with OneSAF's Environment Runtime Component (ERC). ERC integration will allow CCTT, AVCATT, and OneSAF to share a common terrain format, in contrast to the three differing formats used currently. Because CCTT and AVCATT use their terrain databases across components, the integration extends to manned simulators and other system components. The use of common software will allow future improvements to be shared across programs, while providing a springboard for extensions in CCTT and AVCATT functionality. Reuse of common software is often difficult and this task is further complicated by the fact that the reuse crosses domains: OneSAF's ERC is constructive, while the selected early adopters are both virtual. Challenges to be discussed in this paper include co-development on a common product, performance, database format and representation issues, specialized functionality, and resolving fundamental differences in interface styles.
2007 Paper No. 7304
Science Applications International Corporation
Training and Doctrine Command (TRADOC) is executing its plan to replace its primary entity driver in the Battle Lab Collaborative Simulation Environment (BLCSE). Replacing the existing multipurpose OneSAF Testbed Baseline (OTB) functionalities with OneSAF Objective System (OOS) will transition Army experimentation in the Advanced Concepts and Requirements domain to a fully capable environment for the study and testing of Future Combat Systems (FCS) capabilities. Because BLCSE maintains an aggressive analytical experimentation schedule, the transition from OTB to OOS must be completed in a short timeframe while preventing loss of functionality for remaining BLCSE federate applications. This paper discusses the technical issues associated with BLCSE’s SAF replacement process, ranging from entity driver replacement to simulation message protocol adaptation. The paper specifically describes near-term activities associated with identification and resolution of interoperability issues and functionality gaps within a large-scale, highly-distributed simulation environment. In addition, the paper discusses potential enhancements to the BLCSE environment made possible by the integration of OOS, including behavior and modeling flexibility, varying entity fidelity and the introduction of OOS-based servers and tools.
2007 Paper No. 7307
Concurrent Technologies Corporation
We report on the development of an agent capability for operational decision-making within a real-time simulation world. A multiple agent system was developed to extend the native behaviors of entities (force units, vehicles, etc.) in tactical simulations. The system endows these entities with intelligent behavior capabilities allowing them to adapt to unexpected scenario situations. The agent system is designed to integrate tightly with the Semi-Automated Forces (SAF) simulators used in live-virtual-constructive simulation environments by DOD and others. Large-scale simulations often require human operators to direct or fill in the ongoing behavior of force units or other entities not played by trainees or others in the scenario. Force Behavior Agents (FBA) eliminates this staffing requirement, achieves realistic conflict scenarios and, at the same time, simplifies the specification of complex mission scenarios rich in force interaction and variability. In contrast to federate-level interaction in High Level Architecture (HLA) communication, FBA is designed to integrate directly with simulators at the fine-grained level of native task frame stacks and simulation state databases. Agent interaction with the simulator’s state machines affords the means to adjust unit behavior, including disaggregation, transparently without disrupting normal simulator operations. Selection of alternative behavior tasks during runtime is governed by agents using situation look-ahead trials based strictly on the force unit’s qualified sensor capabilities. These look-ahead trials are like sketchy simulations run by the individual agents to find the best alternative courses of action, much as a human commander would survey and compare the tactical options available to his unit. Ontologies define the tactical relations and doctrinal constraints on tasks, and a commercial agent platform provides the decision-making environment.
An early form of the FBA decision maker and its interface with Joint Semi-Automated Forces (JSAF) simulators was demonstrated to the Joint Forces Command (JFCOM).
2007 Paper No. 7018
DSTO, Australian Department of Defence
Canberra, ACT, Australia
Melbourne, Victoria, Australia
The Royal Australian Air Force (RAAF) Simulation Roadmap (2007–2017) is being developed to identify specific opportunities for simulation and readiness management. The Australian Defence Force (ADF) has defined a vision that “Defence exploits simulation to develop, train for, prepare for, and test military options for Government wherever it can enhance capability, save resources, or reduce risk”. The ultimate objective of this RAAF Simulation Roadmap is to produce and support a Distributed Simulation, Training and Experimentation, Synthetic Range Environment that implements this ADF vision. The RAAF Simulation Roadmap describes the main concepts and technologies to be used in such a RAAF synthetic range system and recommends a program of research over the period 2007 to 2017 to develop such a system. This paper presents an overview of some of the research carried out so far, upon which the RAAF Simulation Roadmap is based, including: • The concept of the synthetic range, currently being developed, whereby ADF real-world operational military platforms, training and experimentation simulators, and/or simulation systems can seamlessly interoperate with each other; • Which distributed simulation (e.g. DIS, HLA, or TENA), radio/intercom communications, and tactical data link protocols, technologies, gateways, and standards need to be adopted and why. Interoperability between RAAF systems, other ADF service systems, and coalition partner systems has also been taken into consideration; • The real-world operational platform and simulation architectures that enable such synthetic range systems to seamlessly interoperate with each other; and • Some of the innovations and lessons learned so far in the development of this interoperable RAAF/ADF training and experimentation synthetic range environment.
2007 Paper No. 7220
U.S. Army PEO STRI
Live, Virtual, Constructive (LVC) interoperability can be defined as the ability for assets, models, and effects from one training environment to be seen by, affect, and be affected by the rest of the training environment. LVC interoperability has been implemented in a number of different ways over a number of years; most approaches integrate LVC assets through defined protocols, various gateways or translators, and a set of message collection tools. To a much lesser extent, some implementation approaches also develop a common object model and middleware, and use a set of systems engineering and business practices that drive a particular LVC solution. The U.S. Army Program Executive Office (PEO) Simulation, Training and Instrumentation (STRI) is taking those basic principles and practices and applying them to specific, relatively new Live, Virtual, and Constructive simulation product lines, attempting to influence their design early in the development cycle by exploring options that could yield a more robust, systematic LVC interoperability solution set. This paper provides an overview of several LVC assets within the PEO STRI product lines and their respective Live, Virtual, and Constructive domain common components, and how they are being integrated to address current and future LVC training needs of the Army and DOD. In particular, the paper focuses on the Army “Live” training product line and describes how interfaces, standards, and training methodologies are being developed to support specific LVC use cases required by the “Live” training community. The paper also provides lessons learned, challenges encountered, and a recommended way ahead from a “Live” perspective.
2007 Paper No. 7261
MWTB, Ft Knox, KY
The Counter Insurgency (COIN) Experiment was performed in March 2007 using a distributed network. It focused on simulating urban operations in Central Asia in 2015. A major goal of the experiment was to demonstrate the use of a complex modeling and simulation federation to train and evaluate doctrine for a counterinsurgency environment. Participating federates included OFOTB, FireSim, JSAF, CultureSim, EADSim, CMS2, Universal Controller, ACRT, ACRT-DR, JNEM, ISM, SAServer, MC2, CERDEC CES, AOIServer, EffectsServer, Reporter, DataLogger, and SEAMS. This was an entity-level distributed simulation event that included sites at Ft Knox, Ft Sill, Ft Bliss, and Huntsville, using the DIS and HLA protocols. Approximate entity counts included 1,000 US vehicles and soldiers, 1,000 local police and army, 1,200 insurgents, and 20,000 civilians from various population groups. Several new and enhanced models contributed to the richness of the COIN environment. A Force model was developed that allowed each station to control its rules of engagement, crucial in a situation where who counted as the enemy depended on who and where you were. A model of uniformed versus plain-clothes entities was added, since insurgents don't generally show themselves as such. JNEM/ISM provided real-time feedback on the mood of the various civilian population groups. A new model of IEDs was developed that simulated several trigger types, decoys, and countermeasures. Suppressive effects were added, including non-lethal rounds. The area-of-interest model was improved to allow good simulation performance in a dense urban environment. The terrain database had 10,000 fully modeled multi-elevation buildings along with 650,000 volume buildings.