11-20, 09:20–09:25 (Pacific/Auckland), WG403
Development of Core Technologies for a Metaverse-Based Training Engine in a Scientific Training Environment Platform
Joohyuk Park, Kwangin Han, Hyeongi Min, Sanghak Kim, Geunha Kim, Chaeyoun Lee
Sundosoft Co., Ltd.
- Abstract
A metaverse-based training engine is developed to enable high-precision 3D simulations for scientific training environments, enhancing decision-making and spatial analysis through realistic geospatial data and interactive tools.
- Introduction
Traditional 2D geospatial systems impose significant limitations on real-time training, simulation, and decision-making. As the need grows for immersive, spatially rich environments in areas such as disaster response, defense training, and urban planning, metaverse-based platforms built on high-resolution 3D data have become essential.
This paper introduces a core technology platform developed to support metaverse-based scientific training environments. It integrates multi-source 3D spatial data, interactive modeling tools, and web-based visualization interfaces, creating an advanced simulation engine for scenario planning, operational training, and geospatial analysis.
- Objectives
The primary objectives of this project are:
To overcome the limitations of flat 2D mapping by enabling realistic and immersive 3D simulations.
To support high-resolution modeling of terrains, buildings, roads, and operational assets.
To provide web-based access to a 3D simulation environment for use in policy-making, training, and public services.
To integrate spatial data into metaverse engines for enhanced collaboration and planning.
To promote public–private–academic collaboration through scalable, shareable data and open APIs.
- Data Sources and Processing
A variety of spatial data sources were employed to construct the 3D models, including:
LiDAR point cloud data: Captured via drone or aerial platforms to model surface elevation and object geometry.
Orthophotos and aerial imagery: Used for texture generation and spatial referencing.
Vector map data: Used for defining roads, building footprints, and infrastructure.
Custom field survey data: To fill gaps in public datasets and validate terrain accuracy.
These datasets were fused using a hybrid data processing pipeline. The result was a seamless digital terrain model (DTM) and digital surface model (DSM), which served as the foundation for object and unit placement within the metaverse environment.
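The internals of the fusion pipeline are not detailed in this paper; as a minimal illustration of the gridding step, the following TypeScript sketch rasterizes a classified LiDAR point cloud into DSM and DTM elevation grids. The `LidarPoint` type, the `rasterize` helper, and the ASPRS ground classification code are assumptions made for this example, not the project's actual implementation.

```typescript
// Minimal sketch: rasterizing a classified LiDAR point cloud into
// DSM/DTM elevation grids. Types and the classification code are
// illustrative assumptions, not the paper's actual pipeline.

interface LidarPoint {
  x: number;              // easting (m)
  y: number;              // northing (m)
  z: number;              // elevation (m)
  classification: number; // e.g. 2 = ground in the ASPRS LAS spec
}

function rasterize(
  points: LidarPoint[],
  cellSize: number,
  groundOnly: boolean
): Map<string, number> {
  const grid = new Map<string, number>(); // "col,row" -> elevation
  for (const p of points) {
    // A DTM keeps ground-classified returns only; a DSM keeps all returns.
    if (groundOnly && p.classification !== 2) continue;
    const key = `${Math.floor(p.x / cellSize)},${Math.floor(p.y / cellSize)}`;
    const current = grid.get(key);
    // Keep the highest return per cell.
    if (current === undefined || p.z > current) grid.set(key, p.z);
  }
  return grid;
}

// const dsm = rasterize(points, 1.0, false); // surface model (all returns)
// const dtm = rasterize(points, 1.0, true);  // terrain model (ground only)
```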
- System Components
The system comprises five primary components:
4.1 3D Terrain/Object Editing Module
This module allows users to generate and modify terrain and object layouts. Features include:
Topographic surface editing (elevation, slope adjustment)
Road/pathway creation using spline tools
Object positioning and orientation
External model import/export in CSV format (a minimal import sketch follows this list)
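The CSV schema for model import/export is not published in the paper; the following minimal importer assumes a hypothetical column layout (id, longitude, latitude, height, heading) purely for illustration.

```typescript
// Hypothetical CSV layout: id,lon,lat,height,headingDeg
interface PlacedObject {
  id: string;
  lon: number;
  lat: number;
  height: number;
  headingDeg: number;
}

function importPlacements(csv: string): PlacedObject[] {
  return csv
    .trim()
    .split("\n")
    .slice(1) // skip the header row
    .map((line) => {
      const [id, lon, lat, height, headingDeg] = line.split(",");
      return {
        id,
        lon: Number(lon),
        lat: Number(lat),
        height: Number(height),
        headingDeg: Number(headingDeg),
      };
    });
}
```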
4.2 Unit and Scenario Modeling
A central feature of the platform is its ability to simulate operational units (e.g., armored vehicles, troops) and define scenario-based behaviors:
Each unit can be assigned movement paths, states, and display attributes.
Paint schemes and other external features of vehicles can be modified interactively.
Training sequences can be saved and replayed for review or evaluation (see the data-model sketch after this list).
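The paper does not publish the unit data model; a plausible minimal sketch in TypeScript might represent units, states, and time-stamped movement paths as below, with linear interpolation supporting deterministic replay. All type and field names here are assumptions.

```typescript
// Illustrative data model for scenario units; names are assumptions.
type UnitState = "idle" | "moving" | "engaged" | "disabled";

interface Waypoint { lon: number; lat: number; timeSec: number; }

interface SimUnit {
  id: string;
  type: string;         // e.g. "armored_vehicle", "infantry_squad"
  state: UnitState;
  path: Waypoint[];     // time-ordered movement path
  displayColor: string; // interactive display attribute
}

// Linearly interpolate a unit's position at simulation time t, enabling
// deterministic replay of a saved training sequence.
function positionAt(unit: SimUnit, t: number): { lon: number; lat: number } {
  const path = unit.path;
  if (path.length === 0) throw new Error("unit has no path");
  if (t <= path[0].timeSec) return path[0];
  for (let i = 1; i < path.length; i++) {
    const a = path[i - 1], b = path[i];
    if (t <= b.timeSec) {
      const f = (t - a.timeSec) / (b.timeSec - a.timeSec);
      return {
        lon: a.lon + f * (b.lon - a.lon),
        lat: a.lat + f * (b.lat - a.lat),
      };
    }
  }
  return path[path.length - 1]; // past the final waypoint: hold position
}
```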
4.3 Web-Based 3D Viewer and API
A web-based 3D viewer was developed using CesiumJS and WebGL technologies. It allows remote users to:
Visualize terrain and unit deployment in real-time
Interact with editable 3D objects via browser
Access scenario data through RESTful APIs for integration with third-party platforms (a minimal viewer setup is sketched after this list)
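The exact viewer configuration is not given in the paper; a minimal CesiumJS setup along the lines described could look like the following, where the container id, terrain source, and entity styling are placeholders rather than the platform's actual code.

```typescript
import * as Cesium from "cesium";

// Minimal CesiumJS viewer sketch; container id, terrain source, and
// styling are placeholders, not the platform's actual configuration.
// (Cesium World Terrain requires a Cesium ion access token; the
// project's own DTM/DSM terrain service could be substituted.)
const viewer = new Cesium.Viewer("cesiumContainer", {
  terrain: Cesium.Terrain.fromWorldTerrain(),
});

// Place an editable unit marker at an example position (lon, lat, height).
viewer.entities.add({
  id: "unit-001",
  position: Cesium.Cartesian3.fromDegrees(127.0, 37.5, 50.0),
  point: { pixelSize: 12, color: Cesium.Color.ORANGE },
  label: { text: "Unit 1", font: "14px sans-serif" },
});
```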
4.4 Metaverse Operational Tool (CSCI)
The “Metaverse Integrated Operational Tool,” developed as a computer software configuration item (CSCI), serves as the main interface for simulation control:
Displays the operational situation in a real-time 3D environment
Provides an overview of unit lists, terrain zones, and scripted events
Enables data exchange with external systems through API gateways (an illustrative REST sketch follows this list)
Supports planning, monitoring, and post-exercise analysis
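The gateway's API surface is not specified in the paper; as an illustrative sketch only, scenario retrieval and post-exercise export through a REST gateway might look like this, with all endpoint paths and payload fields assumed for the example.

```typescript
// Hypothetical REST exchange through the tool's API gateway.
// All endpoint paths and payload fields are illustrative assumptions.

interface ScenarioSummary { id: string; name: string; unitCount: number; }

// Pull the scenario list for the overview panel.
async function listScenarios(baseUrl: string): Promise<ScenarioSummary[]> {
  const res = await fetch(`${baseUrl}/api/scenarios`);
  if (!res.ok) throw new Error(`Gateway error: ${res.status}`);
  return res.json();
}

// Push a post-exercise analysis record to an external system.
async function exportResult(baseUrl: string, result: object): Promise<void> {
  await fetch(`${baseUrl}/api/exercise-results`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(result),
  });
}
```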
4.5 UI/UX Design
The user interface is designed to balance complexity and usability. Major UI features include:
Top Toolbar: For simulation playback, camera control, and object visibility
Side Panels: To manage lists of operational units and active objects
3D Display Area: Central workspace for real-time spatial interaction
External System UI: Supports access to other spatial or simulation databases
- Application Scenarios
The platform is intended for cross-domain deployment. The following use cases demonstrate its versatility:
Military Training: Realistic simulation of urban combat or tactical missions using real-world terrain and movable units.
Disaster Preparedness: Planning emergency responses for earthquakes, floods, and wildfires with modeled risk zones and evacuation routes.
Urban Planning: Simulation of infrastructure expansion, zoning changes, or traffic flow optimization.
Environmental Education: Interactive 3D modules for terrain analysis, land cover study, and climate change impact visualization.
Each scenario benefits from customizable data layers, editable units, and immersive interaction, enhancing training fidelity and decision-making accuracy.
- Results and Evaluation
Initial testing demonstrated that the platform can handle large-area terrain datasets (>50 km²) and render them interactively within web browsers. The real-time responsiveness and modularity of the simulation engine were validated through internal scenario drills.
User feedback from early adopters in the disaster response and defense sectors highlighted:
Improved situational awareness through 3D visualization
Enhanced engagement and learning during training exercises
Flexibility in configuring and deploying custom scenarios
Strong potential for interoperability with digital twin systems
- Discussion and Challenges
While the platform demonstrates high performance and flexibility, several challenges remain:
Data Standardization: Integrating heterogeneous spatial datasets from multiple agencies requires harmonization and metadata control.
Real-Time Scalability: Future deployments will require support for concurrent users and larger datasets, demanding optimization of rendering pipelines and network infrastructure.
AI Integration: The system would benefit from AI-based decision support, such as automated unit behavior or hazard prediction based on real-time data feeds.
Interoperability: Continued effort is needed to align with standards such as CityGML, 3D Tiles, and OGC WFS/WMS (an example WFS request is sketched below).
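For concreteness, alignment with OGC services can be as simple as issuing a standard WFS 2.0 GetFeature request; the service URL and feature type name below are placeholders.

```typescript
// Example OGC WFS 2.0 GetFeature request returning GeoJSON.
// The service URL and feature type name are placeholders; assumes an
// ES module context (top-level await).
const url =
  "https://example.org/geoserver/ows" +
  "?service=WFS&version=2.0.0&request=GetFeature" +
  "&typeNames=platform:building_footprints" +
  "&outputFormat=application/json";

const footprints = await fetch(url).then((r) => r.json());
console.log(`Fetched ${footprints.features.length} footprints`);
```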
- Conclusion and Future Work
This study introduced a scalable, browser-accessible platform that combines high-precision geospatial data with metaverse-based simulation tools. It enables immersive, flexible training and planning across diverse domains.
Future plans include:
Expansion of supported data formats and 3D model libraries
Integration with live sensor feeds and AI-powered analytics
Deployment in public safety agencies and educational institutions
Full support for VR/AR hardware to enhance immersion
By providing an open, extensible framework, this platform contributes to the growing ecosystem of geospatial metaverse applications, supporting smarter, more informed decision-making in both public and private sectors.