Supported Simulation Technology
We intentionally build on proven, industry-standard technology—off-the-shelf game engines and XR frameworks that are widely adopted in the game and simulation communities—rather than proprietary, self-rolled stacks. That approach reduces long-term risk, speeds development, and supports “author once, deploy anywhere” delivery across platforms such as Windows, iOS, Android, WebGL, and visionOS.
When training demands more than a standalone app, we integrate the pieces that make simulations truly job-ready: real-world OEM controls and peripherals, networked multiuser collaboration, and performance tracking that supports coaching and continuous improvement. And as AI capabilities mature, we’re bringing voice- and knowledge-enabled guidance into simulation experiences where it meaningfully improves outcomes.
At the center of this delivery model is ForgeSIM™, our modular framework built to standardize the core patterns behind production simulation systems (interaction, flow control, and scaling features), so each new simulator starts from a battle-tested foundation instead of reinventing the basics.
ForgeSIM
20+ Years Experience: More than two decades of hands-on experience developing high-quality simulation-based training solutions for enterprise and industry.
8 Core Modules: Eight modular framework components designed to work together seamlessly, giving each project the flexibility to use only the systems it needs.
600+ Deployments: Proven through more than 600 real-world simulator deployments, demonstrating the scalability, reliability, and practical value of the ForgeSIM approach.
0 License Fees: ForgeSIM is built to deliver powerful simulation capabilities without ongoing framework licensing costs, helping reduce total cost of ownership for customers.
– Dynamic Text: Dynamic text animations for engaging UI and instructional content
– State Machine: Flexible state management for controlling simulation flow and behavior
– Scene Explorer: Enhanced scene navigation and hierarchy management within Unity
– Localization: Multi-language support with Google Sheets integration for translation management
– CAD Import: Transform static engineering models into interactive simulation-ready assets
– Behavior Matching: Mirror real-world motion, physics, and mechanical constraints
– Equipment Modeling: Faithful reproduction of heavy equipment, medical devices, and industrial machinery
– Cross-Project Reuse: Standardized pipeline for consistent CAD-to-interactive conversion
– CAN-BUS Integration: Connect physical joysticks, levers, and switches via standard protocols
– Custom Hardware Driver: Inspector UI for integrating non-standard peripherals with test controls
– Multi-Device Input: Unified input mapping across keyboard, gamepad, XR controller, and custom hardware
– Field-Ready: Deployed with Vermeer drill simulators across 600+ global dealer locations
– SCORM: Compliance layer for completion tracking, pass/fail, and LMS integration
– LTI: Learning Tools Interoperability for seamless platform integration and roster management
– xAPI: Rich behavioral telemetry capturing every learner action for data-driven improvement
– Training Analytics: Dashboards tracking progress, knowledge gaps, and skill acquisition velocity
– Real-Time Sync: Photon-powered multiplayer for collaborative training scenarios
– Instructor Mode: Remote observation, assessment, and guided instruction capabilities
– Voice & Text Chat: Integrated Vivox communication during training sessions
– Cross-Platform: Unity Relay and Lobby services for seamless multi-device connections
– Guidance System: Adaptive sequences with failure handling and contextual feedback
– Lesson Data: Structured storage for steps, objectives, and assessment criteria
– Lesson Localization: Specialized translation support for instructional content
– Guidance Observers: Real-time monitoring of user interactions and dynamic difficulty adjustment
– MRTK Integration: Seamless hand tracking, spatial mapping, and gesture recognition
– Spatial Interactions: Natural hand gestures including air tap, hand ray, and direct manipulation
– HoloLens Support: Optimized deployment on enterprise mixed reality hardware
– OpenXR Support: Cross-device compatibility through unified standards
– In-Game Developer Console: Real-time debugging, commands, and variable inspection during gameplay
– Inspector Notes: Contextual documentation directly on Unity components for team collaboration
– Component Explorer: Tree view of all scene components with search, grouping, and status bar
– PreBuild Hooks: Platform-specific build automation for Pico, Quest, and Windows targets
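The State Machine module listed above handles the flow control that nearly every simulator needs. The pattern it names can be sketched in a few lines; the class and state names below are purely illustrative, not ForgeSIM’s actual API: a table of legal transitions drives a lesson from briefing through assessment, with a failure path returning the trainee to practice.

```python
# Minimal state-machine sketch of simulation flow control.
# All names here are illustrative; ForgeSIM's real API differs.

class StateMachine:
    def __init__(self, initial, transitions):
        self.state = initial
        # transitions maps (current_state, event) -> next_state
        self.transitions = transitions
        self.history = [initial]

    def fire(self, event):
        key = (self.state, event)
        if key not in self.transitions:
            raise ValueError(f"illegal event {event!r} in state {self.state!r}")
        self.state = self.transitions[key]
        self.history.append(self.state)
        return self.state

# A tiny lesson flow: briefing -> practice -> assessment -> complete,
# with a failed assessment sending the trainee back to practice.
sim = StateMachine("briefing", {
    ("briefing", "start"): "practice",
    ("practice", "submit"): "assessment",
    ("assessment", "pass"): "complete",
    ("assessment", "fail"): "practice",
})

sim.fire("start")
sim.fire("submit")
sim.fire("fail")      # failed assessment returns to practice
sim.fire("submit")
final = sim.fire("pass")
```

Centralizing legal transitions in one table keeps lesson flow auditable: illegal jumps raise immediately, and the recorded history doubles as a telemetry trail for analytics.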
Game Engines

Unity is a cross-platform game engine developed by Unity Technologies, used primarily to build video games and simulations for computers, consoles, and mobile devices. ForgeFX relies heavily on the Unity engine to produce simulation-based training products for our clients. Our work has been featured online in Unity’s Made with Unity Showcase, exhibited at the Unity AR/VR Vision Summit, and products we’ve developed for our clients have appeared in the Unity3D Showcase.

Unreal Engine is a powerful cross-platform game engine developed by Epic Games, widely used to create high-fidelity video games, real-time simulations, and interactive 3D applications across computers, consoles, and immersive devices.
ForgeFX leverages the Unreal Engine to build high-fidelity, simulation-based training products for our clients.
Platforms
By leveraging game engines to develop and deploy our training simulators, ForgeFX is able to support dozens of hardware and operating system platforms. Embracing the ‘author once, deploy anywhere’ strategy, ForgeFX is able to guarantee that our clients’ software will run on any number of current and popular computer platforms, and will continue to run on future platforms as well.

Windows is Microsoft’s flagship operating system, and for many years was the de facto standard for all home and business computers. Microsoft Windows is the most popular end-user operating system, and all of our training simulators run on Microsoft Windows by default, as that’s the platform we do the majority of our development on.
Technically, Windows is a family of graphical operating systems comprising several product lines, each catering to a particular sector of the computing industry, and most commonly associated with IBM PC compatible architecture.

iOS is a mobile operating system developed by Apple to run exclusively on its own hardware, like iPhones and iPads. The iOS user interface and input system is based upon direct manipulation, using multi-touch gestures.
Second only to Android, iOS is among the most widely used mobile operating systems in the world. ForgeFX has developed a number of iOS-based training simulators, including the JLG Equipment Simulator, developed for JLG Industries.

Android is a mobile operating system developed by Google, designed for touchscreen mobile devices such as smartphones and tablets. Android’s user interface is mainly based on direct manipulation, using touch gestures that loosely correspond to real-world actions, such as swiping, tapping and pinching, to manipulate on-screen objects, along with a virtual keyboard for text input.
In addition to touchscreen devices, Google developed Android TV for televisions, Android Auto for cars, and Wear OS (formerly Android Wear) for smartwatches, each with a specialized user interface. These platforms extend the Android ecosystem beyond mobile devices, enabling developers to build versatile applications that operate across a wide range of hardware. Android’s open-source nature and widespread adoption have made it the most popular mobile operating system globally, fostering a vast developer community and app marketplace.

WebGL (Web Graphics Library) is a JavaScript-based API for rendering 3D graphics within any compatible web browser without the use of plug-ins. WebGL is fully integrated with other web standards, allowing GPU-accelerated physics and image processing. WebGL programs consist of control code written in JavaScript and shader code written in the OpenGL Shading Language (GLSL), a language with C-like syntax, which is executed on the computer’s graphics processing unit (GPU).

visionOS is Apple’s operating system that powers the Apple Vision Pro headset, and the first operating system built from the ground up for spatial computing. visionOS makes use of the Vision Pro’s wide array of cameras to continuously blend the virtual and real worlds, delivering a stable picture with floating UI elements the user can interact with. Developers can use visionOS together with familiar tools and technologies to build immersive apps and games for spatial computing.
Apple has partnered with Unity to bring Unity apps to the new Apple Vision Pro. Popular Unity-based games and apps can gain full access to visionOS features such as passthrough, high-resolution rendering, and native gestures. These Unity apps are running natively on Apple Vision Pro and can sit side-by-side, rendered simultaneously, with other visionOS apps.
Virtual Reality

The Meta Quest line of headsets is designed to deliver wireless virtual and mixed reality experiences in an all-in-one standalone form factor. Built on Qualcomm Snapdragon XR processors and powered by Meta Horizon OS, Quest devices combine six degrees of freedom tracking, full-color passthrough, hand tracking, and controller-based interaction in a portable platform that can be used without external sensors or a tethered PC. The current lineup centers on Meta Quest 3 and Meta Quest 3S, both of which support mixed reality features and share Meta’s broader development ecosystem.
For enterprise training, simulation, and immersive learning, the Meta Quest platform offers a practical balance of performance, flexibility, and deployment simplicity. Meta’s developer documentation describes Quest devices as wireless all-in-one mixed reality systems with shared platform features such as boundary setup, passthrough, and hand tracking, while PC VR support remains available through Meta Quest Link and Air Link for applications that require more graphics horsepower. That combination makes the Quest line well suited for scalable XR deployments across training, collaboration, guided workflows, and interactive visualization use cases.
Pico XR headsets are designed to deliver high-quality standalone virtual reality experiences for enterprise training, simulation, visualization, and immersive collaboration. Built on a lightweight, wireless form factor, Pico devices give organizations the flexibility to deploy room-scale and seated training applications without the complexity of tethered PC-based systems. High-resolution displays, inside-out tracking, and ergonomic industrial design make Pico headsets well suited for extended-use training scenarios where comfort, clarity, and ease of deployment matter.
For businesses developing immersive learning and operational training solutions, Pico offers a practical platform for scalable XR deployment. Select Pico devices include enterprise-focused features such as device management support, business-ready security controls, and advanced interaction capabilities including controller-based input, hand tracking, and, on some models, eye tracking. Powered by Qualcomm Snapdragon XR processing and Android-based architecture, Pico headsets provide the performance needed to run interactive 3D training content while supporting the portability and streamlined setup that modern enterprise XR programs require.
The HTC Vive is one of the most popular virtual reality headsets on the market: a head-mounted device that provides virtual reality for the wearer. VR headsets are widely used with computer games, but they are also used in other applications, including simulators and trainers. Developed by HTC and Valve Corporation, the headset uses “room scale” tracking, which allows the user to move around in 3D space much as they do in the real world, and motion-tracked handheld controllers to interact with the environment and the objects within it.
The Vive contains a gyroscope, an accelerometer, and a laser position sensor, which work together to track the position of the user’s head. The HTC Vive, as well as the Oculus Rift, are excellent hardware choices when it comes to simulation technology.

Valve Index is a premium PC-powered virtual reality headset built for high-fidelity immersive experiences. Designed to work with the SteamVR ecosystem, the Index combines high-refresh-rate displays, precision external tracking, off-ear speakers, and ergonomic adjustability to deliver a highly responsive and comfortable VR experience. Its modular hardware approach also allows users to pair the headset with Valve Index Controllers and SteamVR Base Stations for room-scale interaction and accurate motion tracking.
For simulation, training, and advanced visualization applications, Valve Index remains a strong option where PC-connected performance and tracking precision are priorities. The headset runs at 90Hz by default and also supports 80Hz and 120Hz modes, plus an experimental 144Hz mode, helping create smoother motion and improved optical comfort during demanding virtual experiences. Combined with SteamVR compatibility and support for a wide range of PC VR content, Valve Index continues to be a recognized platform for high-end immersive applications.
The Index includes a pair of 1440 x 1600-resolution RGB LCDs which provide for a combined resolution of 2880×1600, a wider angle field of view than its competitors and sharper text. Accompanying the headset are the SteamVR Knuckles handheld controllers, each with over 80 different sensors to monitor hand movement.
Apple Vision Pro is a high-end spatial computing device that blends digital content with the physical world through ultra-high-resolution micro-OLED displays, advanced sensors, and natural input driven by a user’s eyes, hands, and voice. Built on visionOS, Apple’s spatial operating system, Vision Pro is designed to support immersive visualization, guided workflows, collaboration, and next-generation training experiences in a polished, premium hardware ecosystem.
For enterprise use, Apple positions Vision Pro as a platform for immersive training, design review, guided work, productivity, and customer engagement across industries such as manufacturing and healthcare. Its hardware and software stack supports room mapping, hand tracking, and business-focused spatial experiences, while Apple’s enterprise tools and APIs give organizations a path to deploy custom applications and managed workflows at scale.
Samsung Galaxy XR is a standalone mixed reality headset built on the new Android XR platform and designed to blend immersive digital content with the physical world. Featuring dual 4K micro-OLED displays, built-in spatial audio, eye, hand, and voice input, and an ergonomic lightweight design, Galaxy XR is positioned as a premium device for entertainment, productivity, and next-generation spatial computing experiences. Samsung also emphasizes contextual AI through Google Gemini integration, allowing users to interact with apps, content, and their surroundings in a more natural and conversational way.
For enterprise and professional use, Samsung presents Galaxy XR as a platform for immersive collaboration, large virtual workspaces, and business productivity. Its Android XR foundation supports a broad app ecosystem, while features such as hand tracking, depth sensing, and managed business deployment make it relevant for training, visualization, guided workflows, and collaborative work across distributed teams. The standalone design also helps reduce setup friction by eliminating the need for a tethered PC, making Galaxy XR a flexible option for organizations exploring scalable XR deployment.
Augmented Reality
The Microsoft HoloLens is a self-contained, wearable holographic computer that layers digital content on top of reality, providing augmented reality. Featuring voice control, the headset allows users to easily interact with, see, and hear holograms displayed via high-definition lenses and spatial sound technology, contextually in the real world. Mixed, or augmented, reality-based simulations allow users to have shared virtual experiences together in the real world, similar to the way they have real experiences, letting us create and simulate any environment or object for users to collectively experience and interact with.
Apple Vision Pro, the groundbreaking mixed reality headset, offers an unparalleled immersive experience. With ultra-high-resolution displays and real-time Mixed Reality view powered by the R1 chip, it revolutionizes gaming, entertainment, productivity, professional applications, and scenario-based training simulations.
Experience realistic scenarios with Apple Vision Pro’s advanced features. Utilize eye tracking and hand tracking to interact with virtual environments and objects by simply looking and using hand gestures to interact, enhancing the training experience. Spatial Audio creates an immersive soundscape, providing a realistic environment for training simulations.
Versatile for gaming, entertainment, productivity, professional use, and scenario-based training, Apple Vision Pro merges power, innovation, and convenience, transforming technology engagement and training methodologies.
Additional features: dual micro-OLED displays, the Apple M2 chip paired with the R1 coprocessor, an array of cameras and sensors for accurate tracking, built-in microphones and speakers, and roughly two hours of battery life from the external battery pack.

Virtual, augmented, and mixed reality are known collectively as extended reality, or XR. OpenXR is an open, royalty-free API standard developed by the Khronos Group that enables developers to build applications that work across various virtual and augmented reality platforms. The Microsoft HoloLens 2, HTC Vive, and Meta Quest 2 headsets are some of the best-known OpenXR platforms.

Mixed Reality Toolkit (MRTK) is an open-source project used for sharing UI controls and other essential building blocks for the accelerated development of Mixed Reality Experiences in Unity. Driven by Microsoft, MRTK works across a wide variety of platforms, including the Microsoft HoloLens, Windows Mixed Reality headsets, and OpenVR headsets.
Eye & Hand Tracking
Meta Quest hand tracking enables you to use your hands in place of Touch controllers. Inside-out cameras on the headset detect the position and orientation of your hands and fingers, and computer vision algorithms track and analyze the movement in real time, bringing your hands into the VR space to navigate within select applications and websites.
Meta Quest Touch Pro controllers can also be tracked independently of the headset using their three built-in sensors. A 360-degree range of motion, TruTouch haptic feedback, and precision pinch make movement intuitive and precise, and make interacting with 3D objects feel more realistic.
Microsoft’s HoloLens uses hand tracking to interact and work with holograms in an augmented reality environment. Air Tap, Touch, and Hand Ray gestures allow users to reach for, select and reposition AR UI elements, close-up and far away, with pinpoint accuracy.
The Leap Motion controller is a small device that uses two IR cameras and three infrared LEDs to observe a hemispherical area in front of the user. The Leap Motion software synthesizes the 3D position data of the user’s hands so they can be rendered in real time in the virtual world, and the motions and actions of the user’s real hands can be calculated, tracked, and used as input. The Leap Motion controller literally lets you “reach out and swipe, grab, pinch, or punch your way through the digital world”.
The Meta Quest Pro uses eye tracking and Natural Facial Expressions to enhance users’ avatars with lifelike real-time movement and expression. Using ten high-resolution depth sensors, five external and five internal, the Meta Quest Pro analyzes infrared images of your eyes, allowing you to engage with virtual content based on where you’re looking.
Using the Extended Eye Tracking API, Microsoft’s HoloLens 2 provides information about where the user is looking in real time. By tracking individual eye gaze vectors, the device allows users to work with far-away UI elements such as information cards and tooltips, with eye tracking frame rates configurable at 30, 60, or 90 frames per second.
A device or computer equipped with an eye tracker “knows” what a user is looking at, making it possible for users to interact with computers using their eyes.
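The gaze-vector idea described above reduces to simple ray geometry: given an eye origin and a gaze direction, test whether the ray passes within a UI element’s radius. A minimal, vendor-neutral sketch (the function name and tolerance are illustrative, not any headset SDK’s API):

```python
import math

def gaze_hits(origin, direction, target_center, target_radius):
    """Return True if a gaze ray passes within target_radius of target_center.

    origin, direction, and target_center are 3D tuples;
    direction need not be unit length.
    """
    # Normalize the gaze direction.
    mag = math.sqrt(sum(d * d for d in direction))
    d = tuple(c / mag for c in direction)
    # Vector from eye to target.
    to_target = tuple(t - o for t, o in zip(target_center, origin))
    # Project onto the gaze ray; negative means the target is behind the user.
    t = sum(a * b for a, b in zip(to_target, d))
    if t < 0:
        return False
    # Closest point on the ray, and its distance to the target center.
    closest = tuple(o + t * c for o, c in zip(origin, d))
    dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(closest, target_center)))
    return dist <= target_radius

# A tooltip 2 m straight ahead with a 10 cm hit radius:
# looking straight at it hits; looking 45 degrees off misses.
straight_on = gaze_hits((0, 0, 0), (0, 0, 1), (0, 0, 2), 0.1)
off_angle = gaze_hits((0, 0, 0), (1, 0, 1), (0, 0, 2), 0.1)
```

Real eye-tracking SDKs layer filtering and dwell-time logic on top of this, but the core hit test is the same ray-versus-target computation.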
Artificial Intelligence
GPT Models mark a transformative leap in AI, bringing automation and advanced language processing to the forefront of training and simulation. Their recent emergence has made it possible to integrate conversational AI that feels natural and responsive, allowing simulations to move beyond static prompts and pre-set responses. With GPT Models, automated dialogue and decision-making support become dynamic, providing real-time, context-aware interactions. This technology is crucial for creating simulations that mirror real-world scenarios and adapt to user actions, enhancing both realism and engagement.
The automation offered by GPT Models doesn’t just improve interactivity; it streamlines complex processes by allowing AI-driven characters to guide users, answer questions, and provide feedback on demand. By incorporating GPT technology, simulations become more scalable and responsive, reducing the need for manual oversight while ensuring consistent, high-quality user experiences. This evolution in automated, conversational AI is a game-changer for delivering immersive, hands-on training that’s both efficient and impactful, helping to meet the rising demand for intelligent, adaptable training solutions.
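As a concrete illustration of context-aware guidance, the usual pattern is a loop that folds current simulator state into the model’s prompt so answers stay relevant to the step the trainee is on. The sketch below stubs out the model call entirely (`query_model` is a placeholder, not a real provider API, and the canned reply exists only so the example runs):

```python
# Sketch of context-aware guidance inside a simulation loop.
# query_model is a stand-in for whatever hosted LLM SDK a project uses;
# its canned responses below are purely for illustration.

def query_model(prompt):
    # Placeholder: a real implementation would call a hosted GPT model here.
    if "pressure" in prompt.lower():
        return "Check the relief valve before increasing pressure."
    return "Proceed to the next step."

def build_prompt(sim_state, question):
    # Fold the simulator's current context into the prompt so the
    # model's answer is specific to this scenario and step.
    return (
        f"Scenario: {sim_state['scenario']}. "
        f"Current step: {sim_state['step']}. "
        f"Trainee asks: {question}"
    )

state = {"scenario": "hydraulic startup", "step": "set system pressure"}
reply = query_model(build_prompt(state, "Is it safe to raise the pressure now?"))
```

The key design point is `build_prompt`: without the injected scenario and step context, the model can only give generic answers, which is exactly the static-prompt limitation the paragraph above describes.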
AI Automation streamlines business operations by taking on complex, repetitive tasks, freeing teams to focus on impactful, strategic goals. From optimizing workflows to analyzing data in real time, AI enables greater accuracy, scalability, and efficiency across all areas of an organization. By integrating intelligent automation, businesses can respond swiftly to changing demands and achieve consistent, high-quality outcomes.
In training environments, AI-driven automation enriches user experiences, creating adaptive, engaging scenarios that support skill-building without constant manual oversight.
This forward-compatible approach allows ForgeFX to deliver cutting-edge, dynamic voice experiences that continually improve in quality and realism. As AI voice models advance, simulations become even more lifelike and responsive, ensuring users receive the highest-quality guidance tailored to evolving industry standards and communication styles. This adaptability ensures that ForgeFX remains at the forefront of immersive, voice-enabled training, maximizing both engagement and long-term value for clients.
With the forward-compatible nature of hosted AI models, computer vision technology continually improves in accuracy, responsiveness, and adaptability as updates are released. This ensures training environments remain cutting-edge, seamlessly aligning with the latest advancements in visual recognition. AI computer vision brings a new level of realism and interactivity to training, empowering users to build practical skills in settings that closely simulate real-world scenarios.
Artificial Intelligence Subject Matter Experts bring specialized expertise to training environments with features like human-like AI voices, 3D avatars, and access to an extensive knowledge base, including equipment manuals, documentation, and interactive scenario data. These virtual experts can be trained with resources such as historical maintenance logs and industry-specific terminology, allowing them to provide realistic, context-aware guidance. With voice command functionality, users can interact naturally with AI SMEs, who respond seamlessly to technical jargon and task-specific instructions, simulating real-world interactions.
AI SMEs deliver adaptive, on-demand support, guiding users through troubleshooting, operation, and maintenance tasks with precision. Whether offering step-by-step guidance, responding to safety alerts, or assisting with critical procedures, these AI-driven experts create an immersive, hands-on experience that builds confidence and strengthens skills across diverse scenarios.
Machine Learning
Critical in modern enterprise, machine learning is used to predict business operational and customer behavioral patterns, such as what products consumers are most likely to buy and what media they are most likely to watch. Other practical ML applications include self-driving cars, fraud detection, email filtering, speech recognition, malware threat detection, and business process automation.
Practical applications include facial and image recognition, self-driving cars and other autonomous vehicles, robotics, medical image analysis, recommender systems, brain–computer interfaces, natural language processing, and text analytics.
In computer vision, attention models study a scene by focusing intensely on a specific point before expanding focus to the entire scene. Similarly, in neural machine translation (NMT), attention lets the model weight the most relevant source words when producing each word of the translation, rather than compressing the entire sentence into a single fixed representation.
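The attention mechanism described above can be shown in miniature: a query scores every key, the scores are softmaxed into weights, and the output is the values blended by those weights. A dependency-free sketch of single-query scaled dot-product attention (the toy vectors are illustrative):

```python
import math

def softmax(xs):
    # Subtract the max for numerical stability before exponentiating.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(query, keys, values):
    """Scaled dot-product attention for one query over key/value vector lists."""
    scale = math.sqrt(len(query))
    # Score the query against each key, scaled by sqrt(dimension).
    scores = [sum(q * k for q, k in zip(query, key)) / scale for key in keys]
    weights = softmax(scores)
    # Blend the value vectors by their attention weights.
    dim = len(values[0])
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(dim)]

# The query aligns far more strongly with the first key, so the
# output lands close to the first value vector.
out = attention(
    query=[1.0, 0.0],
    keys=[[4.0, 0.0], [0.0, 4.0]],
    values=[[1.0, 0.0], [0.0, 1.0]],
)
```

Because the weights sum to one, the output is always a convex combination of the values; this soft selection is what lets a translation model “look back” at the most relevant source words for each output word.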
Devices
When producing training simulators, you want to reach the widest possible audience in order to train as many people as possible. One of the best ways to ensure this wide distribution is to deploy your application on as many devices and platforms as you can. ForgeFX produces simulators for just about every device, from desktops and laptops, to mobile devices, to wearable AR and VR devices. We author content once and deploy applications on the widest possible array of devices.

Developing simulators that run on desktop computers is our default deployment and end-user target platform. We do all of our application development on these devices, so by default all of our simulators run on desktop machines, specifically Windows, Mac, and Linux.
Desktop computers offer the most horsepower and accessibility for integrating real-world original equipment manufacturer (OEM) controls, as well as additional peripherals like VR/AR devices. Gone are the days of training simulators requiring racks of computers to function. Today’s off-the-shelf desktop computers intended for video game use are more than capable of running our high-quality, graphics-rich simulation-based trainers, delivering highly immersive and engaging virtual experiences.

Second only to desktop computer-based simulators, laptop-based simulations are the most popular way of deploying training simulation software to users. The more portable a training simulation’s hardware platform is, the more likely it will reach a greater number of trainees. Today’s laptop computers are more powerful than ever, with high-performance graphics cards and processors built for video gamers but also well suited to interactive training simulators.
Laptop-based training simulators can be connected to real-world equipment controls and additional peripherals, just like desktop computer-based simulators, but can also be taken into the field or classroom to provide just-in-time training.

When it comes to deploying portable training simulators, tablet-based simulators lead the pack. Tablets provide inexpensive, lightweight, and highly customizable simulation solutions. Since there are no physical buttons to integrate with the application, all functionality can, and must, be simulated virtually by software. This allows tablet-based simulators to easily simulate any piece of equipment and switch seamlessly from one to another, making training easily accessible.
Tablet-based training simulators can provide beginner and intermediate level scenarios to help improve operational skills, as well as controls familiarization where students may practice skills as directed or of their own choosing.

Perhaps no technology has changed our society more than the mobile phone. Today’s mobile phones are nothing short of pocket-sized, internet-connected supercomputers. While the screens may be small and the processing power limited, with billions of these devices in the world capable of downloading and running highly effective simulation-based training content, the smartphone is an excellent platform for deploying training simulators.
Manufacturers like Apple and Google put tremendous effort toward getting their phones into the hands of millions of people every year. By deploying your training simulation software on these popular devices, you guarantee that your content can reach the widest possible audience. Applications deployed on mobile devices are perfect for micro-learning, where users can download a specific procedure or scenario in real time as they need it.

Virtual Reality (VR) and Augmented Reality (AR) devices are the latest simulation technology advances to grace the training world. The past few years have seen the release of consumer off-the-shelf (COTS) VR and AR devices that allow users to become fully immersed in content in a way traditional screen-based content never could. VR-based training simulators place the user in the middle of the virtual world, where they are free to look around, move, and interact with the virtual world much as they interact with the real world. AR devices enable users to augment their view of the real world with interactive digital content, and share this view with others in the same room or on the other side of the world.
Developing Your Project
Regardless of what technology you’re looking to support, we can develop a custom training simulation application for you, to run on any device or platform. We encourage you to contact ForgeFX to discuss how we can create the perfect simulation-based training solution for your organization. We’ll talk with you about your specific requirements, and based on these we’ll work with you to arrive at a detailed proposal to develop and deliver your project. If you need a ballpark price, we can often provide that during initial phone conversations. If you require a more precise estimate and have a detailed project specification in hand, we can typically turn around a firm bid rapidly.