Supported Simulation Technology
Game Engines
Unity is a cross-platform game engine developed by Unity Technologies that is primarily used to develop video games and simulations for computers, consoles and mobile devices.
ForgeFX relies heavily on the Unity game engine to produce simulation-based training products for our clients. Our work has been featured online in Unity's Made with Unity Showcase, exhibited at the Unity AR/VR Vision Summit, and the products we've developed for our clients have been featured in the Unity3D Showcase.
Platforms
By leveraging game engines to develop and deploy our training simulators, ForgeFX is able to support dozens of hardware and operating system platforms. Embracing the 'author once, deploy anywhere' strategy, ForgeFX is able to guarantee that our clients' software will run on any number of current and popular computing platforms, and to ensure that it will continue to run on future platforms as well.
Windows is Microsoft’s flagship operating system, and for many years was the de facto standard for all home and business computers. Microsoft Windows is the most popular end-user operating system, and all of our training simulators run on Microsoft Windows by default, as that’s the platform we do the majority of our development on.
Technically, Windows is a family of graphical operating systems. It comprises several product lines, each catering to a particular sector of the computing industry, and is typically associated with the IBM PC-compatible architecture.
iOS is a mobile operating system developed by Apple to run exclusively on its own hardware, like iPhones and iPads. The iOS user interface and input system is based upon direct manipulation, using multi-touch gestures.
iOS is the second most popular mobile operating system in the world, behind only Android. ForgeFX has developed a number of iOS-based training simulators, including the JLG Equipment Simulator, developed for JLG Industries.
Android is a mobile operating system developed by Google, designed for touchscreen mobile devices such as smartphones and tablets.
Android's user interface is mainly based on direct manipulation, using touch gestures that loosely correspond to real-world actions, such as swiping, tapping, and pinching, to manipulate on-screen objects, along with a virtual keyboard for text input. In addition to touchscreen devices, Google has developed Android TV for televisions, Android Auto for cars, and Wear OS (formerly Android Wear) for wristwatches, each with a specialized user interface.
WebGL (Web Graphics Library) is a JavaScript-based API for rendering 3D graphics within any compatible web browser without the use of plug-ins. WebGL is fully integrated with the browser's other web standards, allowing for graphics processing unit (GPU) accelerated physics and image processing. WebGL programs consist of control code written in JavaScript and shader code written in the OpenGL Shading Language (GLSL), a language based on C, which is executed on the GPU.
visionOS is the name of Apple's operating system that powers the company's Vision Pro augmented reality headset. It is the first operating system built from the ground up for spatial computing. visionOS makes use of the Vision Pro's wide array of cameras to continuously blend the virtual and real worlds, delivering a stable picture with floating UI elements that users can interact with. Developers can use visionOS together with familiar tools and technologies to build immersive apps and games for spatial computing.
Apple has partnered with Unity to bring Unity apps to the Apple Vision Pro. Popular Unity-based games and apps gain full access to visionOS features such as passthrough, high-resolution rendering, and native gestures. These Unity apps run natively on Apple Vision Pro and can sit side by side with other visionOS apps, rendered simultaneously.
Virtual Reality
The Meta Quest Pro is a virtual and mixed-reality headset designed by Reality Labs with developers and business consumers in mind. An open periphery, with an optional "black-out" accessory, lets you see the real world while interacting with virtual 3D objects. Internal headset and Touch controller sensor arrays provide advanced hand and eye tracking for a realistic range of motion and avatar facial expression. Microsoft integration allows users to stream Windows to their headsets via a cloud desktop, use Microsoft productivity applications away from their monitors, and join Teams meetings with video or as an avatar from a Horizon Workroom environment.

Powered by the Qualcomm Snapdragon XR2+ processor and an Android-based operating system, the Meta Quest Pro is optimized to run at 50% less power and with better thermal dissipation than its predecessor, the Quest 2. Thin pancake optics, high-resolution outward-facing cameras, and a quantum dot LCD display give users a sharp, full-color mixed reality experience in a sleek, ergonomic design.
The HTC Vive is one of the most popular virtual reality headsets on the market today: a head-mounted device that provides virtual reality for the wearer. VR headsets are widely used with computer games, but they are also used in other applications, including simulators and trainers. Developed by HTC and the Valve Corporation, the headset uses a technology called "room-scale" tracking, which allows the user to move around in 3D space, much as they do in the real world, and use motion-tracked handheld controllers to interact with the environment and objects within it.
The Vive contains a gyroscope, an accelerometer, and a laser position sensor, which work together to track the position of your head. Both the HTC Vive and the Oculus Rift are excellent hardware choices when it comes to simulation technology.
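To illustrate how such sensors can complement one another, the sketch below shows a simple complementary filter, a common sensor fusion technique, in Python. This is a conceptual illustration only, not the Vive's actual tracking algorithm: the function name, sample rate, and blend factor are all assumptions.

```python
def complementary_filter(angle, gyro_rate, reference_angle, dt, alpha=0.98):
    """Blend a fast-but-drifting gyro integration with a slower,
    absolute reference reading (e.g. from a positional sensor)."""
    gyro_estimate = angle + gyro_rate * dt          # integrate angular velocity
    return alpha * gyro_estimate + (1 - alpha) * reference_angle

# Track head yaw (degrees) over three 10 ms frames; the gyro reports
# 10 deg/s while the absolute reference updates more conservatively.
angle = 0.0
for gyro_rate, reference in [(10.0, 0.05), (10.0, 0.12), (10.0, 0.2)]:
    angle = complementary_filter(angle, gyro_rate, reference, dt=0.01)
```

The high-frequency gyro term keeps the estimate responsive frame to frame, while the small weight on the absolute reference continually corrects the gyro's accumulated drift.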
The Valve Index is a virtual reality headset developed by Valve Corporation, an American video game developer, publisher and digital distribution company. Valve is the developer of the software distribution platform Steam and popular titles like Half-Life and Counter-Strike.
The Index includes a pair of 1440 × 1600-resolution RGB LCDs, which provide a combined resolution of 2880 × 1600, a wider field of view than its competitors, and sharper text. Accompanying the headset are the SteamVR Knuckles handheld controllers, each with over 80 different sensors to monitor hand movement.
Augmented Reality
Apple Vision Pro, the groundbreaking mixed reality headset, offers an unparalleled immersive experience. With ultra-high-resolution displays and real-time Mixed Reality view powered by the R1 chip, it revolutionizes gaming, entertainment, productivity, professional applications, and scenario-based training simulations.
Experience realistic scenarios with Apple Vision Pro's advanced features. Eye tracking and hand tracking let users interact with virtual environments and objects simply by looking and gesturing, enhancing the training experience. Spatial Audio creates an immersive soundscape, providing a realistic environment for training simulations.
Versatile for gaming, entertainment, productivity, professional use, and scenario-based training, Apple Vision Pro merges power, innovation, and convenience, transforming technology engagement and training methodologies.
Additional features: dual micro-OLED displays, Apple M2 chip, 10 cameras for accurate tracking, built-in microphone, speaker system, and up to 3 hours of battery life.
Virtual, augmented, and mixed reality are known collectively as extended reality, or XR. OpenXR is an open, royalty-free API standard developed by the Khronos Group to enable developers to build applications that work across various virtual and augmented reality platforms. The Microsoft HoloLens 2, HTC Vive, and Meta Quest 2 headsets are some of the best-known OpenXR platforms.
Mixed Reality Toolkit (MRTK) is an open-source project used for sharing UI controls and other essential building blocks for the accelerated development of Mixed Reality Experiences in Unity. Driven by Microsoft, MRTK works across a wide variety of platforms, including the Microsoft HoloLens, Windows Mixed Reality headsets, and OpenVR headsets.
Eye & Hand Tracking
Meta Quest Pro hand tracking enables you to use your hands in place of Touch controllers. Inside-out cameras track the headset's motion relative to the environment while detecting the position and orientation of your hands and fingers. Computer vision algorithms then track and analyze the movement in real time, bringing your hands into the VR space to navigate within select applications and websites.
Meta Quest hand tracking may also be done independently of the headset using the three built-in sensors on the Quest controllers. A 360-degree range of motion, TruTouch haptic feedback, and precision pinch make movement intuitive and precise while feeling more realistic when interacting with 3D objects.
Microsoft's HoloLens uses hand tracking to interact and work with holograms in an augmented reality environment. Air Tap, Touch, and Hand Ray gestures allow users to reach for, select, and reposition AR UI elements, both close up and far away, with pinpoint accuracy.
The Leap Motion controller is a small device that uses two infrared cameras and three infrared LEDs to observe a hemispherical area in front of the user. The Leap Motion software synthesizes the 3D position data of the user's hands so that they can be rendered in real time in the virtual world, and the motions and actions of the user's real hands can be calculated, tracked, and used as user input. The Leap Motion controller literally lets you "reach out and swipe, grab, pinch, or punch your way through the digital world".
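As a rough illustration of how tracked fingertip positions can be turned into gesture input, here is a minimal pinch-detection heuristic in Python. It is a hypothetical sketch, not the Leap Motion SDK: the function names and the 25 mm threshold are assumptions for illustration.

```python
import math

def fingertip_distance(a, b):
    """Euclidean distance between two 3D fingertip positions (mm)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def is_pinching(thumb_tip, index_tip, threshold_mm=25.0):
    """Report a pinch when the thumb and index fingertips come within
    a small distance of each other (threshold is an assumption)."""
    return fingertip_distance(thumb_tip, index_tip) < threshold_mm

open_hand = is_pinching((0, 0, 0), (60, 10, 0))   # fingertips far apart
pinch = is_pinching((0, 0, 0), (12, 8, 5))        # fingertips close together
```

Real hand-tracking runtimes expose a full per-frame hand skeleton, but many gestures ultimately reduce to simple geometric tests like this one.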
The Meta Quest Pro uses eye tracking and Natural Facial Expressions to enhance users' avatars with lifelike real-time movement and expression. Using ten high-resolution depth sensors, five external and five internal, the Meta Quest Pro analyzes infrared images of your eyes, allowing you to engage with virtual content based on where you're looking.
Using its extended eye tracking API, Microsoft's HoloLens 2 provides information about where the user is looking in real time. By tracking individual eye gaze vectors, the device allows users to work with far-away UI elements such as information cards and tooltips, with the ability to set eye tracking frame rates to 30, 60, or 90 frames per second.
A device or computer equipped with an eye tracker "knows" what a user is looking at. This makes it possible for users to interact with computers, for example, using only their eyes.
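Conceptually, gaze-based interaction reduces to intersecting the reported gaze ray with UI geometry. The Python sketch below shows the core ray-plane intersection for a flat UI panel; it is an illustrative simplification (the function name and coordinate conventions are assumptions), not any vendor's actual API.

```python
def gaze_hit_point(origin, direction, plane_z):
    """Intersect a gaze ray with a flat UI panel at z = plane_z.
    Returns the (x, y) hit point on the panel, or None if the ray
    is parallel to the panel or points away from it."""
    ox, oy, oz = origin
    dx, dy, dz = direction
    if dz == 0:
        return None
    t = (plane_z - oz) / dz
    if t < 0:                      # panel is behind the viewer
        return None
    return (ox + t * dx, oy + t * dy)

# Eye at the origin, gazing slightly up and to the right at a panel 2 m away.
hit = gaze_hit_point((0.0, 0.0, 0.0), (0.1, 0.05, 1.0), plane_z=2.0)
```

Once the hit point is known, the application simply checks which UI element, if any, contains it, and highlights or activates that element.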
Artificial Intelligence
The automation offered by GPT models doesn't just improve interactivity; it streamlines complex processes by allowing AI-driven characters to guide users, answer questions, and provide feedback on demand. By incorporating GPT technology, simulations become more scalable and responsive, reducing the need for manual oversight while ensuring consistent, high-quality user experiences. This evolution in automated, conversational AI is a game-changer for delivering immersive, hands-on training that's both efficient and impactful, helping to meet the rising demand for intelligent, adaptable training solutions.
In training environments, AI-driven automation enriches user experiences, creating adaptive, engaging scenarios that support skill-building without constant manual oversight.
This forward-compatible approach allows ForgeFX to deliver cutting-edge, dynamic voice experiences that continually improve in quality and realism. As AI voice models advance, simulations become even more lifelike and responsive, ensuring users receive the highest-quality guidance tailored to evolving industry standards and communication styles. This adaptability ensures that ForgeFX remains at the forefront of immersive, voice-enabled training, maximizing both engagement and long-term value for clients.
With the forward-compatible nature of hosted AI models, computer vision technology continually improves in accuracy, responsiveness, and adaptability as updates are released. This ensures training environments remain cutting-edge, seamlessly aligning with the latest advancements in visual recognition. AI computer vision brings a new level of realism and interactivity to training, empowering users to build practical skills in settings that closely simulate real-world scenarios.
AI SMEs deliver adaptive, on-demand support, guiding users through troubleshooting, operation, and maintenance tasks with precision. Whether offering step-by-step guidance, responding to safety alerts, or assisting with critical procedures, these AI-driven experts create an immersive, hands-on experience that builds confidence and strengthens skills across diverse scenarios.
Machine Learning
Critical in modern enterprise, machine learning is used to predict business operational and customer behavioral patterns, such as what products consumers are most likely to buy and what media they are most likely to watch. Other practical ML applications include self-driving cars, fraud detection, email filtering, speech recognition, malware threat detection, and business process automation.
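As a toy illustration of the purchase-prediction idea, the Python sketch below recommends an item using nearest-neighbor overlap between purchase histories. Real systems use far richer models and features; the function and data here are invented for illustration.

```python
def predict_next_purchase(history, past_customers):
    """Toy nearest-neighbor recommender: find the past customer whose
    purchases overlap most with ours, then suggest an item they bought
    that we have not."""
    def overlap(a, b):
        return len(set(a) & set(b))
    best = max(past_customers, key=lambda purchases: overlap(history, purchases))
    suggestions = [item for item in best if item not in history]
    return suggestions[0] if suggestions else None

past_customers = [
    ["drill", "saw", "sander"],
    ["phone", "case", "charger"],
]
suggestion = predict_next_purchase(["drill", "saw"], past_customers)
```

The principle, learning from the behavior of similar users rather than hand-written rules, is the same one that production recommender systems apply at a much larger scale.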
Practical applications include facial and image recognition, self-driving cars and other autonomous vehicles, robotics, medical image analysis, recommender systems, brain–computer interfaces, natural language processing, and text analytics.
In computer vision, attention models study scenes with an intense focus on a specific point, followed by an expanded focus on the entire scene. Similarly, in neural machine translation (NMT), attention lets the model weight the relevance of each source word when producing each word of the translation, capturing the sense of the sentence in context rather than translating word for word.
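The core mechanism behind these attention models, scaled dot-product attention, can be sketched in a few lines of plain Python. This is a single-query illustration of the general technique, not any specific framework's implementation.

```python
import math

def attention(query, keys, values):
    """Scaled dot-product attention for a single query vector.
    Each weight reflects how relevant a key is to the query; the
    output is the weighted average of the value vectors."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d) for key in keys]
    peak = max(scores)
    exps = [math.exp(s - peak) for s in scores]     # numerically stable softmax
    total = sum(exps)
    weights = [e / total for e in exps]
    output = [sum(w * v[i] for w, v in zip(weights, values))
              for i in range(len(values[0]))]
    return weights, output

# The query aligns with the first key, so the first value dominates the output.
weights, context = attention([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]],
                             [[10.0, 0.0], [0.0, 10.0]])
```

In an NMT model, the query comes from the word being generated and the keys and values come from the source sentence, so the weights express which source words matter most at that moment.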
Devices
When producing training simulators, you want to reach the widest possible audience in order to train as many people as possible. One of the best ways to ensure this wide distribution is to deploy your application on as many devices and platforms as you can. ForgeFX produces simulators for just about every device, from desktops and laptops, to mobile devices, to wearable AR and VR devices. We author content once and deploy applications on the widest possible array of devices.
Developing simulators that run on desktop computers is our default deployment and end-user target platform. We do all of our application development on these devices, so by default all of our simulators run on desktop machines, specifically Windows, Mac, and Linux machines.
Desktop computers offer the most horsepower and accessibility for integrating real-world original equipment manufacturer (OEM) controls, as well as additional peripherals like VR/AR devices. Gone are the days of training simulators requiring racks of computers to function. Today's off-the-shelf desktop computers intended for video game use are more than capable of running our graphics-rich simulation-based trainers, delivering highly immersive and engaging virtual experiences.
Second only to desktop computer-based simulators, laptop-based simulations are the most popular way of deploying training simulation software to users. The more portable a training simulation's hardware platform is, the more likely it is to reach a greater number of trainees. Today's laptop computers are more powerful than ever, with high-performance graphics cards and processors built for video gamers but also well suited to interactive training simulators. Laptop-based training simulators can be connected to real-world equipment controls and additional peripherals, just like desktop computer-based simulators, but can also be taken into the field or a classroom setting to provide just-in-time training.
When it comes to deploying portable training simulators, tablet-based simulators lead the pack. Tablet computers provide inexpensive, lightweight, and highly customizable simulation solutions. Since there are no physical buttons to integrate with the application, all functionality can, and must, be simulated virtually by software. This allows tablet-based simulators to easily simulate any piece of equipment, and to switch seamlessly from one to another, making training easily accessible. Tablet-based training simulators can provide beginner- and intermediate-level scenarios to help improve operational skills, as well as controls familiarization, where students may practice skills as directed or of their own choosing.
Perhaps no technology device has changed our society more than the mobile phone. Today's mobile phones are nothing short of pocket-sized, internet-connected supercomputers. While the screens may be small and the processing power limited, with more than a billion of these devices in the world capable of downloading and running highly effective simulation-based training content, the smartphone is an excellent platform on which to deploy training simulators. Manufacturers like Apple and Google put tremendous effort into getting their phones into the hands of millions of people every year. By deploying your training simulation software on these popular devices, you guarantee that your content will be capable of reaching the widest possible audience. Applications deployed on mobile devices are perfect for micro-learning, where users can download a specific procedure or scenario in real time as they need it.
Virtual Reality (VR) and Augmented Reality (AR) devices are the latest simulation technology advances to grace the training simulation world. The past few years have seen the release of consumer off-the-shelf (COTS) VR and AR devices that allow users to become fully immersed in content in a way traditional screen-based content never could. VR-based training simulators place the user in the middle of the virtual world, where they are free to look around, move, and interact with the virtual world much as they interact with the real world. AR devices enable users to augment their view of the real world with interactive digital content, and to share this view with others in the same room or on the other side of the world.
Developing Your Project
Regardless of what technology you’re looking to support, we can develop a custom training simulation application for you, to run on any device or platform. We encourage you to contact ForgeFX to discuss how we can create the perfect simulation-based training solution for your organization. We’ll talk with you about your specific requirements, and based on these we’ll work with you to arrive at a detailed proposal to develop and deliver your project. If you need a ballpark price, we can often provide that during initial phone conversations. If you require a more precise estimate and have a detailed project specification in hand, we can typically turn around a firm bid rapidly.