Supported Simulation Technology

ForgeFX Simulations is a team of experts with the tools required to produce high-quality, enterprise-grade training simulations. We use off-the-shelf middleware development tools and technology to produce our simulation-based training products. By leveraging industry-standard software development tools that are popular within the game and simulation development communities, we are able to offer our clients strategic advantages in cost of production and ease of simulation deployment.

Game Engines

ForgeFX uses off-the-shelf middleware video game engine development tools to produce our products. We do not use proprietary, self-rolled software for which additional developers can be difficult to find. ForgeFX uses industry-standard development tools that are popular within the game and simulation development communities, allowing us to produce simulations for virtually any device or operating system. In addition to all of the advantages this presents to ForgeFX (a large developer pool to draw from, a large support community, pre-built components that save time, etc.), there are significant advantages for clients: clients own all of the source code and assets that we create, and there are no license fees or per-seat costs associated with our simulators.
Simulation Technology, Unity Game Engine Development
Unity is a cross-platform game engine developed by Unity Technologies that is primarily used to develop video games and simulations for computers, consoles, and mobile devices.
ForgeFX relies heavily on the Unity game engine to produce simulation-based training products for our clients. We have been featured online in Unity's Made with Unity Showcase, have exhibited at the Unity AR/VR Vision Summit, and have had products we've developed for our clients featured in the Unity3D Showcase.
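
For a concrete sense of what Unity development looks like, the minimal sketch below shows the kind of C# component that drives interactive behavior in a simulator. The class name, field, and key binding are illustrative assumptions, not code from an actual ForgeFX product.

using UnityEngine;

// Minimal sketch of a Unity component: rotates an equipment part
// while the user holds an interact key. Names are illustrative only.
public class ValveHandle : MonoBehaviour
{
    [SerializeField] private float degreesPerSecond = 45f;

    void Update()
    {
        if (Input.GetKey(KeyCode.E))
        {
            transform.Rotate(Vector3.up, degreesPerSecond * Time.deltaTime);
        }
    }
}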


Platforms

By leveraging game engines to develop and deploy our training simulators, ForgeFX is able to support dozens of hardware and operating system platforms. Embracing the ‘author once, deploy anywhere’ strategy, ForgeFX is able to guarantee that our clients’ software will run on any number of current and popular computer platforms, and to ensure that it will continue to run on future platforms as well.

Microsoft Windows
Windows is Microsoft’s flagship operating system, and for many years it was the de facto standard for home and business computers. Microsoft Windows is the most popular end-user operating system, and all of our training simulators run on Microsoft Windows by default, as that is the platform we do the majority of our development on.

Technically, Windows is a family of graphical operating systems, each member of which caters to a particular sector of the computing industry, and it is typically associated with IBM PC-compatible architecture.

iOS
iOS is a mobile operating system developed by Apple to run exclusively on its own hardware, like iPhones and iPads.
The iOS user interface and input system is based upon direct manipulation, using multi-touch gestures.

iOS is the world’s second most popular mobile operating system, after Android. ForgeFX has developed a number of iOS-based training simulators, including the JLG Equipment Simulator, developed for JLG Industries.

Android
Android is a mobile operating system developed by Google, designed for touchscreen mobile devices such as smartphones and tablets.
Android’s user interface is mainly based on direct manipulation, using touch gestures that loosely correspond to real-world actions, such as swiping, tapping, and pinching, to manipulate on-screen objects, along with a virtual keyboard for text input. In addition to touchscreen devices, Google has developed Android TV for televisions, Android Auto for cars, and Android Wear for wristwatches, each with a specialized user interface.

WebGL
WebGL (Web Graphics Library) is a JavaScript-based API for rendering 3D graphics within any compatible web browser without the use of plug-ins. WebGL is fully integrated with the browser’s other web standards, allowing for graphics processing unit (GPU) accelerated image processing and physics. WebGL programs consist of control code written in JavaScript and shader code written in OpenGL Shading Language (GLSL), a language similar to the C programming language, which is executed on the computer’s graphics processing unit (GPU).
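
Because we author in Unity, a WebGL version of a simulator is typically produced as a build target rather than as hand-written JavaScript and GLSL. The hedged sketch below shows a hypothetical Unity Editor build script; the scene path and output folder are placeholder assumptions.

using UnityEditor;

// Editor-only sketch (must live in an Editor folder): builds the
// project for the WebGL target. Paths are placeholders.
public static class WebGLBuilder
{
    [MenuItem("Build/WebGL")]
    public static void Build()
    {
        string[] scenes = { "Assets/Scenes/Main.unity" };
        BuildPipeline.BuildPlayer(scenes, "Builds/WebGL", BuildTarget.WebGL, BuildOptions.None);
    }
}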

visionOS
visionOS is the name of Apple’s newest operating system, which powers the company’s Vision Pro augmented reality headset. It is the first operating system built from the ground up for spatial computing. visionOS makes use of the Vision Pro’s wide array of cameras to continuously blend the virtual and real worlds, delivering a stable picture with floating UI elements that the user can interact with. Developers can use visionOS together with familiar tools and technologies to build immersive apps and games for spatial computing.

Apple has partnered with Unity to bring Unity apps to the new Apple Vision Pro. Popular Unity-based games and apps can gain full access to visionOS features such as passthrough, high-resolution rendering, and native gestures. These Unity apps run natively on Apple Vision Pro and can sit side by side, rendered simultaneously, with other visionOS apps.


Virtual Reality

Consumer virtual reality (VR) is a boon for the training simulator industry. Advances in technology have led to the proliferation of affordable VR devices and computers capable of running them. VR represents a huge evolutionary step forward in computer graphics rendering and user input methods. In a nutshell, we are able to do things in VR that we simply cannot do with traditional screen-based simulators. In addition to the feeling of immersion and presence that VR gives the user, it also includes elements like stereoscopic 3D, which provides a virtual sense of depth perception, and positional tracking of the user’s body, which allows the application to know where the user is in the virtual space.
VR-based training simulators are a game-changer and have produced a significant shift in the world of training simulators by allowing users to be fully engaged with the training content in a way never before possible.
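
As an illustration of positional tracking, the sketch below reads the headset’s tracked position each frame using Unity’s XR input API; the component name and logging are illustrative assumptions.

using UnityEngine;
using UnityEngine.XR;

// Sketch: queries the headset's position in tracking space each
// frame, so the simulation knows where the user is.
public class HeadTrackingReader : MonoBehaviour
{
    void Update()
    {
        InputDevice headset = InputDevices.GetDeviceAtXRNode(XRNode.Head);
        if (headset.isValid &&
            headset.TryGetFeatureValue(CommonUsages.devicePosition, out Vector3 headPosition))
        {
            Debug.Log($"Head position in tracking space: {headPosition}");
        }
    }
}
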
Meta Quest Pro Mixed Reality Device
The Meta Quest Pro is a virtual and mixed reality headset designed by Reality Labs with developers and business customers in mind. An open periphery, with an optional “black-out” accessory, lets you see the real world while interacting with virtual 3D objects. Internal headset and Touch controller sensor arrays provide advanced hand and eye tracking for a realistic range of motion and avatar facial expression. Microsoft integration allows users to stream Windows to their headsets via a cloud desktop, use Microsoft productivity applications away from their monitors, and join Teams meetings with video or as an avatar from a Horizon Workrooms environment.
Powered by the Qualcomm Snapdragon XR2+ processor and an Android-based operating system, the Meta Quest Pro is optimized to run at 50% less power and with better thermal dissipation than its predecessor, the Quest 2. Thin pancake optics, high-resolution outward-facing cameras, and a quantum dot LCD display give users a sharp, full-color mixed reality experience in a sleek, ergonomic design.

HTC Vive, Simulation Technology
The HTC Vive is one of the most popular virtual reality headsets on the market today: a head-mounted device that provides virtual reality for the wearer. VR headsets are widely used with computer games, but they are also used in other applications, including simulators and trainers.
Developed by HTC and the Valve Corporation, the headset uses a technology called “room-scale” tracking, which allows the user to move around in 3D space, much as they do in the real world, and use motion-tracked handheld controllers to interact with the environment and the objects within it.

The Vive contains a gyroscope, an accelerometer, and a laser position sensor, which work together to track the position of your head. The HTC Vive and the Oculus Rift are both excellent hardware choices when it comes to simulation technology.


Valve Index
The Valve Index is a virtual reality headset developed by Valve Corporation, an American video game developer, publisher, and digital distribution company. Valve is the developer of the software distribution platform Steam and popular titles like Half-Life and Counter-Strike.

The Index includes a pair of 1440×1600 RGB LCDs, which provide a combined resolution of 2880×1600, a wider field of view than its competitors, and sharper text. Accompanying the headset are the SteamVR Knuckles handheld controllers, each with over 80 different sensors to monitor hand movement.

Meta Quest 2
The Meta Quest 2 is Meta’s latest consumer-focused VR headset. The Quest 2 is a virtual reality headset formerly known as the Oculus Quest 2, developed by Reality Labs at Facebook, Inc.
The device is fully standalone, features two six-degrees-of-freedom (6DOF) controllers, and runs on a Qualcomm Snapdragon XR2 system-on-chip with 6GB of RAM. The Quest 2 is the most popular VR headset in the world, with over 15 million units sold.


Augmented Reality

Augmented reality (AR) consists of a view of a physical, real-world environment whose elements are augmented by computer-generated graphical data. It is related to a more general concept called computer-mediated reality, in which a view of reality is modified by a computer. Whereas virtual reality replaces the user’s view of the real world with a simulated one, augmented reality enhances one’s current perception of reality with computer-generated content. Augmentation techniques are typically performed in real time and in context with environmental elements, such as overlaying supplemental information on a live view of the real world.
Microsoft HoloLens
The Microsoft HoloLens is a self-contained, wearable holographic computer that layers digital content on top of reality, providing augmented reality. A voice-controlled, head-worn PC, the HoloLens allows users to easily interact with, see, and hear holograms that are displayed via high-definition lenses and spatial sound technology, in context in the real world. Mixed-reality-based (or augmented-reality-based) simulations allow users to have shared virtual experiences together in the real world, similar to the way they have real experiences, allowing us to create and simulate any environment or object for users to collectively experience and interact with.

Apple Vision Pro
Apple Vision Pro, the groundbreaking mixed reality headset, offers an unparalleled immersive experience. With ultra-high-resolution displays and a real-time mixed reality view powered by the R1 chip, it revolutionizes gaming, entertainment, productivity, professional applications, and scenario-based training simulations.

Experience realistic scenarios with Apple Vision Pro’s advanced features. Eye tracking and hand tracking let users interact with virtual environments and objects simply by looking and gesturing, enhancing the training experience. Spatial Audio creates an immersive soundscape, providing a realistic environment for training simulations.

Versatile for gaming, entertainment, productivity, professional use, and scenario-based training, Apple Vision Pro merges power, innovation, and convenience, transforming technology engagement and training methodologies.

Additional features: dual micro-OLED displays, Apple M2 chip, 10 cameras for accurate tracking, built-in microphone, speaker system, and up to 3 hours of battery life.

OpenXR Open Source API
Virtual, augmented, and mixed reality are known collectively as extended reality, or XR. OpenXR is an open, royalty-free API (application programming interface) standard developed by the Khronos Group to enable developers to build applications that work across various virtual and augmented reality platforms. The Microsoft HoloLens 2, HTC Vive, and Meta Quest 2 headsets are some of the best-known OpenXR platforms.

Mixed Reality Toolkit
Mixed Reality Toolkit (MRTK) is an open-source project that provides UI controls and other essential building blocks for the accelerated development of mixed reality experiences in Unity. Driven by Microsoft, MRTK works across a wide variety of platforms, including the Microsoft HoloLens, Windows Mixed Reality headsets, and OpenVR headsets.
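
As a small example of what these building blocks look like in practice, the hedged sketch below uses the pointer-event interface from MRTK 2.x to react when a user selects an object with a hand ray, gaze, or motion controller. The class name and behavior are illustrative assumptions.

using Microsoft.MixedReality.Toolkit.Input;
using UnityEngine;

// Sketch: attach to a GameObject with a collider; MRTK calls these
// handlers when a pointer interacts with the object.
public class GraspableLever : MonoBehaviour, IMixedRealityPointerHandler
{
    public void OnPointerDown(MixedRealityPointerEventData eventData)
    {
        Debug.Log("Lever grasped");
    }

    public void OnPointerUp(MixedRealityPointerEventData eventData) { }
    public void OnPointerDragged(MixedRealityPointerEventData eventData) { }
    public void OnPointerClicked(MixedRealityPointerEventData eventData) { }
}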


Eye & Hand Tracking

Eye and hand tracking technologies allow users to interact with computers through hand, finger, and eye motions. Users are able to interact with computer-generated virtual elements just as they do with real-world physical objects. Rather than having to move a cursor on top of something to select it, eye tracking allows users to simply look at an object to select it. Similarly, hand tracking allows digital elements to be interacted with just like physical elements, through manipulation by fingers and hands.
Meta Quest Pro Hand Tracking

Meta Quest Pro hand tracking enables you to use your hands in place of Touch controllers. Inside-out cameras track the headset’s motion relative to the environment and detect the position and orientation of your hands and fingers. Computer vision algorithms then track and analyze the movement in real time, bringing your hands into the VR space to navigate within select applications and websites.

Meta Quest hand tracking may also be done independently of the headset using the three built-in sensors on the Touch controllers. A 360-degree range of motion, TruTouch haptic feedback, and precision pinch make movement intuitive and precise while feeling more realistic when interacting with 3D objects.
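
To illustrate how tracked hands become user input, here is a hedged sketch built on the OVRHand component from Meta’s Oculus Integration package for Unity, detecting an index-finger pinch as a “select” gesture. The component wiring and class name are assumptions for illustration.

using UnityEngine;

// Sketch: reads pinch state from an OVRHand assigned in the Inspector.
public class PinchSelector : MonoBehaviour
{
    [SerializeField] private OVRHand hand;

    void Update()
    {
        if (hand != null && hand.IsTracked &&
            hand.GetFingerIsPinching(OVRHand.HandFinger.Index))
        {
            float strength = hand.GetFingerPinchStrength(OVRHand.HandFinger.Index);
            Debug.Log($"Index pinch detected, strength {strength:F2}");
        }
    }
}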

HoloLens Hand Tracking

Microsoft’s HoloLens uses hand tracking to interact and work with holograms in an augmented reality environment. Air Tap, Touch, and Hand Ray gestures allow users to reach for, select, and reposition AR UI elements, close up and far away, with pinpoint accuracy.

Leap Motion

The Leap Motion controller is a small device that uses two IR cameras and three infrared LEDs to observe a hemispherical area in front of the user. The Leap Motion software synthesizes the 3D position data of the user’s hands so that they can be rendered in real time in the virtual world, and the motions and actions of the user’s real hands can be calculated, tracked, and used as user input. The Leap Motion controller literally lets you “reach out and swipe, grab, pinch, or punch your way through the digital world”.

Meta Quest Pro Eye Tracking

The Meta Quest Pro uses eye tracking and Natural Facial Expressions to enhance users’ avatars with lifelike real-time movement and expression. Using ten high-resolution sensors, five external and five internal, the Meta Quest Pro analyzes infrared images of your eyes, allowing you to engage with virtual content based on where you’re looking.

HoloLens Eye Tracking

Using the extended eye tracking API, Microsoft’s HoloLens 2 provides information about where the user is looking in real time. By tracking individual eye gaze vectors, the device allows users to work with distant UI elements such as information cards and tooltips, with the ability to set eye tracking frame rates to 30, 60, or 90 frames per second.
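
As an illustration, the sketch below reads the user’s gaze ray through the MRTK 2.x gaze provider, one common way to access HoloLens 2 eye tracking from Unity; the raycast distance and class name are illustrative assumptions.

using Microsoft.MixedReality.Toolkit;
using Microsoft.MixedReality.Toolkit.Input;
using UnityEngine;

// Sketch: raycasts along the eye gaze vector and reports what the
// user is looking at.
public class GazeReporter : MonoBehaviour
{
    void Update()
    {
        IMixedRealityEyeGazeProvider gaze = CoreServices.InputSystem?.EyeGazeProvider;
        if (gaze != null && gaze.IsEyeTrackingEnabledAndValid &&
            Physics.Raycast(gaze.GazeOrigin, gaze.GazeDirection, out RaycastHit hit, 10f))
        {
            Debug.Log($"User is looking at {hit.collider.name}");
        }
    }
}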

Tobii Eye Tracking
Tobii’s eye tracking technology includes a sensor that enables a device to know exactly where your eyes are focused. It determines your presence, attention, focus, drowsiness, consciousness, or other mental states, and allows software to process and react to these states. Eye tracking is a technology that puts you in control of your device by letting you use your eyes as you naturally would.

A device or computer equipped with an eye tracker “knows” what a user is looking at, making it possible for users to interact with, for example, computers using their eyes.


Artificial Intelligence

Artificial intelligence refers to computer systems that are able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages. In short, artificial intelligence is the simulation of human intelligence processes by computers.
GPT Models
GPT Models mark a transformative leap in AI, bringing automation and advanced language processing to the forefront of training and simulation. Their recent emergence has made it possible to integrate conversational AI that feels natural and responsive, allowing simulations to move beyond static prompts and pre-set responses. With GPT Models, automated dialogue and decision-making support become dynamic, providing real-time, context-aware interactions. This technology is crucial for creating simulations that mirror real-world scenarios and adapt to user actions, enhancing both realism and engagement.
The automation offered by GPT Models doesn’t just improve interactivity; it streamlines complex processes by allowing AI-driven characters to guide users, answer questions, and provide feedback on demand. By incorporating GPT technology, simulations become more scalable and responsive, reducing the need for manual oversight while ensuring consistent, high-quality user experiences. This evolution in automated, conversational AI is a game-changer for delivering immersive, hands-on training that’s both efficient and impactful, helping to meet the rising demand for intelligent, adaptable training solutions.
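
As a rough illustration of how a simulation might call a hosted GPT model at runtime, the sketch below posts a trainee’s question to a chat-completion endpoint from Unity. The endpoint URL, request schema, and response handling are placeholder assumptions; a real integration would parse the provider’s JSON response and secure the API key.

using System.Collections;
using System.Text;
using UnityEngine;
using UnityEngine.Networking;

// Sketch: sends a question to a hosted language-model endpoint and
// logs the raw reply. Everything service-specific is a placeholder.
public class AiInstructorClient : MonoBehaviour
{
    private const string Endpoint = "https://api.example.com/v1/chat/completions"; // placeholder

    public IEnumerator Ask(string question)
    {
        string payload = JsonUtility.ToJson(new ChatRequest { prompt = question });
        using (UnityWebRequest request = new UnityWebRequest(Endpoint, "POST"))
        {
            request.uploadHandler = new UploadHandlerRaw(Encoding.UTF8.GetBytes(payload));
            request.downloadHandler = new DownloadHandlerBuffer();
            request.SetRequestHeader("Content-Type", "application/json");
            yield return request.SendWebRequest();

            if (request.result == UnityWebRequest.Result.Success)
                Debug.Log($"AI instructor: {request.downloadHandler.text}");
            else
                Debug.LogError(request.error);
        }
    }

    [System.Serializable]
    private class ChatRequest { public string prompt; }
}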

AI Automation
AI Automation streamlines business operations by taking on complex, repetitive tasks, freeing teams to focus on impactful, strategic goals. From optimizing workflows to analyzing data in real time, AI enables greater accuracy, scalability, and efficiency across all areas of an organization. By integrating intelligent automation, businesses can respond swiftly to changing demands and achieve consistent, high-quality outcomes.
In training environments, AI-driven automation enriches user experiences, creating adaptive, engaging scenarios that support skill-building without constant manual oversight.

AI Voice Technologies
AI Voice Technologies offer a cost-effective, scalable alternative to traditional voice-over by integrating human-like, adaptive speech directly into training environments. With AI-driven voice models, ForgeFX simulations provide realistic guidance and instruction without the need for continuous voice-over recording, reducing costs while enhancing user engagement. This hosted AI voice technology evolves automatically as model improvements are released, meaning our simulations benefit from enhanced accuracy, natural tone, and contextual adaptability over time—without requiring software updates.
This forward-compatible approach allows ForgeFX to deliver cutting-edge, dynamic voice experiences that continually improve in quality and realism. As AI voice models advance, simulations become even more lifelike and responsive, ensuring users receive the highest-quality guidance tailored to evolving industry standards and communication styles. This adaptability ensures that ForgeFX remains at the forefront of immersive, voice-enabled training, maximizing both engagement and long-term value for clients.
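
The sketch below illustrates one plausible runtime pattern for this approach: fetching synthesized speech from a hosted text-to-speech service and playing it through a Unity AudioSource. The service URL and audio format are placeholder assumptions.

using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

// Sketch: downloads a synthesized clip and plays it back.
[RequireComponent(typeof(AudioSource))]
public class AiVoicePlayer : MonoBehaviour
{
    public IEnumerator Speak(string ttsClipUrl) // placeholder TTS URL
    {
        using (UnityWebRequest request =
            UnityWebRequestMultimedia.GetAudioClip(ttsClipUrl, AudioType.WAV))
        {
            yield return request.SendWebRequest();
            if (request.result == UnityWebRequest.Result.Success)
            {
                AudioClip clip = DownloadHandlerAudioClip.GetContent(request);
                GetComponent<AudioSource>().PlayOneShot(clip);
            }
        }
    }
}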

AI Computer Vision
AI Computer Vision enables advanced visual recognition and analysis within training environments, allowing simulations to dynamically interact with users and objects in real time. This technology enhances training by recognizing equipment, tracking movements, and assessing user actions, creating an immersive experience that adapts to each individual’s interactions. AI computer vision supports tasks such as assembly guidance, safety checks, and equipment diagnostics, providing instant feedback and significantly reducing manual intervention.
With the forward-compatible nature of hosted AI models, computer vision technology continually improves in accuracy, responsiveness, and adaptability as updates are released. This ensures training environments remain cutting-edge, seamlessly aligning with the latest advancements in visual recognition. AI computer vision brings a new level of realism and interactivity to training, empowering users to build practical skills in settings that closely simulate real-world scenarios.
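
As one hedged example of running a vision model inside a Unity-based simulator, the sketch below loads an ONNX classifier with Unity’s Barracuda inference engine and classifies a camera frame. The model asset, input channels, and class name are illustrative assumptions.

using Unity.Barracuda;
using UnityEngine;

// Sketch: runs a trained image classifier on a texture and returns
// the raw class scores.
public class EquipmentRecognizer : MonoBehaviour
{
    [SerializeField] private NNModel modelAsset; // ONNX model imported into Unity
    private IWorker worker;

    void Start()
    {
        worker = WorkerFactory.CreateWorker(WorkerFactory.Type.Auto,
                                            ModelLoader.Load(modelAsset));
    }

    public float[] Classify(Texture2D frame)
    {
        using (Tensor input = new Tensor(frame, channels: 3))
        {
            worker.Execute(input);
            Tensor output = worker.PeekOutput(); // owned by the worker; do not dispose
            return output.ToReadOnlyArray();
        }
    }

    void OnDestroy() => worker?.Dispose();
}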

AI Subject Matter Experts
Artificial Intelligence Subject Matter Experts bring specialized expertise to training environments with features like human-like AI voices, 3D avatars, and access to an extensive knowledge base, including equipment manuals, documentation, and interactive scenario data. These virtual experts can be trained with resources such as historical maintenance logs and industry-specific terminology, allowing them to provide realistic, context-aware guidance. With voice command functionality, users can interact naturally with AI SMEs, who respond seamlessly to technical jargon and task-specific instructions, simulating real-world interactions.
AI SMEs deliver adaptive, on-demand support, guiding users through troubleshooting, operation, and maintenance tasks with precision. Whether offering step-by-step guidance, responding to safety alerts, or assisting with critical procedures, these AI-driven experts create an immersive, hands-on experience that builds confidence and strengthens skills across diverse scenarios.


Machine Learning

Machine learning (ML) is a type of AI that imitates the way humans learn in order to improve performance on a set of tasks without being explicitly programmed to do so. Using algorithms and data analysis, machine learning automates statistical model building to predict outcomes and make decisions.

Critical in modern enterprise, machine learning is used to predict business operational and customer behavioral patterns, such as what products consumers are most likely to buy and what media they are most likely to watch. Other practical ML applications include self-driving cars, fraud detection, email filtering, speech recognition, malware threat detection, and business process automation.

Reinforcement learning (RL) is a subset of machine learning that uses trial and error interactions to train an AI how to act in a given environment for maximum rewards. Deep reinforcement learning (DRL) adds layers of artificial neural networks to this framework enabling AI agents to map states and actions to their associated values and rewards with human-like intelligence. In this way, DRL allows for superhuman performance in multi-player games and enables AI to quickly achieve its full potential in a simulation environment. Real-world DRL applications include robotic controls, autonomous driving scenarios, and patient healthcare optimization.
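
Since ForgeFX builds in Unity, one natural way to experiment with reinforcement learning inside a simulation is Unity’s ML-Agents toolkit. The sketch below outlines a hypothetical agent that learns to drive toward a target; the observations, actions, and reward shaping are illustrative assumptions, not a production training setup.

using Unity.MLAgents;
using Unity.MLAgents.Actuators;
using Unity.MLAgents.Sensors;
using UnityEngine;

// Sketch: an RL agent rewarded for closing the distance to a target.
public class ForkliftAgent : Agent
{
    [SerializeField] private Transform target;

    public override void OnEpisodeBegin()
    {
        // Reset the vehicle to a starting pose for each training episode.
        transform.localPosition = Vector3.zero;
    }

    public override void CollectObservations(VectorSensor sensor)
    {
        sensor.AddObservation(transform.localPosition);
        sensor.AddObservation(target.localPosition);
    }

    public override void OnActionReceived(ActionBuffers actions)
    {
        float steer = actions.ContinuousActions[0];
        float throttle = actions.ContinuousActions[1];
        transform.Translate(new Vector3(steer, 0f, throttle) * Time.deltaTime);

        // Small penalty proportional to distance encourages progress;
        // a bonus and episode reset on arrival.
        float distance = Vector3.Distance(transform.localPosition, target.localPosition);
        AddReward(-0.001f * distance);
        if (distance < 1f)
        {
            AddReward(1f);
            EndEpisode();
        }
    }
}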

Imitation learning (IL) uses algorithms to mimic human behavior in order to achieve a given task within a variable environment. Through the observation of expert demonstrations, or trajectories, a learning agent maps the optimal set of actions to achieve the desired outcome and then repeats those actions. The variable environment of the IL learning space enables the agent to make behavioral assumptions and autonomous decisions in order to account for the perceived variables. IL’s limited volume of input data requires less computing power, offering real-time perception and reaction, making it ideal for AI technologies such as autonomous vehicle navigation, humanoid robotics, human-computer interaction, and gaming/simulation development.
Curriculum learning (CL) trains a machine learning model to incrementally progress from easier to more complex tasks, imitating the human education model. Whereas typical machine learning algorithms use randomized, independent training examples, curriculum learning is sequential, gradually increasing the complexity of its data samples to improve performance. CL is ideal for AI-driven tasks such as learning grammatical structures and natural language processing.
A convolutional neural network (CNN) is a class of artificial neural networks used in deep learning to derive meaningful data from digital images and video. CNN architecture uses a combination of convolutional and pooling layers to identify people, faces, and objects. The convolutional layers of the network extract specific features from an image. Pooling layers then flatten and reduce the data size while dropping unwanted values, allowing the model to train faster. The network is then trained using a cross-entropy loss function to measure prediction accuracy and optimize performance.

Practical applications include facial and image recognition, self-driving cars and other autonomous vehicles, robotics, medical image analysis, recommender systems, brain–computer interfaces, natural language processing, and text analytics.

Attention modeling is a deep learning technique that uses neural networks to solve complex problems. Similar to how a human would approach problem-solving, attention models use neural networks to break tasks down into smaller areas of attention and then process them sequentially.

In computer vision, attention models study scenes with an intense focus on a specific point, followed by an expanded focus on the entire scene. Similarly, in neural machine translation (NMT), the meaning of a sentence is derived by weighting the statistical properties of individual words, giving a general sense rather than a word-for-word translation.


Devices

When producing training simulators, you want to reach the widest possible audience in order to train as many people as possible. One of the best ways to ensure this wide distribution is to deploy your application on as many devices and platforms as you can. ForgeFX produces simulators for just about every device, from desktops and laptops, to mobile devices, to wearable AR and VR devices. We author content once and deploy applications on the widest possible array of devices.

Desktop Computer-Based Simulator
Developing simulators that run on desktop computers is our default deployment and end-user target platform. We do all of our application development on these devices, so by default all of our simulators run on desktop machines, specifically Windows, Mac, and Linux machines.

Desktop computers offer the most horsepower and accessibility for integrating real-world original equipment manufacturer (OEM) controls, as well as additional peripherals like VR/AR devices. Gone are the days of training simulators requiring racks of computers to function. Today’s off-the-shelf desktop computers intended for video game use are more than capable of running our high-quality-graphics simulation-based trainers, delivering highly immersive and engaging virtual experiences.

Laptop-Based Training Simulators
Second only to desktop computer-based simulators, laptop-based simulations are the most popular way of deploying training simulation software to users. The more portable a training simulation’s hardware platform is, the more likely it is to reach a greater number of trainees. Today’s laptop computers are more powerful than ever, with high-performance graphics cards and processors built for video gamers but also well-suited to interactive training simulators.
Laptop-based training simulators can be connected to real-world equipment controls and additional peripherals, just like desktop computer-based simulators, but they can also be taken into the field or the classroom to provide just-in-time training.
Tablet-Based Training Simulators
When it comes to deploying portable training simulators, tablet-based simulators lead the pack. Tablet computers provide inexpensive, lightweight, and highly customizable simulation solutions. Since there are no physical buttons to integrate with the application, all functionality can, and must, be simulated virtually in software. This allows tablet-based simulators to easily simulate any piece of equipment, and to switch seamlessly from one to another, making training easily accessible.
Tablet-based training simulators can provide beginner- and intermediate-level scenarios to help improve operational skills, as well as controls familiarization, where students may practice skills as directed or of their own choosing.
Mobile Phone Based Training Simulators
Perhaps no technology has changed our society more than the mobile phone. Today’s mobile phones are nothing short of pocket-sized, internet-connected supercomputers. While the screens may be small and the processing power limited, with more than a billion of these devices in the world capable of downloading and running highly effective simulation-based training content, the smartphone is an excellent platform on which to deploy training simulators.
Manufacturers like Apple and Google put tremendous effort into getting their phones into the hands of millions of people every year. By deploying your training simulation software on these popular devices, you guarantee that your content is capable of reaching the widest possible audience. Applications deployed on mobile devices are perfect for micro-learning, where users can download a specific procedure or scenario in real time as they need it.
Simulation Technology: VR/AR Simulation-Based Training Simulators
Virtual reality (VR) and augmented reality (AR) devices are the latest and greatest simulation technology advances to grace the training simulation world. The past few years have seen the release of consumer-off-the-shelf (COTS) VR and AR devices that allow users to become fully immersed in content in a way traditional screen-based content never could. VR-based training simulators place the user in the middle of the virtual world, where they are free to look around, move around, and interact with the virtual world much as they interact with the real world. AR devices enable users to augment their view of the real world with interactive digital content, and to share this view with others who are in the same room or on the other side of the world.

Developing Your Project

Regardless of what technology you’re looking to support, we can develop a custom training simulation application for you, to run on any device or platform. We encourage you to contact ForgeFX to discuss how we can create the perfect simulation-based training solution for your organization. We’ll talk with you about your specific requirements, and based on these we’ll work with you to arrive at a detailed proposal to develop and deliver your project. If you need a ballpark price, we can often provide that during initial phone conversations. If you require a more precise estimate and have a detailed project specification in hand, we can typically turn around a firm bid rapidly.

Contact Us Now