Wednesday, April 23, 2025

Sample of Transitioning to a Low Code/No Code Development Environment for Less Technical Users Using Unreal Engine

This is a high-level view of a process I designed to let intermediate researchers compose various scenarios without needing the technical knowledge to code complex functionality.  It also supported creating relatable applications for use cases that appealed to the user or audience, helping deliver better user experiences.

This same concept can be applied to many applications: by removing complex code, less technical developers can easily create content for their users, removing development barriers and increasing usage.

This is an approach that I was able to take, and I understand that everyone's approach may vary.  The primary goal was to make it possible to develop useful content for a user base without having to be a super coder.  It also helps to show how a no code approach can even replace the need to be knowledgeable about Blueprints.
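As a minimal sketch of how this kind of abstraction can work in Unreal Engine (all names here are hypothetical, not the actual project code), complex C++ functionality can be wrapped in a Blueprint function library so a less technical user composes a scenario by dragging one node instead of writing code:

```cpp
// ScenarioActions.h -- hypothetical sketch of hiding complex logic
// behind one designer-facing Blueprint node.
#pragma once

#include "CoreMinimal.h"
#include "Engine/Engine.h"
#include "Engine/World.h"
#include "Kismet/BlueprintFunctionLibrary.h"
#include "ScenarioActions.generated.h"

UCLASS()
class UScenarioActions : public UBlueprintFunctionLibrary
{
    GENERATED_BODY()

public:
    // One drag-and-drop node replaces the spawning, placement, and
    // configuration code a scenario author would otherwise write.
    UFUNCTION(BlueprintCallable, Category = "Scenario",
              meta = (WorldContext = "WorldContextObject"))
    static AActor* SpawnScenarioEntity(UObject* WorldContextObject,
                                       TSubclassOf<AActor> EntityClass,
                                       FVector Location)
    {
        UWorld* World = GEngine->GetWorldFromContextObject(
            WorldContextObject, EGetWorldErrorMode::LogAndReturnNull);
        if (!World || !EntityClass)
        {
            return nullptr;
        }
        return World->SpawnActor<AActor>(EntityClass, Location,
                                         FRotator::ZeroRotator);
    }
};
```

A no code layer can then go one step further, exposing nodes like this through menus or data tables so that even Blueprint wiring becomes unnecessary.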

Please view the images of my slide deck as well as a summary video on this page.





Thank you for viewing, and please feel free to leave a comment.  If you have any questions, please reach out.

View the footer at the bottom of the page for additional links to other pages.


Tuesday, April 22, 2025

The Development of a Virtual Prototyping Review Process Using a Collaborative Immersive Approach to Enhance Processes, Awareness, and Delivery - Presented at NAFEMS 2024

 NAFEMS AMERICAS CONFERENCE

LOUISVILLE, KY

JULY 09-11, 2024

The Development of a Virtual Prototyping Review Process Using a Collaborative Immersive Approach to Enhance Processes, Awareness, and Delivery

Granville, Alanzo D., PhD

Science Applications International Corporation (SAIC)

ABSTRACT

    Immersive technology has stimulated and propelled innovation throughout multiple industries as the technology continues to improve and open new doors to discovery. This has led to growing interest in immersive development, including augmented, virtual, and mixed realities. However, if these technologies are not effectively used or understood, they may generate a lackluster experience within projects. Such challenges can promote doubt about the technology's capabilities and could deter future use. This leads to the primary question: how do we properly use an immersive technology approach to promote progression and generate positive benefits and outcomes in our daily work?

    Immersive technology has shown benefits, but its use and results can vary significantly among applications. In the approach presented in this paper, core concepts are introduced during development to help define a repeatable approach to immersive technology integration that promotes beneficial outcomes. Goal identification, approach evaluation, limitations, the environment-user relationship, content presentation, user interactions, product delivery, and feedback capture are all critical components that determine the successful integration and improvement of immersive experiences. This will serve as a guide to developing a virtual prototype review process using immersive technology and digital engineering. The process aims to facilitate an initial connection between work efforts and the enhancement capabilities of immersive technology while maintaining a focus on how to integrate the core concepts into a project. It will demonstrate a proven process, utilized by our One App application in past and ongoing projects, that has helped outline how to establish a meaningful use case for immersive technology. The One App application bolsters the capability to host multiple virtual experiences in a centralized location while providing a hardware/software-agnostic solution that supports multi-user interactions. It has allowed us to develop multiple experiences relating to the construction of facilities, vehicles, hardware, and other processes, supporting beneficial outcomes that can boost productivity and performance. Project evaluation criteria included purpose and goal identification, the ability to collaborate, and awareness of users' capabilities and resources, combined with a virtual prototype review process to support a quick iterative approach and decision making.

    The purpose of this report is to demonstrate the utilization of the One App application through various examples and guidance supporting the integration of an immersive virtual prototype review process. Through these applications and projects, we will show how the One App application helps in understanding the capabilities of immersive technology as an enhancement tool. This process will help identify the requirements needed to solidify innovation and future use cases, supporting exploration within a virtual medium for concepts including digital twins, environment design and creation, manufacturing, and more, and supporting critical decision-making efforts with a custom fast feedback loop for countless projects.

Keywords: Immersive, Immersive Technology, Virtual Reality, Augmented Reality, Mixed Reality, Process Review, Enhancement Tool, Visualization, Enhancing Performance, Collaborative, Virtual

Citation: Granville, A. D., “The Development of a Virtual Prototyping Review Process Using a Collaborative Immersive Approach to Enhance Processes, Awareness, and Delivery,” in Proceedings of the NAFEMS Americas Conference, Louisville, KY, July 09-11, 2024.


Avoiding the Pitfalls and Promoting Success of Using Visualization and Immersive Technology

Below is an image that focuses on how to correctly integrate the visualization capabilities of immersive technology into your projects or daily work.  The talk itself covers key things that need to be understood to make sure you are developing your immersive user experience for success and not just as a "cool to have".  Emphasis is also placed on building success with immersive technology to drive adoption through valued user experiences.  The public presentation and abstract are part of the NAFEMS 2024 publications.


Thank you for viewing.  If there are any questions or comments feel free to reach out.


Monday, April 21, 2025

Developing Military UI Icons Using an FM 1-02.2 Frame Approach

Developing and Visualizing Military Symbols as UI and UX Icons Using an FM 1-02.2 Military Symbol Frame Approach


There are many military symbols in use today, and viewing and understanding those symbols can seem overwhelming at times.  To help simplify the use of military symbols through frames, referencing FM 1-02.2, Military Symbols, I will demonstrate how I used military symbol frames to construct UI components that can be placed on the user's UI or appear as interactable objects within our virtual world.  Below are some key concepts to be aware of when using military symbols.

  • There are four types of frame symbols that this effort focuses on: Units, Equipment, Installations, and Activities
  • Three things to consider when choosing an appropriate symbol:
    • Standard
    • Physical Domain
    • Status


As you view the results of the development of these UI and UX icons, note that we focused more on standard identity with the addition of other visual components.  A more detailed table can be found in FM 1-02.2, Military Symbols.

Initially you may be wondering why this guy wants to visualize military symbols.  In a later project you will get a better picture of how this approach can be used to support mission planning, understanding the mission route, threat identification, providing a high-level view of the threat neutralization process, military symbol familiarization and usage, and much more. With this content the user experience can be made as detailed as needed, with emphasis on functionality and capabilities added to best support the experience or goal.  Sample images and video will be shown to demonstrate how I started the process of planning an entire mission route with the ability to be aware of friendly, hostile, neutral, and unknown entities within a virtual landscape.  The hope is to use this as a training aid to promote readiness among our warfighters.  Being ready and aware has an enormous impact on whether a soldier returns from the battlefield and makes it home to his or her loved ones.

It is also important to note that these UI icons were used in a research study conducted by the Data and Analysis Center in Huntsville, AL, which already demonstrated one applicable use of the symbology with the addition of other functionality.  The initial use case explored defining a few standard frames to represent friendly, hostile, neutral, and unknown entities.

Standard Frames

Each frame was made using references from military symbology, with art tools used to create the imagery and bring the icons into Unreal Engine.  Keep in mind that these symbols are adapted to present information to the user in an informative context and are used in conjunction with the UI to effectively communicate that concept through the user experience.  The primary takeaway here is that I am able to make any type of representative imagery needed to effectively communicate the frame symbology.  The images below are what I labeled as standard-general frames so the components can be referenced when needed later.

Friendly

Hostile

Neutral

Unknown


Additional UI Component/Icons

The first step in creating the UI components that would later enhance the user's mission/route planning experience was producing the above imagery for the standard frames.  The next steps involved adding more visually appropriate icons to help the user comprehend information quickly when visualized in the virtual environment.  Below are some of the other icons used, in varying combinations with the standard frames.

In these samples, both blue and red forces, representing friendly and hostile forces respectively, were used in conjunction with other icons that may represent locations of interest or other UI components.  The UI components were created with reusability in mind, which allowed added functionality to switch between blue, red, or any other forces by setting initial pre-mission parameters as needed prior to execution.  It also made it easy to migrate these components to any project, opening the door to supporting other use cases or approaches related to the usage of military symbology.
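To give a concrete picture of what "reusable with pre-mission parameters" can look like in Unreal Engine, here is a minimal sketch; the class, property, and widget names are all hypothetical, not the study's actual code:

```cpp
// MilSymbolWidget.h -- hypothetical sketch of a reusable symbol
// widget whose standard frame is driven by an affiliation parameter
// set before the mission runs.
#pragma once

#include "CoreMinimal.h"
#include "Blueprint/UserWidget.h"
#include "Components/Image.h"
#include "Engine/Texture2D.h"
#include "MilSymbolWidget.generated.h"

UENUM(BlueprintType)
enum class ESymbolAffiliation : uint8
{
    Friendly,
    Hostile,
    Neutral,
    Unknown
};

UCLASS()
class UMilSymbolWidget : public UUserWidget
{
    GENERATED_BODY()

public:
    // Pre-mission parameter; changing it swaps the standard frame
    // without touching any other logic.
    UPROPERTY(EditAnywhere, BlueprintReadWrite, Category = "Symbol")
    ESymbolAffiliation Affiliation = ESymbolAffiliation::Unknown;

    // One texture per standard-general frame, assigned in the editor.
    UPROPERTY(EditAnywhere, Category = "Symbol")
    TMap<ESymbolAffiliation, UTexture2D*> FrameTextures;

    UFUNCTION(BlueprintCallable, Category = "Symbol")
    void ApplyAffiliation(ESymbolAffiliation NewAffiliation)
    {
        Affiliation = NewAffiliation;
        if (UTexture2D** Tex = FrameTextures.Find(Affiliation))
        {
            // FrameImage is the UImage bound in the widget Blueprint.
            FrameImage->SetBrushFromTexture(*Tex, /*bMatchSize=*/true);
        }
    }

protected:
    UPROPERTY(meta = (BindWidget))
    UImage* FrameImage = nullptr;
};
```

Because the affiliation is just a parameter, switching a scenario from blue to red forces becomes a data change rather than a code change, which is also what makes the components easy to migrate between projects.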
 


To communicate multiple units/equipment of a specific type, values were generated indicating the total number of units/equipment.  If the icon appeared with no value, it indicated one of that specific type.
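Continuing the hypothetical widget sketch above, the count badge behavior can be as simple as hiding the value whenever only one unit of the type is present (CountText would be a UTextBlock member bound like FrameImage):

```cpp
// Show the total only when more than one unit/equipment of the type
// is present at the icon's location.
void UMilSymbolWidget::SetUnitCount(int32 Count)
{
    // CountText is a UTextBlock bound in the widget Blueprint.
    CountText->SetText(FText::AsNumber(Count));
    // A single unit renders with no value, matching the convention above.
    CountText->SetVisibility(Count > 1 ? ESlateVisibility::HitTestInvisible
                                       : ESlateVisibility::Collapsed);
}
```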


I also added interactive components that visually indicated certain actions.  They are a threat dome (center), hostile communication icon (top left), unknown/neutral communication (top right), hostile threat eliminated (bottom left), and friendly communication icon (bottom right).  These UI components can dynamically change during the mission simulation depending on whether they were categorized as friendly, hostile, neutral, or unknown, or if actions taken during the mission changed a UI component's current state.  Event triggers were also used with these interactive icons to drive relevant animation within the virtual environment depicting blue/red force communication, blue/red force threat neutralization, blue/red force additional units spawned or detected, etc.
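A common Unreal pattern for this kind of event-driven icon state is a multicast delegate that gameplay code broadcasts and icon widgets listen to; the sketch below (hypothetical names again) shows the shape of it:

```cpp
// MissionEntity.h -- hypothetical sketch of the event-trigger pattern:
// gameplay broadcasts a state change, and any bound icon widget
// reacts, e.g. swapping a hostile frame to "threat eliminated".
#pragma once

#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "MissionEntity.generated.h"

DECLARE_DYNAMIC_MULTICAST_DELEGATE_OneParam(FOnEntityStateChanged,
                                            FName, NewState);

UCLASS()
class AMissionEntity : public AActor
{
    GENERATED_BODY()

public:
    // Widgets bind to this in Blueprint or C++ and update themselves.
    UPROPERTY(BlueprintAssignable, Category = "Mission")
    FOnEntityStateChanged OnEntityStateChanged;

    void NeutralizeThreat()
    {
        // Listening icons swap their imagery and can play the
        // blinking "eliminated" animation described above.
        OnEntityStateChanged.Broadcast(TEXT("ThreatEliminated"));
    }
};
```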



Creating the UI Component

Four initial key things to consider when creating any UI design or component are the following:
  • What is the purpose of the UI design/component?
  • Does the component properly represent and convey the intended message to the user?
  • Is this a static or dynamic UI design/component?
  • How is the content going to be displayed on screen? 
These four key things are general, but they are a good start to building out your UI components to support your UI design and user experience (UX).  They will give rise to additional questions and comments that lead to refinement of your components until you reach the desired result.  Gain an understanding of why the component is being created, and make sure that it conveys the proper message so the user benefits from seeing it.  Narrowing down whether it will be static or dynamic also helps: you will have to consider how to bring in uniform imagery that continues to fit your UI design but can be adjusted as needed.  Another important concept is to avoid screen clutter; remember, too much information isn't always good information.  Effectively using your screen space and presenting optimized information that best benefits the user at that moment can make the user experience much better.


In my case, I aimed to optimize and promote efficiency.  I also wanted to design a UI component that would be easy for me to change, presenting a simple hierarchy able to support complex functionality beneath a good quality image.  That led to the dynamic reusable design seen above, which allowed for automated updating of the symbology based on triggered events during the mission simulation.

Design of the UI Supporting the User Experience

The overall goal here was to develop the military symbology to support a virtual environment providing a user experience representative of the following key points.

  • Allow a user to travel a mission path by flying a pre-planned route
  • Provide the ability to gain situational awareness and readiness through military-related symbols
  • Provide a repeatable, modern, and dynamic platform or medium for military symbology familiarization
  • Develop a simulated environment that promotes mission route planning activity and preparation
  • Develop an application that supports both desktop usage and immersive virtual reality (VR) headsets

In this design I integrated additional functionality, including an overhead minimap view, UAS/UAV integrated cameras with customized object targeting, event trigger volumes to drive mission activity, on-screen text, etc. Also note that the initial imagery and video represent an initial development and deployment of the application supporting the CCDC Data and Analysis Center in Huntsville, AL.  Content was adjusted to meet their needs, and the level of fidelity shown here can increase with additional requirements and virtual environment integrations.  I will also demonstrate the same symbology in a different environment so you can see the reuse of the UI components as well as a change in the graphical and visual layout of the environment.
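For the event trigger volumes specifically, the standard Unreal overlap mechanism is enough; a minimal sketch (hypothetical class names) looks like this:

```cpp
// MissionTrigger.h -- hypothetical sketch of a mission event trigger:
// when the aircraft overlaps the volume, an event fires that can
// drive icon changes, communications, or spawns along the route.
#pragma once

#include "CoreMinimal.h"
#include "Components/BoxComponent.h"
#include "GameFramework/Actor.h"
#include "MissionTrigger.generated.h"

UCLASS()
class AMissionTrigger : public AActor
{
    GENERATED_BODY()

public:
    AMissionTrigger()
    {
        Volume = CreateDefaultSubobject<UBoxComponent>(TEXT("Volume"));
        RootComponent = Volume;
        Volume->OnComponentBeginOverlap.AddDynamic(
            this, &AMissionTrigger::OnOverlap);
    }

private:
    UPROPERTY()
    UBoxComponent* Volume;

    UFUNCTION()
    void OnOverlap(UPrimitiveComponent* OverlappedComp, AActor* OtherActor,
                   UPrimitiveComponent* OtherComp, int32 OtherBodyIndex,
                   bool bFromSweep, const FHitResult& SweepResult)
    {
        // Forward to the mission event system (e.g. broadcast the
        // delegate from the earlier sketch) so the UI symbology and
        // animations react as the pilot flies the route.
    }
};
```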

Mission Environment General View



The image below shows the use of real-world terrain elevation data to represent the terrain, along with an unknown threat and the designated flight path and zones.



Additional UI content was added to satisfy the requirements of the initial mission context, as well as to provide an overhead view, UAS/UAV camera views, and distances from objects to promote situational awareness.



This is an example of a friendly blue force mortar team identifying the unknown threat as a launcher, so the effective range associated with the threat dome can be updated.  Notice the darker blue ring around the mortar team's icon indicating that they are communicating the information, compared to the icon representing two transport trucks, which lacks that darker ring, indicating it is not communicating.
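One simple way to implement such a communication ring in Unreal is a scalar parameter on a dynamic material instance for the icon; a sketch, with the member and parameter names invented for illustration:

```cpp
// Toggle the darker communication ring by driving a scalar parameter
// on the icon ring's dynamic material instance.
void UMilSymbolWidget::SetCommunicating(bool bCommunicating)
{
    if (RingMID) // UMaterialInstanceDynamic created at construction time
    {
        RingMID->SetScalarParameterValue(TEXT("RingOpacity"),
                                         bCommunicating ? 1.0f : 0.0f);
    }
}
```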



To summarize this content, I am providing a video clip of the mission simulation running, as well as imagery and/or video of the military symbols displayed in a different virtual environment with a change in graphical and visual fidelity.

Virtual Mission Demonstration

The concept of this virtual mission demonstration was to show the dynamics of configuring a mission planning environment with interactable components, including blinking icons showing communication, targets neutralized, dynamic UI components that avoid blocking the user's view, etc.  The final demonstration showed real-world terrain from New Mexico, with virtual entities placed in specific locations with corresponding icons and event triggers as pilots navigate the terrain.  The "Displaying the Military Symbology UI Components in Different Virtual Environments" section shows how easily the modular functionality can be applied to different projects thanks to its standalone modular design.

Displaying the Military Symbology UI Components in Different Virtual Environments

This shows how easily the UI content can be moved into different projects or virtual environment maps.  In the images below you can see the helicopter's UI symbol changed to various military symbols.  I also integrated the ability to quickly change the mesh, represented by the image with the tank (bottom left).  There is also the ability to change the background of the icon, but for now I left it with a friendly-themed UI component background. Additionally, I brought in a random image that also displayed easily and correctly on the icon (top right).
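The mesh and icon swaps shown in these images boil down to two independent runtime calls; a sketch under the same hypothetical names as the earlier examples:

```cpp
// Swap the entity's mesh and icon imagery independently; keeping the
// two decoupled is what makes the component reusable across projects.
void AMissionEntity::SetAppearance(UStaticMesh* NewMesh,
                                   UTexture2D* NewIconTexture)
{
    if (MeshComponent && NewMesh)
    {
        MeshComponent->SetStaticMesh(NewMesh); // e.g. helicopter -> tank
    }
    if (SymbolWidget && NewIconTexture)
    {
        // Hypothetical helper on the widget sketch above, forwarding to
        // UImage::SetBrushFromTexture, so any texture that fits the
        // brush displays correctly on the icon.
        SymbolWidget->SetIconTexture(NewIconTexture);
    }
}
```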


The images below show a completely different map/level/environment into which the models were easily placed with the current functionality.





If you have any questions or need any assistance, please feel free to reach out.  Thank you, and I hope this gave some insight into how to generate a user experience using UI components that reflect military symbology design.



Monday, August 14, 2017

Developing a Rendering Engine for Desktop, Mixed Reality Simulations, and Training

3D Graphics and Geometry

This small section represents my work in a multi-operating-system (Linux and Windows) environment, gaining an understanding of the GPU, 3D geometry creation, OpenGL, C++, and the usage of other libraries and APIs to eventually generate a mixed reality environment for training.  As shown in the images below, models were built vertex by vertex while GPU memory usage was managed efficiently, since many of these models contain millions of vertices within their geometrical structure.

It served as a basis for me to develop my own rendering engine using OpenGL to support training and simulation.  Understanding GPU operations, Model-Based Systems Engineering (MBSE) architectures, OpenGL, C++, agile methodologies, and object-oriented programming (OOP) approaches allowed me to develop an engine that could import both simple and complex geometrical model data in multiple formats.  I was also able to incorporate subsystems such as sound, texturing/materials, lighting, and events.  That led to the development of my interdisciplinary dissertation research topic, "Construction of 3D Cardiopulmonary Resuscitation Emergency Scenarios for First Responder Pre-Nursing Training on Stereoscopic Display Systems."
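To make the "vertex by vertex while managing GPU memory" point concrete, here is a minimal OpenGL/C++ sketch of the upload path an engine like this typically uses; the structure and function names are illustrative, not the engine's actual code:

```cpp
// Illustrative upload path: pack the model's vertex data once into a
// vertex buffer object so multi-million-vertex scans live in GPU
// memory instead of being resent every frame.
#include <GL/glew.h>

#include <cstddef>
#include <vector>

struct Vertex
{
    float px, py, pz; // position
    float nx, ny, nz; // normal
};

GLuint uploadModel(const std::vector<Vertex>& vertices)
{
    GLuint vao = 0, vbo = 0;
    glGenVertexArrays(1, &vao);
    glBindVertexArray(vao);

    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    // One allocation sized to the whole model; GL_STATIC_DRAW hints
    // that the geometry is uploaded once and drawn many times.
    glBufferData(GL_ARRAY_BUFFER,
                 vertices.size() * sizeof(Vertex),
                 vertices.data(), GL_STATIC_DRAW);

    // Interleaved attributes: positions (location 0), normals (1).
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex),
                          (void*)offsetof(Vertex, px));
    glEnableVertexAttribArray(1);
    glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex),
                          (void*)offsetof(Vertex, nx));

    glBindVertexArray(0);
    return vao;
}
```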


That allowed me to start developing various environment configurations to decide on the first steps toward testing and later deploying my training simulation to enhance and reinforce users' capabilities for specific tasks or objectives.  Please see the images below, which help demonstrate my understanding of developing a rendering engine as well as virtual and mixed reality development, 3D geometrical structures, MBSE, system-of-systems integration, GPUs, memory management, architecture development, software management and deployment, and the OpenGL rendering pipeline.


Note: Although these skills focused on software engineering and visualization, they are easily transferable to use cases beyond those expressed here.  The developed engine can also be expanded to support various use cases, graphical environments, or functionality.

Visualization of Basic Geometrical Shapes Using a CAVE System

The image below shows a four-walled CAVE system that uses two projectors per wall to produce stereoscopic depth; combined with polarized glasses and a tracking system, it visualizes geometrical objects in three dimensions, simulating virtual reality.


3D Models Fully Rendered by the Newly Developed Rendering Engine

As a good testing medium, models from The Stanford 3D Scanning Repository were used in developing a system capable of rendering heavy geometrical data from scanned models.  At the time these served as great test models for ensuring that the engine and code were developed efficiently, due to their level of complexity.  This also helped me guarantee that the engine could visualize both simple and complex geometry when configuring mixed reality simulation environments.  The engine could also be used to visualize environments as a desktop application, which I used to initially build simulation environments to better understand the configurations needed for my future research projects.



Armadillo
Source: Stanford University Computer Graphics Laboratory
Scanner: Cyberware 3030 MS
Number of scans: 114 (but only 60-70 were used in vripped model)
Total size of scans: 3,390,515 points (about 7,500,000 triangles)

Armadillo Geometrical Reconstruction within the Engine


Testing the Visualization of Every Vertex of Complex Models Using the Stanford Bunny

Stanford Bunny
Source: Stanford University Computer Graphics Laboratory
Scanner: Cyberware 3030 MS
Number of scans: 10
Total size of scans: 362,272 points (about 725,000 triangles)
This was one of the first big tests that led to the development of the material and lighting systems, as well as properly storing and managing memory after importing geometry.  As you can see through each image below, I rendered specific amounts of vertices while incorporating code to support storing the entire model and defining a proper structure for importing the geometrical data.
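The incremental renders in these images correspond to drawing only a prefix of the imported vertex data; with the buffer layout from the earlier sketch, that test is a single call:

```cpp
// Draw only the first `count` vertices of the imported model as
// points, making each captured vertex visible while the material and
// lighting systems were still being built. (Names are illustrative.)
void drawPartial(GLuint vao, GLsizei count)
{
    glBindVertexArray(vao);
    glDrawArrays(GL_POINTS, 0, count);
    glBindVertexArray(0);
}
```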

Testing, Capturing, and Visualizing Model Data Vertex by Vertex




Visualizing Geometrical Data Output to Monitor Incoming Data


Generating the Final Geometrical Data Structure Model






Particle System and Emitter Creation Sample

Particle System and Emitter Creation Using C# and C++

This is the creation of a particle system and emitter that allows the user to directly control the particle system and emitter-type properties over time.  For now I am using the video to show a small sample of my work from a bigger project.
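As a rough illustration of the emitter/particle relationship being shown (in C++, with all names and property choices invented for the sketch rather than taken from the project):

```cpp
// Minimal particle system sketch: the emitter spawns particles over
// time, and user-tunable properties (rate, lifetime, velocity spread)
// steer the system while it runs, as in the video.
#include <algorithm>
#include <cstdlib>
#include <vector>

struct Particle
{
    float pos[3] = {0, 0, 0};
    float vel[3] = {0, 0, 0};
    float life = 0.0f; // seconds remaining
};

struct Emitter
{
    // Properties the user can edit live while the system runs.
    float spawnRate = 100.0f; // particles per second
    float lifetime = 2.0f;
    float spread = 1.0f;

    std::vector<Particle> particles;
    float spawnAccumulator = 0.0f;

    void update(float dt)
    {
        // Spawn according to the current (possibly just-edited) rate.
        spawnAccumulator += spawnRate * dt;
        while (spawnAccumulator >= 1.0f)
        {
            spawnAccumulator -= 1.0f;
            Particle p;
            for (int i = 0; i < 3; ++i)
                p.vel[i] = spread * (std::rand() / (float)RAND_MAX - 0.5f);
            p.life = lifetime;
            particles.push_back(p);
        }
        // Integrate motion and retire expired particles.
        for (auto& p : particles)
        {
            p.life -= dt;
            for (int i = 0; i < 3; ++i)
                p.pos[i] += p.vel[i] * dt;
        }
        particles.erase(
            std::remove_if(particles.begin(), particles.end(),
                           [](const Particle& p) { return p.life <= 0.0f; }),
            particles.end());
    }
};
```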



Alanzo Granville - Particle System

Tuesday, June 17, 2014

Using Blender to Make a Basic Human Model

Using Blender to Develop a Basic Human Model



Human Model Notes Blender:

In the creation of any game you always need some type of avatar.  In my case I would like to create a mystical magic RPG-type game using Unity3D and Blender.

Creating the Avatar

It all begins with a cube.  This cube will be used to make a basic human model in a T-stance.

It is always a good idea to delete half of the object before mirroring your model, so the mirror modifier can rebuild that half.

The mirror modifier can be found by adding a modifier and selecting Mirror.  If you are not familiar with Blender, go to the wrench icon on the panel to the right.
 

You can also add a Subdivision Surface modifier for your model via the Add Modifier option, similar to the Mirror modifier, but you will select Subdivision Surface instead.


Now you are ready to start experimenting with your human model or any type of model you would like to build.  In my case I chose to make a basic human model, which I will use later with Unity3D.

This part can take some time, but once you have your Mirror and Subdivision Surface modifiers set up, you can build any type of model on one side and the opposite side will be created automatically.

Note:  Increasing the View value under the Subdivision Surface modifier can make your model look smoother and give it a more organic appearance.

Tip:  A good way to get familiar with modeling in the x, y, z coordinate plane is to draw a front view and a side view of your model on a piece of paper, or in any drawing software, and save the images on your computer.  You can bring those images into Blender (or many other modeling packages) and use them as guides to create great 3D models.  For instance, the front image is loaded into Blender's front view, while the side drawing is loaded into a side view (I'll show an example of this approach to building sound 3D geometrical models in the near future).




Show me what you can create, and don't be afraid to experiment and mess up, as long as you save before doing so.  Good luck.

This was just a basic introduction to Blender.  Next I will introduce you to a basic worm tutorial using Unity3D that I recreated from the Tornado Twins.  After that, let's get to building our game!


Additional Content: 
Unity 3D Engine Explode Demonstration (Used for Augmented Reality Demonstration)