What on earth is this "Commercial Vehicle Simulator"?
Commercial Vehicle Simulator is a project that was intended to be a VR-based semi-truck simulator. It was being developed with an entirely unique hexapod-based motion platform that would have been able to replicate the motions of a semi-truck in six degrees of freedom. It would have featured a high-end VR headset and hand tracking, alongside controls taken from actual OEM trucks.
My role
Initially, I joined the project with the goal of focusing on XR design. There were areas where I got to do that, but today's article is not about XR at all (sorry for the ol' switcheroo). See, I have a background in traditional UI/UX and touchscreen environments, and as it turns out, there was a big demand for those skills on the project, with no one else to meet it.
This demand came from what would be referred to as the "inspections" part of semi-truck operation. Before, during, and after their trips, drivers are required to perform regular inspections to ensure their trucks are road legal and safe to operate. For these inspections, they are required to get out of the truck and walk around it, opening, poking, and prodding various components. Our simulator would facilitate learning, practicing, and evaluating these inspections.
This naturally did not play well with one of the core challenges of VR: letting the user locomote through a big space while being strapped into a motion platform. The solution we decided to go with was a companion application, one that would live on a touchscreen tethered to the motion platform. Between driving lessons, the user could take off the headset, take a break from VR, and navigate around the truck with their finger, somewhat like an iPad game. This ended up solving two problems in one: the aforementioned locomotion logistics, but also VR fatigue. By moving inspections out of VR, we would let users naturally take a break from the fatiguing environment of VR without necessarily taking a break from learning and practicing new skills.
The discovery phase
I was dropped into the responsibility of designing this experience while it was in a state of infancy, with nothing more than an idea and a written document capturing some of the intentions of the other designers on our team. At this point, I effectively had to collect two different kinds of information:
- What on earth even happens during a real world semi-truck inspection?
- What does the average truck driver persona look like?
It turns out the first question was already being handled by one of my fellow designers. They were digging through various curriculums and creating their best approximations of the inspection processes. I was able to get a pretty good idea of what happens during an inspection through their research. At a later point we would also get a walkthrough by some of the instructors for more hands-on information.
The main challenge turned out to be the second part - "What does the average truck driver persona look like?". I had two main resources for answering this question.
First was what we called "Subject Matter Experts": a mixture of instructors and drivers from the various trucking companies involved in the project. While it wasn't the largest sample size, they were a great source for understanding what type of people come through these semi-truck courses and what challenges they face during the learning process.
The second resource was the user data that the designers had collected from the previous VR-based simulators developed by the company. Though these simulators weren't a perfect source of data, they did cater to a similar demographic: individuals learning to operate heavy machinery for the first time.
Assumptions
Rather than create an elaborate persona, I wanted to distill the users into a list of assumptions. Simple, sensible assumptions backed by what we had seen across the Subject Matter Experts and the interactions observed in the previous simulators. These assumptions would be the foundation my design would grow out of. So let's get into the assumptions that I presented to my team:
- No guarantee of user being tech savvy
- At minimum, has limited exposure to everyday tech items like phones, computers and TVs
- Understands basic concepts of interactions with above devices
- Has a smartphone and is fairly comfortable with touch-based tapping, dragging and pinching
- Mostly uses phone for everyday tasks (calls, web browsing, emails and so on)
- No experience with 3D applications or games
- First instinct for problem solving is to stab item of interest with sausage fingers 👉📱
That last assumption about stabbing items of interest with "sausage fingers" would organically turn into a key representation of the target demographic for this inspections application. We even had some of the less tech-savvy members of the team voluntarily self-identify as "fellow sausage fingers".
Understanding the process of inspecting a truck IRL
Since this project aimed to produce a digital approximation of the real-world process of inspecting a semi-truck, it was important that I develop an understanding of what I was even trying to replicate.
Through a demonstration by a partnered trucking company, I got to see that the process is mostly a visual check, but with some points of physical interaction and some reliance on other senses like touch and smell.
Movement
The driver physically moves around the truck. This could be entering or exiting the cabin, walking around the truck, ducking to look under things, or climbing the truck.
Interaction
The driver also needs to physically manipulate certain parts of the truck. They interact with various switches, open and close things like the hood, and in some cases physically touch and tug at points of interest to check for proper fit.
Feedback
As the driver moves around the truck, interacting with various points, they are seeking out valuable visual (sight), auditory (sound), haptic (touch), and even olfactory (smell) feedback to evaluate the functionality of the truck's components.
Mental checklist
Our early research had suggested that drivers had a physical checklist that they walk through and tick off. This is partially true for certain organizations, but it turns out that the process is mostly a mental checklist. In the case of a driver in the learning phase, they will have evaluations where the supervisor does have a checklist that the driver's actions are compared against.
(My) Goals
Equipped with a fresh grasp of the future users of inspections and a set of assumptions in place, I now needed to set some goals to dictate what I would do with these assumptions, how I would do it, and why. Thus, I present the goals that I presented to my team:
Reduce the experience's “learning curve” to a minimum
Why:
To let the user focus on inspecting the truck, rather than focus on figuring out controls/interface.
How:
Design around basic interactions the user is likely to already be familiar with (tapping, dragging and pinching), avoid high dexterity or complex inputs (gestures, virtual analog sticks, simultaneous inputs etc.).
Do not influence the user's decisions
Why:
To avoid inadvertently guiding users on what areas to inspect and to avoid influencing their evaluation scores.
How:
Avoid prompting the user to interact with items (outside of onboarding/tutorial).
Have help resources readily available at all times
Why:
Referring to help materials should not interrupt the complicated sequences required for inspections.
How:
Have resources gathered under one tab that can be accessed at any time regardless of state.
Transitioning from the discovery phase to the design phase is always a bit of a challenge. In the discovery phase you end up with... discoveries, and these discoveries are often captured as thoughts put in writing. How do we transition from thoughts in writing to a tangible design expressed in visual and actionable deliverables?
Information architecture(-ish)
At this stage I often go into an activity that's somewhere between an information architecture diagram and a card sort. I start off by looking at the problems I've identified in the discovery phase and think about what tools I might need to provide the user to solve those problems. In this case we're focused on the movement, interaction, feedback, and checklist outlined in the "Understanding the process of inspecting a truck IRL" section above.
At this point these tools are still very conceptual and high level, but this will get us one step closer to visual structure.
Next, I like to think about what relationships these tools have with each other and how I can create groupings that reflect the commonalities between them.
Drag & Pinch
These will ideally be handled through gestures on the touchscreen.
Contextual pop-ups
These are pop-ups tied to individual truck components. They are contextual both in terms of which object they are tied to and in terms of the active state of the truck. For example, an ignition switch could be in one of three different states.
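To make that concrete, here's a minimal sketch of how a component could resolve its pop-up options from its own active state. This is Unity-flavored C# with hypothetical names (IgnitionState, GetPopupOptions), not the project's actual code:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Hypothetical sketch: an inspectable component resolves its pop-up
// options from its own active state, so the pop-up stays contextual.
public enum IgnitionState { Off, Accessory, On }

public class IgnitionSwitch : MonoBehaviour
{
    public IgnitionState State = IgnitionState.Off;

    // The contextual pop-up asks the component which actions make
    // sense right now, rather than rendering a fixed list.
    public List<string> GetPopupOptions()
    {
        switch (State)
        {
            case IgnitionState.Off:       return new List<string> { "Turn to accessory" };
            case IgnitionState.Accessory: return new List<string> { "Turn off", "Start engine" };
            default:                      return new List<string> { "Turn off" };
        }
    }
}
```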
Resource tab
This is a tab that will carry all of the resources a student driver may have access to, such as the truck manual or text from the textbook they learned from. In addition, it will contain help infographics for remembering the touch-based interaction methods.
Checklist tab
This is a tab that would serve as a parallel to the "mental checklist" aspect of the truck inspection, providing the checklist of items to inspect during the learning process.
Defect tab
This is a tab that the driver would use to mark components as defective or functional. In the real-world parallel, the driver has an instructor present, and the driver verbally expresses what they are doing and what they find. Since this digital parallel does not have another human present to monitor and evaluate the actions of the learner, this tab would be how they express their findings. In early-stage wireframing we pretty quickly realized the redundancy of this being a separate tab and merged its functionality into the checklist tab.
Wireframing
This is a good time to quickly mention the intended design of the camera system that would bring this all together. The camera was imagined as having a fixed range of motion and articulation, with the user only being able to drag to pan or rotate within a fixed range, and pinch to dolly or zoom according to the camera region. The camera regions would effectively be nested within larger camera regions. One quick example goes as such: the user is at the exterior view of the truck, they tap the door and tap enter, and the camera transitions to the interior. From here the user can hit a back button in the top right corner to go back up a level, or tap another region of the interior to produce a contextual action for traveling another layer down. An example of this could be the switch cluster, as the user might need a closer view to differentiate the individual switches or see their physical states.
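As a rough illustration of that nesting, here's a minimal C# sketch: the regions form a tree, and moving down or back up a level is just pushing and popping a navigation stack. The names (CameraRegion, CameraNavigator) are hypothetical, not the actual implementation:

```csharp
using System.Collections.Generic;

// Hypothetical sketch of the nested camera regions as a tree.
// Each region carries its own pan/rotate limits and zoom range,
// plus the child regions the user can tap into.
public class CameraRegion
{
    public string Name;
    public float MaxPanDegrees;        // drag-to-pan/rotate limit
    public float MinZoom, MaxZoom;     // pinch-to-dolly/zoom range
    public List<CameraRegion> Children = new List<CameraRegion>();
}

public class CameraNavigator
{
    private readonly Stack<CameraRegion> _stack = new Stack<CameraRegion>();

    public CameraNavigator(CameraRegion root) { _stack.Push(root); }

    public CameraRegion Current { get { return _stack.Peek(); } }

    // Tapping into a nested region (exterior -> interior -> switch
    // cluster) travels one layer down.
    public void Enter(CameraRegion child) { _stack.Push(child); }

    // The back button in the top right corner goes back up a level.
    public void Back()
    {
        if (_stack.Count > 1) _stack.Pop();
    }
}
```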
I will be elaborating on this system quite a bit more in part 2 as we get into Unity prototyping and the design of the actual systems driving the camera.
In a way, this started to resemble a point-and-click adventure game, but with somewhat different goals and priorities.
Below is the interactive Figma prototype made using the wireframes. Just start clicking and scrolling to discover its treasures or use the arrows at the bottom of the frame to flip through the states.
High-fidelity mock-ups
At this point, we were meant to have a third-party company pick up where I left off and establish the final visual design. Unfortunately, after a long period of silence, our team discovered that the third party would not be picking up the design work. I am sharing this because it had some unfortunate side effects on the overall design process and the output that came of it.
Colors and typography
This was the first aspect to suffer from the circumstances. Due to the rushed timeline our team was put on, we needed to quickly come up with a color set and typography for Inspections that would address not only the UX, accessibility, and technical needs of Inspections, but also the same needs of the VR simulation. The colors and font seen below were driven purely by the existing UI requirements of the VR component of the project: largely a combination of reducing cyber-sickness in VR and the existing fonts already being rendered in VR for the street-sign system.
To put it simply, I had no real power to decide any of the colors or fonts that were to be used in the Inspections mockups.
Layout and spacing system
Before the arrangement with the third party fell through, the plan was that they would supply the final design documents and I would implement them in Unity. The change meant that I would now have to both finish the designs and implement them in Unity, with no additional time.
Luckily, through the Unity prototyping and research I had done earlier, I had already become very familiar with Unity's new lightweight UI Toolkit. I knew that with the right simplified style and a good layout system, I could fairly rapidly design and implement the UI without any dedicated artist intervention.
The real key in this process was utilizing a pseudo 8-point grid system paired with Unity's new UI Toolkit, which leverages web-style box model layouts driven by document-based style sheets. This meant that if I used UI Toolkit properly, I could define a handful of simple color, typography, and spacing styles that would systemically drive the whole UI. This also sets the right kind of foundation for scaling the UI in the future. I will be talking more about this in part 2, where I share the Unity prototyping and final implementation process.
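To give a flavor of what that looks like in practice, here's a simplified USS (Unity's style sheet format) sketch, with made-up variable names rather than the project's actual styles: the 8-point spacing scale and shared colors are defined once as variables, then referenced everywhere via var():

```css
/* Hypothetical USS sketch: define the 8-point scale and shared colors
   once, then let every element pull from the same variables. */
:root {
    --spacing-1: 8px;
    --spacing-2: 16px;
    --spacing-3: 24px;
    --color-surface: rgb(40, 40, 45);
}

.panel {
    padding: var(--spacing-2);          /* 16px on all sides */
    margin-bottom: var(--spacing-1);    /* 8px rhythm between panels */
    background-color: var(--color-surface);
}
```

Change one variable and the whole UI shifts with it, which is exactly what makes this approach scale without a dedicated artist.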
You may notice below that the final layout has quite a few little differences from the wireframes above. These changes came from discoveries made during Unity prototyping and implementation. I will be discussing those findings in part 2!