Design Challenge - Wheel Simulator

Making something interesting out of vague requirements

This is a technical design challenge I was given during a job application process at a VR (virtual reality) focused company. I was given a few days to work on my submission. While I didn't have time to implement every feature I initially envisioned, I completed most of the work I intended within the timeframe, and I'm quite proud of the outcome.

The Challenge

Let's start off by sharing the challenge criteria I was given, verbatim:

Design Technical Test

  1. Create a scene with a controllable avatar
  2. Create a platform with a button that the avatar can press
  3. Create a Proxy wheel.
  4. Have the first button press cause the proxy (temporary) wheel to rotate and move forward.
  5. Create a fully realized (texture/modeled) wheel.
  6. Have the second button press cause the proxy wheel to change to the fully realized wheel but keep the same animation.
  7. Have a third button press change the wheel back to the proxy wheel and do the animation again.

Bonus points:

Rig the wheel so it uses a boned animation (you can make the wheel more complicated if you want to support this)

If you would like to skip the process and go directly to the final Unity deliverables, please click here.

As you can see, the provided criteria left a lot of room for questions, so before commencing work I emailed my questions to the manager.

My questions

My first question (about whether the avatar needed to be a VR avatar) received the answer: "There is no requirement. It does not have to be a VR avatar. We’re not looking for your capability in VR, we’re looking at your capability in Unity."

For every other question I asked, I received the same answer, copy-pasted as a reply: "This is ambiguous on purpose. 😊"

While the abundance of ambiguity was unexpected, I recognize an invitation for creative liberty when I see one, and I was ready to design and develop.

Spongebob walking off while saying "I'm Ready"
I'm Ready

The design process

In an ideal world, the design process starts with a discovery phase to determine the user's needs or wants through research and analysis. In this case, though, I was given a set of ambiguous criteria to fulfil. While this isn't the ideal circumstance in which to start the design process, it does happen, and we can still make the best of it.

I decided to set two goals for myself at the start of the project. These goals acted as my guiding principles as I made decisions. While they may not sound like much, they helped me move forward at every step, and for that reason it was important to establish them and write them down.

"Create a scene with a controllable avatar"

Since the company whose design challenge I was working on and I share a common background in VR experiences, I naturally wondered whether this criterion was given to test my ability to independently create a functional VR project. When I asked about this, I received the only explicit response I was able to get: this criterion was there to test my capabilities in Unity, not specifically VR.

Evaluating avatar types

With that in mind, I considered the three common forms of "controllable avatar" that I felt were applicable to this challenge.

First-person VR

Early on, it was clear that nothing suggested first-person VR would add any value to the experience. If anything, it would create a less accessible experience, since someone who wanted to run a first-person VR project would have more equipment requirements (a VR-ready computer setup). While I was willing to assume that most of those who would receive the project had decent GPUs (GTX 1050 Ti or better), I was not willing to assume they all had VR-ready setups. At a later step in the interview process, both of these assumptions were validated. In addition to the benefit of improved accessibility, a non-VR approach would also reduce challenges during development, as in my experience VR often introduces additional complexities to the development process.

Third-person non-VR

Between first-person and third-person non-VR, I gave special consideration to the implementation feasibility and the UX (user experience) of each. A third-person camera had a few drawbacks when evaluated against this criterion. It would require a player model, which would either have to be very crude (like a primitive object) or an actual humanoid (or equivalent) model with rigging and basic animations. Even if that challenge were overcome through the addition of an external package, there would still be challenges around camera positioning, camera clipping, and the player model blocking the view of interactable elements. Third-person cameras have their strengths and their appropriate applications, but this was clearly not one of them.

Screengrab of Unity's third-person avatar asset package
Even if someone else (the package author) did most of the hard work for me, what was the benefit of introducing all this complexity and nonsense just for the user to ignore it and interact with a single wheel?
First-person non-VR

By contrast, a first-person camera would remove all the above challenges while elevating the UX: it makes it easier to focus on the interactable elements and heightens the experience by giving the user a sense of being present in the virtual space.

Since this was first and foremost a Technical Design challenge, I opted to use Unity's First Person Character Controller Starter Asset to address this criterion. This meant I could focus my programming efforts on more custom functionality, rather than building a generic first-person player avatar from scratch for essentially no added value.

2D format

I also briefly considered a 2D-based implementation, but given that this company's employees worked almost exclusively in 3D, choosing a 2D format seemed risky: it could come off as creative, or it could come off as inappropriate for the context.

"Create a platform with a button that the avatar can press"

Creating the platform

For the creation of the platform, I didn't want to do the classic "random plane floating in the middle of the default Unity environment" setup. I opted instead for a floating platform in a mysterious abyss. I know this project is implied to be a prototype, but the word prototype was never explicitly used, so I used the creative freedom the ambiguity granted me to put in some extra effort, care and attention, so it wouldn't feel like an ugly placeholder prototype.

3D platform in a black void
Modelled as modular pieces in Blender, then assembled in Unity. Uses a simple triplanar material included in the Unity starter asset pack.

Creating the button

While I could have easily obtained a premade button asset, I decided to create a button from scratch. This challenge was the perfect opportunity to make the button interaction as tactile and satisfying as possible. I opted for a simple push button design and tuned the size for a good balance of visibility and elegance. I applied a "medium poly" approach to the modelling process, just as I would for the fully realized wheel.

Shaded wireframe of a 3D button
A simple, but elegant, two piece button modelled in Blender
Button material

I decided on a uniform material style that the upcoming wheel model would also be authored in. The button itself has albedo, roughness and metallic values that imitate a glossy plastic. The button border uses values that resemble a matte gold finish.

PBR material reference sheet
I like to use references when possible for defining PBR materials
Button feel

I particularly enjoy fine-tuning user interactions for affordance, so that they feel satisfying. Affordance is an object's property that prompts possible actions; it provides cues for user interaction, whether physical or digital. The sensation we associate with how interacting with a button "feels" is the accumulation of haptic, visual and auditory responses to the user's input.

Gif of a mechanical switch being pressed
Think of the factors that make a keyboard enjoyable for you to type on

While I wouldn't have any say over haptic feedback, since this would be running on PC and controlled with a keyboard and mouse, I could still fine-tune the visual and auditory feedback to replicate the satisfaction of pressing a nice button.

First, I created both the press and release keyframe animations. I tuned the animations to closely mimic a hand confidently pressing down on the button. This meant that the button would have a swift animation with slight ease and an overshoot — and definitely not a "snap". These animations are then synced with the state of the user's keypress. If the user holds down their physical key on their keyboard, the virtual button would remain pressed until the user releases their physical key. These animation states are managed and transitioned via an animator.

Animation curve
The resulting animation curve for pressing in the button

The tuned animations are also paired with separate "thump" and "click" sound effects that provide the complementary auditory feedback.

Gif of 3D button animation
This gif will never do it justice. Please try the demo yourself (towards the end of this post) for the full satisfying experience.
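
For illustration, here's a minimal sketch of how the press/release animations and sounds can be driven by the user's input state. All names (the "Pressed" parameter, the clip fields) are hypothetical, not the project's actual code:

    using UnityEngine;

    // Hypothetical sketch: sync the virtual button's animation and audio
    // with the state of the user's physical input.
    public class PushButton : MonoBehaviour
    {
        [SerializeField] Animator animator;      // animator with a bool parameter "Pressed"
        [SerializeField] AudioSource audioSource;
        [SerializeField] AudioClip thumpClip;    // played when the button goes down
        [SerializeField] AudioClip clickClip;    // played when the button comes back up

        void Update()
        {
            // Hold-to-press: the virtual button stays down while the input is held.
            if (Input.GetMouseButtonDown(0))
            {
                animator.SetBool("Pressed", true);
                audioSource.PlayOneShot(thumpClip);
            }
            else if (Input.GetMouseButtonUp(0))
            {
                animator.SetBool("Pressed", false);
                audioSource.PlayOneShot(clickClip);
            }
        }
    }

The animator handles the eased press/release transitions between the two states, so a script like this only needs to report the input state.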

"Create a Proxy wheel"

My interpretation of the term "proxy wheel" was a placeholder version of the fully realized wheel I would go on to create. Rather than using the basic default cylinder with non-uniform scale, I decided to custom model a proxy wheel in Blender to fit perfectly over the fully realized wheel.

Screenshot of 3D cylinder
It's... a proxy wheel. Woo 😐

"Have the first button press cause the proxy (temporary) wheel to rotate and move forward"

Here's a criterion that initially seemed straightforward to me, until I broke it down.

"Have the first button press cause..." —  That's simple enough, you press the button and it triggers an action. "... cause the proxy (temporary) wheel to rotate and move forward" — Okay, the wheel starts rotating and moves forward as it rotates, like a wheel on a vehicle rolling forward. While making this happen is easy enough to accomplish, we now have to deal with the fact that the wheel is indefinitely moving forward. Should the player chase the wheel? Does it loop back? The idea of the wheel infinitely traveling in the same direction is impractical due to the implications of having the wheel constantly traveling away from the user and the button.

I decided to go for the looping option, but wanted to put a twist on it. I've always enjoyed working with dynamic, physics-enabled interactions. I had first pictured the wheel following a looping track, and then it hit me: why does the track have to be flat? Why does the wheel need to be on rails? I saw an opportunity to have the wheel move forward inside a looped track, constrained to one plane of movement but free to roll and bounce around within the track as physics allowed. This decision made my project more engaging to work on, test, and play with.

A loop shaped 3D platform
Modelled in Blender, uses a simple triplanar material included in the Unity starter asset pack

To optimize and simplify the physics calculations, I also imported a separate mesh that would serve as the collider for the track.

An inverted loop shaped mesh
An optimized inverted mesh for the physics collider

I then spent some time setting up the wheel with the right collider, rigidbody and independent scripts. Then I defined and tweaked a "rubber" physics material to create a convincing replication of a wheel with properly inflated tires bouncing around. The "animation" I designed was simply a set amount of rotational torque applied for a set duration, with a limited top speed in case the user spams the button that controls the wheel. This made for a wheel that behaved dynamically in response to the timing and frequency of the user's button presses.

Screenshot of physics component settings in Unity panel
The basic ingredients of a physics enabled wheel, tuned to look believable and interesting to engage with
Gif of a wheel rolling in the loop shaped platform
Put it all together and you get these wonderful dynamic "animations" of the "wheel moving forward"
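
As a minimal sketch (with hypothetical names and values), the torque-for-a-duration behaviour with a capped top speed could look something like this:

    using System.Collections;
    using UnityEngine;

    // Hypothetical sketch: apply rotational torque for a set duration,
    // clamping the angular speed so button spamming can't accelerate
    // the wheel indefinitely.
    public class WheelRoller : MonoBehaviour
    {
        [SerializeField] Rigidbody body;
        [SerializeField] float torque = 50f;          // roll strength
        [SerializeField] float duration = 1.5f;       // seconds of applied torque
        [SerializeField] float maxAngularSpeed = 20f; // top spin speed (rad/s)

        public void Roll()
        {
            StopAllCoroutines();
            StartCoroutine(ApplyTorque());
        }

        IEnumerator ApplyTorque()
        {
            for (float elapsed = 0f; elapsed < duration; elapsed += Time.fixedDeltaTime)
            {
                // Spin around the wheel's axle; the rigidbody, colliders and
                // "rubber" physics material handle the actual movement.
                body.AddTorque(transform.right * torque, ForceMode.Acceleration);
                body.angularVelocity = Vector3.ClampMagnitude(body.angularVelocity, maxAngularSpeed);
                yield return new WaitForFixedUpdate();
            }
        }
    }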

"Create a fully realized (texture/modeled) wheel"

This criterion was fairly easy to interpret. The only decision I really needed to make was how to approach it. Even though I am a technical designer first and foremost, I have a history of wearing many hats and know my way around 3D asset creation. I decided to take the harder path and let some of that experience shine: rather than acquiring a license for a pre-existing 3D wheel model, I made a purposeful decision to create the asset myself from scratch. I grabbed a generic five-spoke rim design as my reference, and off I went.

A shaded wireframe of a 3D wheel
The "medium poly" wheel modelled in Blender. The center cap is not ideal, but I decided to leave it alone to spend my time on higher priorities

I opted for a "medium-poly" model with no normal maps. While I wanted to create this asset from scratch, this was not a 3D art challenge, so I chose a simplified workflow and art style that allowed me to produce a high-quality asset without turning it into a time-sink. "Medium-poly" here refers to a low-poly model with just enough supporting edges to let the mesh normals resemble the desired geometry, without any subdivision or baking from high to low poly. Avoiding subdivision means I can control the geometry density more finely, and the sparing use of supporting edges means I can add curvature where needed without relying on high-to-low-poly baking and all the normal map quirks and issues that come along for the ride.

Fully textured and lit version of the same wheel
The finished product with PBR materials. The button and this wheel share one texture set

To clarify, this is not an approach I would take in a production environment unless very specific requirements or limitations called for it. However, for this challenge, where I had a simple hard-surface asset, it was an ideal way to strike a nice balance between speed of output, performance and fidelity.

"Have the second button press cause the proxy wheel to change to the fully realized wheel but keep the same animation"

This was where the phrasing of the criteria started to really test my interpretations. Looking at the beginning of the sentence, "Have the second button press cause...", it's easy to jump to the conclusion that a second button with a second purpose is being requested. Upon closer inspection, though, the phrasing says the second button... press. I interpreted this as the second time the user presses the button, rather than as a second, separate button to press. Based on this interpretation, I assigned a second, new function to the single button.

As for the action, "...cause the proxy wheel to change to the fully realized wheel but keep the same animation" — the phrasing of "change to" left some ambiguity, but it was clear that some form of swap/transition was expected to be triggered by the second button press.

Gif of 3D wheel transitioning between proxy and full state
A simple boolean parameter that triggers the appropriate animation transition makes for a simple, reliable and independent visual state control

"Have a third button press change the wheel back to the proxy wheel and do the animation again"

This criterion gives us the third function that the button will serve.

The complete button press sequence

  1. Proxy wheel moves forward
  2. Proxy wheel transitions to fully realized wheel, without interrupting the ongoing animation
  3. Fully realized wheel transitions back to proxy wheel, while the move forward animation is triggered again. The next press loops back to the first action
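
As a minimal sketch (again with hypothetical names), a single press counter is enough to cycle one button through all three actions, with a boolean animator parameter driving the proxy/fully-realized swap:

    using UnityEngine;

    // Hypothetical sketch: one button cycling through the three actions.
    public class ButtonSequence : MonoBehaviour
    {
        [SerializeField] WheelRoller wheel;      // from the earlier sketch
        [SerializeField] Animator wheelAnimator; // bool "FullyRealized" drives the swap

        int pressCount;

        public void OnButtonPressed()
        {
            switch (pressCount % 3)
            {
                case 0: // proxy wheel rolls forward
                    wheel.Roll();
                    break;
                case 1: // swap to the fully realized wheel; the physics keep rolling
                    wheelAnimator.SetBool("FullyRealized", true);
                    break;
                case 2: // swap back to the proxy wheel and roll again
                    wheelAnimator.SetBool("FullyRealized", false);
                    wheel.Roll();
                    break;
            }
            pressCount++;
        }
    }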

What didn't make it into the experience

Among the ideas I envisioned but didn't have the capacity to build into the experience, the most significant was an onboarding (tutorial) sequence. I wanted a simple interactive sequence at the beginning, asking the first-time user to look around with the mouse, move to a certain point with the WASD keys, and press a button to reveal the main experience.

This evolved into a little sign that says "click me" next to the button (you press the button with the left mouse button) and a reliance on the assumption that most users who try this will already be familiar with the standard WASD control scheme. It's not ideal, but given the time constraints it was a necessary simplification.

The experience

Below you can find a link to download the Windows version of Wheel Simulator. If you don't have the time or equipment to run it, I have also provided a screen capture.

I highly recommend that you try experiencing it yourself!

Controls

  WASD keys: move
  Mouse: look around
  Left mouse button: press the button
