
Mobile AR Experiments


Design & Development
Jeff Chang

My personal exploration into augmented reality with Apple's ARKit and Unity.

I'll be collecting my design learnings and notes here.


Managing Expectations

During the initialization phase, I noticed most people would spawn the visuals up close, without realizing their size. In my first iteration, users were almost always forced to take a couple of steps back, which isn't ideal. In one case, someone already had their back against the wall and was forced to move to a more comfortable spot.

Although users could always re-position the character afterwards, I’d like to give them a sense of what to expect beforehand.

My solution was to add a holographic preview that indicates size and placement. This lets users know what to expect and how much space the experience needs. It's a small thing, but one I hope will decrease frustration.
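
A minimal sketch of the idea in ARKit/SceneKit terms: a translucent clone of the character tracks the detected plane under the screen center, and confirming placement swaps it for the real thing. Names like sceneView, characterNode, and confirmPlacement() are placeholders rather than the project's actual code.

```swift
import ARKit
import SceneKit

// Sketch of a placement preview: a translucent "hologram" of the character
// follows the detected plane under the screen center until the user confirms.
final class PlacementPreviewController: NSObject, ARSCNViewDelegate {
    let sceneView: ARSCNView
    let characterNode: SCNNode
    private lazy var previewNode: SCNNode = {
        // Clone the character and render it as a hologram.
        let node = characterNode.clone()
        node.opacity = 0.35
        return node
    }()

    init(sceneView: ARSCNView, characterNode: SCNNode) {
        self.sceneView = sceneView
        self.characterNode = characterNode
        super.init()
        sceneView.scene.rootNode.addChildNode(previewNode)
        sceneView.delegate = self
    }

    // Keep the preview pinned to the detected plane under the screen center.
    func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
        DispatchQueue.main.async {
            let center = CGPoint(x: self.sceneView.bounds.midX, y: self.sceneView.bounds.midY)
            guard let query = self.sceneView.raycastQuery(from: center,
                                                          allowing: .existingPlaneGeometry,
                                                          alignment: .horizontal),
                  let result = self.sceneView.session.raycast(query).first else { return }
            self.previewNode.simdTransform = result.worldTransform
        }
    }

    // On confirmation, swap the hologram for the real character.
    func confirmPlacement() {
        characterNode.simdTransform = previewNode.simdTransform
        sceneView.scene.rootNode.addChildNode(characterNode)
        previewNode.removeFromParentNode()
    }
}
```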


Visual Indicators

Mobile AR is usually restricted to a small field of view. It’s like looking through a small window.

Unlike wearable AR, it often demands a different type of posture from the user.

Should a user ever turn around or move away, it can be hard to retrace what they were looking at. To account for this, I mocked up a visual indicator that leads users back to the main content.
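
A sketch of how such an indicator could work with SceneKit: when the content falls outside the camera's frustum, an arrow pinned near the screen edge points back toward it. sceneView, contentNode, and arrowView (assumed to be a right-pointing arrow image layered over the AR view) are placeholder names, not the mockup's actual implementation.

```swift
import ARKit
import SceneKit
import UIKit

// Call once per frame (e.g. from renderer(_:updateAtTime:)) to show an arrow
// pointing back toward off-screen content.
func updateOffscreenIndicator(sceneView: ARSCNView, contentNode: SCNNode, arrowView: UIImageView) {
    guard let pov = sceneView.pointOfView else { return }

    // Hide the arrow whenever the content is already in view.
    guard !sceneView.isNode(contentNode, insideFrustumOf: pov) else {
        arrowView.isHidden = true
        return
    }
    arrowView.isHidden = false

    // Work in the camera's coordinate space: +x is right, +y is up, -z is forward.
    let local = pov.convertPosition(contentNode.worldPosition, from: nil)

    // Direction of the content relative to the screen center, in UIKit coordinates
    // (y grows downward on screen, hence the sign flip).
    let angle = atan2(CGFloat(-local.y), CGFloat(local.x))

    // Pin the arrow on a circle near the screen edge and rotate it to point outward.
    let center = CGPoint(x: sceneView.bounds.midX, y: sceneView.bounds.midY)
    let radius = min(sceneView.bounds.width, sceneView.bounds.height) / 2 - 40
    arrowView.center = CGPoint(x: center.x + cos(angle) * radius,
                               y: center.y + sin(angle) * radius)
    arrowView.transform = CGAffineTransform(rotationAngle: angle)
}
```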


Moving Characters Across the Plane

In this exercise, I wanted to be able to place my characters and control them in a real-world environment. This involved creating UI that allowed me to spawn the tempuras, move them with a joystick, and perform a set number of actions.

For a less cluttered screen, I would consider letting the user touch and drag the characters to ‘walk’ along the plane. Tapping the characters to perform actions would be another way to reduce UI clutter.
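
A rough sketch of that gesture-driven alternative, written against ARKit/SceneKit: a pan drags a character along the detected plane, and a tap triggers an action. The node name "tempura" and the playActionAnimation(on:) hook are placeholders for the project's own character setup.

```swift
import ARKit
import SceneKit
import UIKit

// Drag a character along the detected plane with a pan; trigger an action with a tap.
final class CharacterGestureController: NSObject {
    let sceneView: ARSCNView
    private var draggedNode: SCNNode?

    init(sceneView: ARSCNView) {
        self.sceneView = sceneView
        super.init()
        sceneView.addGestureRecognizer(UIPanGestureRecognizer(target: self, action: #selector(handlePan(_:))))
        sceneView.addGestureRecognizer(UITapGestureRecognizer(target: self, action: #selector(handleTap(_:))))
    }

    @objc private func handlePan(_ gesture: UIPanGestureRecognizer) {
        let location = gesture.location(in: sceneView)
        switch gesture.state {
        case .began:
            // Pick up the character under the finger, if any.
            draggedNode = sceneView.hitTest(location, options: nil)
                .first(where: { $0.node.name == "tempura" })?.node
        case .changed:
            // Slide it along the real-world plane under the finger.
            guard let node = draggedNode,
                  let query = sceneView.raycastQuery(from: location,
                                                     allowing: .existingPlaneGeometry,
                                                     alignment: .horizontal),
                  let result = sceneView.session.raycast(query).first else { return }
            let t = result.worldTransform.columns.3
            node.simdPosition = SIMD3(t.x, t.y, t.z)
        default:
            draggedNode = nil
        }
    }

    @objc private func handleTap(_ gesture: UITapGestureRecognizer) {
        let location = gesture.location(in: sceneView)
        guard let node = sceneView.hitTest(location, options: nil)
            .first(where: { $0.node.name == "tempura" })?.node else { return }
        playActionAnimation(on: node)
    }

    private func playActionAnimation(on node: SCNNode) {
        // Stand-in action: a quick hop. The real project would trigger one of the
        // character's authored animations here instead.
        node.runAction(.sequence([.moveBy(x: 0, y: 0.05, z: 0, duration: 0.15),
                                  .moveBy(x: 0, y: -0.05, z: 0, duration: 0.15)]))
    }
}
```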


Machine Learning

I believe that powerful AR experiences can be driven by machine learning. Machine learning provides context for how visual information can be displayed. That context can come from object/image recognition, geography, and even tone of voice.

In this exploration, I used the Core ML Inception model to provide contextual information about what the corgi was ‘seeing.’ It uses Apple’s Vision framework to analyze what’s on screen and returns class labels for its best guesses.
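
The per-frame classification step might look roughly like this. The generated Inceptionv3 class name assumes the Core ML model Apple distributes has been added to the project, and onResult is a placeholder callback for displaying the label.

```swift
import ARKit
import Vision
import CoreML

// Runs each ARFrame's camera image through a Core ML classifier via Vision and
// reports the top label and its confidence.
final class FrameClassifier {
    private let request: VNCoreMLRequest

    init(onResult: @escaping (String, Float) -> Void) throws {
        let model = try VNCoreMLModel(for: Inceptionv3(configuration: MLModelConfiguration()).model)
        request = VNCoreMLRequest(model: model) { request, _ in
            guard let best = (request.results as? [VNClassificationObservation])?.first else { return }
            DispatchQueue.main.async {
                onResult(best.identifier, best.confidence)
            }
        }
        request.imageCropAndScaleOption = .centerCrop
    }

    // Call this from session(_:didUpdate:) with the latest ARFrame.
    func classify(_ frame: ARFrame) {
        let handler = VNImageRequestHandler(cvPixelBuffer: frame.capturedImage,
                                            orientation: .right, // portrait camera feed
                                            options: [:])
        DispatchQueue.global(qos: .userInitiated).async {
            try? handler.perform([self.request])
        }
    }
}
```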


Object Recognition

I wanted to take this project further by training a very simple model with Create ML to recognize the difference between sushi and pizza.

Here, the corgi now changes form depending on the type of food that is presented in front of the camera.
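
The training side can be as small as a few lines in a macOS playground with Create ML. The sketch below assumes a folder laid out as TrainingData/sushi and TrainingData/pizza, where the folder names become the class labels; the paths are placeholders.

```swift
import Foundation
import CreateML

// Train an image classifier from labeled folders of example photos.
let trainingDir = URL(fileURLWithPath: "/path/to/TrainingData")
let classifier = try MLImageClassifier(trainingData: .labeledDirectories(at: trainingDir))

// Export the model so it can be dropped into the Xcode project and queried
// through Vision in the same way as the Inception model above.
try classifier.write(to: URL(fileURLWithPath: "/path/to/SushiPizza.mlmodel"))
```

On the app side, switching the corgi's form is then just a matter of branching on the label string that Vision returns.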