Augmented Reality is already helping businesses create memorable experiences for their customers and employees. But moving from a 2D user interface to a 3D immersive experience is a different world of design and development. So, one of our mobile developers (and resident Augmented Reality expert) breaks down how to approach AR app development.

What is AR and what kind of problems is it best suited to?

Augmented Reality (AR) is a way of superimposing dynamic content, especially 3D content, onto your view of the real world. At the moment there are two main types of AR:

  • Showing the content reflected in a transparent “heads-up display”, like the Microsoft HoloLens and the Magic Leap One. This currently requires expensive hardware and can be difficult to interact with.
  • Showing the content on a phone screen via the phone’s camera, like iOS’s ARKit and Android’s ARCore. This only requires a relatively high-end mobile device, and interacting with it is as simple as using a modern touchscreen.

We have found that there are two major applications of AR:

  • Recognising an image or existing real-world object in the camera view and then presenting the user with additional information based on the detected item.
  • Locking a 3D model in position in the real world so that the user can view it from any angle simply by moving the device around it.

In general, our research has found that the most common applications for AR are centred around cases where the user requires additional information, particularly about their immediate area, e.g. guidance towards a destination.

Read our recap on Augmented Reality Uses.

Skills Needed for Augmented Reality App Development

Moving from creating a two-dimensional user interface for an app to a three-dimensional environment is literally a different world of design and development. Instead of placing views on a screen, we have to place nodes in three-dimensional space. This is complicated by the fact that the app must first detect suitable surfaces on which to place AR content. Most forms of AR can currently detect horizontal and vertical surfaces, as well as pre-registered images. Once an app has detected a suitable surface, it will then either place content automatically, or it can prompt the user to specify a point where the content should be placed.
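On iOS, for example, that flow looks roughly like the sketch below, which asks ARKit to detect surfaces and then lets the user tap to choose where content goes. This is a minimal sketch rather than a definitive implementation; the class name and placed geometry are illustrative.

```swift
import ARKit
import SceneKit
import UIKit

class ARPlacementViewController: UIViewController, ARSCNViewDelegate {
    @IBOutlet var sceneView: ARSCNView!

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.delegate = self
        sceneView.addGestureRecognizer(
            UITapGestureRecognizer(target: self, action: #selector(handleTap)))
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        // Ask ARKit to detect both horizontal and vertical surfaces.
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = [.horizontal, .vertical]
        sceneView.session.run(configuration)
    }

    // Called when ARKit anchors a newly detected surface in the scene.
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let plane = anchor as? ARPlaneAnchor else { return }
        print("Detected a \(plane.alignment == .horizontal ? "horizontal" : "vertical") surface")
    }

    // Let the user pick the point on a detected surface to place content.
    @objc func handleTap(_ gesture: UITapGestureRecognizer) {
        let point = gesture.location(in: sceneView)
        guard let hit = sceneView.hitTest(point, types: .existingPlaneUsingExtent).first else { return }
        let node = SCNNode(geometry: SCNSphere(radius: 0.05)) // placeholder content
        node.simdTransform = hit.worldTransform
        sceneView.scene.rootNode.addChildNode(node)
    }
}
```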

This content takes the form of 3D models, which usually have to be created outside of the app with professional 3D/CAD software. The native OS itself is only able to generate very simple geometric shapes like cuboids, ovoids and cylinders. On iOS, for example, these primitives come from SceneKit, and familiarity with that framework is a crucial skill for anyone working in AR on iOS.
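As a rough illustration, SceneKit can generate a primitive in a couple of lines, while anything more detailed has to be modelled externally and bundled with the app. The `chair.scn` asset below is a hypothetical name, not a real resource.

```swift
import SceneKit

// A simple primitive generated in code: fine as a placeholder,
// but too crude for most real AR content.
let box = SCNBox(width: 0.1, height: 0.1, length: 0.1, chamferRadius: 0.01)
let boxNode = SCNNode(geometry: box)
boxNode.position = SCNVector3(0, 0, -0.5) // half a metre in front of the camera

// A detailed model created in external 3D/CAD software and bundled
// with the app. "chair.scn" is a hypothetical asset name.
let chairNode: SCNNode? = SCNScene(named: "art.scnassets/chair.scn")?
    .rootNode.childNode(withName: "chair", recursively: true)
```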

Depending on the format of the 3D content, the app developer may also have to set up the materials manually, including the diffuse (basic colour), transparency, reflectivity and luminance properties. Once this is set up, the app has to add lighting to the scene so that the 3D models render realistically. This lighting can come included with the model, or it can be based on the actual ambient lighting conditions.
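To make that concrete, here is a minimal sketch of configuring a material and the two lighting approaches with SceneKit and ARKit. The function name and property values are purely illustrative.

```swift
import ARKit
import SceneKit
import UIKit

// Manually configure a node's material, then light the scene.
func styleAndLight(_ node: SCNNode, in sceneView: ARSCNView) {
    let material = SCNMaterial()
    material.diffuse.contents = UIColor.red       // basic colour
    material.transparency = 0.9                   // slightly see-through
    material.specular.contents = UIColor.white    // reflective highlights
    material.emission.contents = UIColor.darkGray // luminance (self-illumination)
    node.geometry?.materials = [material]

    // Option 1: estimate lighting from the actual ambient conditions.
    let configuration = ARWorldTrackingConfiguration()
    configuration.isLightEstimationEnabled = true
    sceneView.session.run(configuration)
    sceneView.autoenablesDefaultLighting = true

    // Option 2: add an explicit light to the scene.
    let lightNode = SCNNode()
    lightNode.light = SCNLight()
    lightNode.light?.type = .omni
    lightNode.position = SCNVector3(0, 1, 0)
    sceneView.scene.rootNode.addChildNode(lightNode)
}
```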

A lot of AR apps also use animation to further enhance realism. The animation should be included in the 3D model data, so the app developer simply needs to trigger it when necessary. Any audio used to enhance the AR experience must also be synced with the displayed content. Sometimes AR apps also have to detect specific items. This is called “image recognition”, and there are generally two approaches:

  • Recognising specific predetermined images, e.g. a photo in a book.
  • Recognising an arbitrary item, e.g. an apple or a banana.

Predetermined images are used when accurate positioning or identification is required, e.g. showing a model of a specific table at a specific place in a book. Arbitrary image recognition is achieved via machine learning and requires a lot of training data, but it can be more useful because it is more tolerant of environmental factors such as lighting conditions, focus and obscured views.
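For the predetermined case, ARKit’s reference-image detection gives a feel for the workflow. This is a minimal sketch, assuming the target images have been registered in an asset catalogue group named “AR Resources”; the class name is illustrative.

```swift
import ARKit
import SceneKit
import UIKit

class ImageDetectionViewController: UIViewController, ARSCNViewDelegate {
    @IBOutlet var sceneView: ARSCNView!

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        sceneView.delegate = self
        // "AR Resources" is an assumed asset catalogue group name.
        guard let images = ARReferenceImage.referenceImages(
            inGroupNamed: "AR Resources", bundle: nil) else { return }
        let configuration = ARWorldTrackingConfiguration()
        configuration.detectionImages = images
        sceneView.session.run(configuration)
    }

    // Called when one of the registered images is recognised in view.
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let imageAnchor = anchor as? ARImageAnchor else { return }
        print("Recognised image: \(imageAnchor.referenceImage.name ?? "unnamed")")
        // 3D content can now be attached to `node`, positioned on the image.
    }
}
```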

Approaching AR App Development

As with any app we make, the first questions we ask are always:

  • What platforms will the app be used on? (iOS or Android)
  • What devices will this be used on? (phones or tablets)

Both iOS and Android have native AR libraries that can be used to create AR apps: ARKit and ARCore respectively. These libraries offer a largely parallel experience on both platforms. However, each platform still has to be developed separately, which can be inefficient, especially if the app does the same thing on both. In cases like this, 3rd party platforms such as Unity can help by providing a single place to develop AR apps, with one project simply compiled separately for each mobile OS.

Using a 3rd party platform like Unity has its own caveats. For example, it is written in C#, so a mobile developer who is more comfortable with Swift, Objective-C, Java or Kotlin may not be fluent in the language, although it will still be familiar to them. In return, you save the development time of building a separate app on each platform, because both are produced from a single project. This is the AR equivalent of something like Cordova or React Native, where the same code is used to make two apps.

If the app uses any image recognition, there are services such as Vuforia that work within Unity to provide pre-trained image recognition. However, more advanced features, such as machine learning-based image recognition, sometimes require the app to be built natively on both platforms.

The Future of AR App Development

Despite the poor reception to Google Glass, there is a strong suspicion in the industry that a lot of major companies are currently developing AR glasses. The announcement earlier this year of the Intel Vaunt addressed the most common critique of AR glasses, which is that they are fundamentally ugly. The Vaunt was aesthetically similar to conventional spectacles. Although Intel cancelled the project, it demonstrated that the technology was possible.

There is some evidence to support the idea that Apple is working on its own AR glasses: ARKit 2 includes eye tracking. It is a common tactic for Apple to release APIs for developers to experiment with and get accustomed to before announcing hardware. Apart from Apple’s emoji avatars, there doesn’t appear to be a clear and obvious use for eye tracking in iOS at the moment.
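As a rough illustration of what that API exposes, here is a minimal sketch of reading ARKit 2’s gaze data on a TrueDepth-equipped device; the class name is illustrative.

```swift
import ARKit
import SceneKit
import UIKit

// Requires a device with a TrueDepth camera (iPhone X and later).
class GazeViewController: UIViewController, ARSCNViewDelegate {
    @IBOutlet var sceneView: ARSCNView!

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        guard ARFaceTrackingConfiguration.isSupported else { return }
        sceneView.delegate = self
        sceneView.session.run(ARFaceTrackingConfiguration())
    }

    // Called as the tracked face (and gaze) updates each frame.
    func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
        guard let face = anchor as? ARFaceAnchor else { return }
        // lookAtPoint estimates where the eyes converge, in face space.
        print("Gaze point: \(face.lookAtPoint)")
    }
}
```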


What’s next?

So, you’re interested in developing an Augmented Reality app to help you connect with customers or employees. But where do you start? Building AR apps means building for a variety of software and hardware platforms, and a familiarity with all of these is crucial. At Sonin App Development, we have experience in quickly creating interactive prototypes, which has proven essential in a young industry like AR, where practices are not as established as in other market sectors.

If you’re interested in developing an AR app, then we’d love to hear from you. Give us a call on 01737 45 77 88 or send us a message using the form below.