Expand the world of perception

iOS 13 enhances AR capabilities with advanced tools, offering more realistic integration of virtual and real-world elements, encouraging innovative app development and user engagement.

An overview of the new possibilities in iOS 13

iOS devices have had access to augmented reality (AR) for some time now. When Apple started integrating specialised chips into iPhones and iPads, iOS emerged as one of the strongest augmented reality platforms of our time, and millions of iOS users gained the opportunity to experience the apps of the future. Developers increasingly extend their apps beyond familiar interfaces to augmented ones that blur the border between the real and virtual worlds. Regardless of whether it is a game or a utility app, you can place 2D and 3D objects inside the user’s surroundings. The immersive experience gives the user the impression of operating inside the real world and stirs the imagination. It opens up a multitude of possibilities for creating apps that can be used in both commercial and non-commercial ways.

With the latest release of iOS, both developers and organisations have even more tools to help envision and create these experiences. Changes introduced across the whole technological stack make a significant difference, and the overall performance is noticeably better. The new AR stack covers spatial audio and photo-realistic rendering enhanced with powerful animations and physics. It goes together with basic yet essential features like shadows, natural materials, blur and camera effects. Powered by tight integration with low-level graphics frameworks, CPUs and GPUs, iOS delivers stunning effects that run smoothly and without interruptions.

Let’s take a closer look at augmented reality technology and the new possibilities introduced with the release of iOS 13.

Virtual objects

An AR scene can contain both real objects captured by the camera and virtual ones rendered by the app. Placing a virtual object in the app requires an anchor: for example, an easily identifiable real object like a table, or an object designed specifically to work with the app, such as a game board.
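As a rough illustration, here is a minimal RealityKit sketch of that idea, assuming an ARView has already been set up elsewhere in the app: a simple box is attached to an anchor that ARKit places on the first suitable horizontal surface it finds.

```swift
import RealityKit
import UIKit

func addVirtualBox(to arView: ARView) {
    // Anchor the content to any horizontal plane at least 20 x 20 cm in size.
    let anchor = AnchorEntity(plane: .horizontal, minimumBounds: [0.2, 0.2])

    // A 10 cm box with a simple non-metallic material stands in for the virtual object.
    let box = ModelEntity(mesh: .generateBox(size: 0.1),
                          materials: [SimpleMaterial(color: .systemBlue, isMetallic: false)])
    anchor.addChild(box)

    // Once the anchor is added, RealityKit places the box as soon as a matching plane is detected.
    arView.scene.addAnchor(anchor)
}
```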

iOS can detect up to 100 images at the same time, together with each image’s scale and quality. This speedup was achieved by employing machine learning to facilitate plane detection and environment understanding. It is now far easier to use real objects regardless of the surrounding setup, and iOS detects more objects more accurately. Scenes can be built from ready-made compositions of objects or dynamically from meshes, materials and textures.
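A sketch of what image detection looks like in code, assuming the reference images live in an asset catalog group named "AR Resources" (a name chosen here purely for illustration):

```swift
import ARKit

func runImageTracking(on session: ARSession) {
    // Load the bundled reference images; "AR Resources" is a hypothetical group name.
    guard let referenceImages = ARReferenceImage.referenceImages(inGroupNamed: "AR Resources",
                                                                 bundle: nil) else { return }

    let configuration = ARImageTrackingConfiguration()
    configuration.trackingImages = referenceImages
    // iOS 13 raises the limit to as many as 100 simultaneously tracked images.
    configuration.maximumNumberOfTrackedImages = min(referenceImages.count, 100)

    session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}
```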

However, there are more types of surfaces on which we can build the augmented world. They correspond to planes and objects commonly found indoors: tables, seats, walls, ceilings, floors, doors and windows. What helps significantly here is a new placement technique: ray casting. It analyses the environment and can dynamically adjust an object’s placement as the tracked planes change, for example in distance, perspective or size. Moreover, iOS now supports HDR (high-dynamic-range) environment textures, which improve the quality of virtual objects, especially in very bright environments. The content is more vibrant and consequently blends better with its surroundings.
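A rough sketch of a one-shot ray cast, assuming an existing ARView; ARKit also offers tracked ray casts that keep refining the placement as the understanding of the scene improves:

```swift
import ARKit
import RealityKit

// Convert a tap in the view into a ray cast against real-world surfaces
// and anchor content at the first hit.
func placeObject(at screenPoint: CGPoint, in arView: ARView) {
    // Allow hits on estimated planes of any alignment (tables, walls, floors, ...).
    let results = arView.raycast(from: screenPoint,
                                 allowing: .estimatedPlane,
                                 alignment: .any)
    guard let firstResult = results.first else { return }

    // Anchor at the world transform reported by the ray cast.
    let anchor = AnchorEntity(world: firstResult.worldTransform)
    anchor.addChild(ModelEntity(mesh: .generateSphere(radius: 0.05),
                                materials: [SimpleMaterial(color: .white, isMetallic: false)]))
    arView.scene.addAnchor(anchor)
}
```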

Capture

An app can capture real moving objects and track their position. It is even possible to track real people who happen to be part of the scene. iOS recognises the key parts of the human skeleton and different poses of the body, making it easy to integrate physical activity with the rest of the app’s world: fitness exercises, driving a virtual character’s movement, game-play with real objects, or helping a user visualise body movement. This was not possible before without specialised, expensive equipment. Body detection works in two modes, 2D and 3D; in 3D mode we also get extra information that tells us how big the person is.
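A minimal sketch of how body tracking can be started and observed, assuming the controlling object also acts as the session delegate:

```swift
import ARKit

final class BodyTrackingController: NSObject, ARSessionDelegate {
    func startBodyTracking(on session: ARSession) {
        // Body tracking needs hardware support, so check before running.
        guard ARBodyTrackingConfiguration.isSupported else { return }
        session.delegate = self
        session.run(ARBodyTrackingConfiguration())
    }

    // Called whenever tracked anchors change; body anchors carry the detected skeleton.
    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for case let bodyAnchor as ARBodyAnchor in anchors {
            // Joint transforms are relative to the body anchor's root (the hip joint).
            if let headTransform = bodyAnchor.skeleton.modelTransform(for: .head) {
                print("Head position relative to root:", headTransform.columns.3)
            }
        }
    }
}
```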

Thanks to general improvements, like depth-of-field effects, we also gain more advanced blending with the app environment. This is possible due to a sophisticated algorithm that adjusts camera focus and gathers information about real objects. In this way, virtual objects behave and look as though they were real. The algorithm affects not just the static look, such as lighting, but also the characteristic blur of fast-moving objects or a quickly moving camera. The effect is synthesised in real time and rendered on top of the virtual objects. Without it, virtual objects would stand out and look out of place, eventually spoiling the illusion.

Every camera produces some grain, which is particularly visible in low-light conditions. Without special adjustments, virtual objects would look unnaturally clean and stand out, deteriorating the whole augmented reality experience. Fortunately, the latest version of the AR stack in iOS 13 solves this problem: it extracts the grain pattern from the camera image and applies it to virtual objects, composing a scene with a homogeneous quality of rendering.
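These camera-matching effects are applied automatically; RealityKit exposes switches only for turning them off. A small sketch, assuming an existing ARView, of disabling the motion blur, depth of field and grain described in the last two paragraphs, which can be useful when debugging why a virtual object looks wrong:

```swift
import RealityKit

func disableCameraMatchingEffects(for arView: ARView) {
    // Each render option is a "disable" flag; the effects are on by default.
    arView.renderOptions.insert(.disableMotionBlur)
    arView.renderOptions.insert(.disableDepthOfField)
    arView.renderOptions.insert(.disableCameraGrain)
}
```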

People occlusion

Augmented reality scenes involving real people pose a challenge. Identifying which objects are behind and in front of a person is not easy, and when we move, the situation changes dynamically. Thanks to enhanced people detection and scene understanding, it is possible to track the movement automatically. Machine learning and depth-estimation techniques help a lot here. Apple uses a specialised chip, the Neural Engine, which is in charge of real-time detection. Consequently, there is no need to prepare a green screen to facilitate scene arrangement.
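A minimal sketch of enabling people occlusion on a world-tracking session; the support check matters because the feature needs recent hardware with the Neural Engine:

```swift
import ARKit

func enablePeopleOcclusion(on session: ARSession) {
    // Person segmentation with depth is only available on devices with a Neural Engine.
    guard ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) else {
        return
    }

    let configuration = ARWorldTrackingConfiguration()
    configuration.frameSemantics.insert(.personSegmentationWithDepth)

    session.run(configuration)
}
```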

Face tracking

In AR apps, some experiences require the user to look at their own face or to see their face while looking at the scene, and you can also interact with the scene using your face. iOS enables you to track up to three faces simultaneously. Additionally, face tracking can be activated together with world tracking, which means the app can use the front and back cameras at once. In this scenario, facial traits can be reflected in the scene displayed in front of the user; for example, you can drive the expressions of a virtual character, much like Apple’s Memoji. Face tracking uses the TrueDepth camera found in the latest iOS devices, the same hardware behind the Face ID authentication mechanism that unlocks the device with a glance.
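A short sketch of configuring face tracking, including the optional combination with world tracking described above:

```swift
import ARKit

func runFaceTracking(on session: ARSession) {
    // Face tracking needs a TrueDepth camera.
    guard ARFaceTrackingConfiguration.isSupported else { return }

    let configuration = ARFaceTrackingConfiguration()
    // iOS 13 can follow several faces at once; three is the maximum.
    configuration.maximumNumberOfTrackedFaces = 3

    // Optionally combine the front-camera face feed with back-camera world tracking.
    if ARFaceTrackingConfiguration.supportsWorldTracking {
        configuration.isWorldTrackingEnabled = true
    }

    session.run(configuration)
}
```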

Collaborative sessions

A lot of AR apps, especially games, are designed to engage several people at the same time. It may seem challenging not only to track every single player but also to enable interaction between them and the scene. Even more complexity is added when players leave the scene and re-enter it after some time: the state of their experience should remain the same. The iOS answer to these challenges is the collaborative session, which supports a consistent live experience for multiple people inside the visualised world. This shared-world setup provides a foundation for genuinely interactive play, raising realism together with user involvement. Participants build a common world map by exchanging information across a peer-to-peer network. The data is sent automatically, in real time, between multiple people, not just two, and coordination is established so that control of the experience can change hands fluidly.
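A rough sketch of the session side of such a setup; the `send` closure stands in for whatever networking transport the app uses (for example MultipeerConnectivity) and is an assumption made here for illustration:

```swift
import ARKit

final class CollaborationController: NSObject, ARSessionDelegate {
    // Hypothetical hook for the app's peer-to-peer transport layer.
    var send: ((Data) -> Void)?

    func startCollaborativeSession(on session: ARSession) {
        let configuration = ARWorldTrackingConfiguration()
        configuration.isCollaborationEnabled = true
        session.delegate = self
        session.run(configuration)
    }

    // ARKit periodically produces collaboration data that must reach the other peers.
    func session(_ session: ARSession, didOutputCollaborationData data: ARSession.CollaborationData) {
        if let encoded = try? NSKeyedArchiver.archivedData(withRootObject: data,
                                                           requiringSecureCoding: true) {
            send?(encoded)
        }
    }

    // Data received from a peer is fed back into the local session.
    func receive(_ encoded: Data, into session: ARSession) {
        if let data = try? NSKeyedUnarchiver.unarchivedObject(ofClass: ARSession.CollaborationData.self,
                                                              from: encoded) {
            session.update(with: data)
        }
    }
}
```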

Coaching

AR brings new kinds of experiences, which is why users need some guidance on how to start using them, as well as assistance while setting up a scene or getting back into it after some time. For a beginner, it is not easy to find a surface to place objects on or to detect an image’s scale. iOS unifies this coaching through an interface that is common to users and developers: every iOS application can use the same coaching overlay, so users get used to it and developers have less work to do. The overlay is a set of translucent elements displayed over the scene, and the developer chooses which of these ready-made elements to use; in no way does it resemble a user manual. Its unquestionable advantage is that you can set your own coaching goals, and the overlay appears and disappears automatically during an AR experience whenever it is needed, as the sketch below shows.

Image scale detection also helps here: a printed image can be used as a base, and the virtual world adjusts to the estimated physical size of that image. Preparing a scene usually begins with finding a reference surface for the virtual world; often a special picture board is provided, printed in various sizes, because not everyone has the same large table. iOS can assess the size of this surface and scale the visualised virtual world on it accordingly.
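A minimal sketch of adding the standard coaching overlay to a RealityKit view and letting it manage itself:

```swift
import ARKit
import RealityKit
import UIKit

func addCoachingOverlay(to arView: ARView) {
    let coachingOverlay = ARCoachingOverlayView(frame: arView.bounds)
    coachingOverlay.autoresizingMask = [.flexibleWidth, .flexibleHeight]

    // Attach the overlay to the running session and state the coaching goal.
    coachingOverlay.session = arView.session
    coachingOverlay.goal = .horizontalPlane       // other goals include .anyPlane, .verticalPlane, .tracking
    coachingOverlay.activatesAutomatically = true // appears and disappears as tracking quality changes

    arView.addSubview(coachingOverlay)
}
```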

Composing

It would be difficult to use the full potential of such advanced technology without tools that automate and assist in the creation of augmented reality content. Apple delivers Reality Composer, a graphical tool that enables building a scene without writing code. It makes experimenting cheap and fun.

Reality Composer is a tool for both application developers and content creators, and it supports both iOS and macOS. Developers can create scenes that are ready for integration with an app, while creators can export their work to the new USDZ format, which users can preview and play back.
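As a rough illustration of that integration, the sketch below loads an exported scene at runtime, assuming a file named "Experience.reality" (a hypothetical name) has been added to the app bundle; Xcode can also generate typed Swift accessors for Reality Composer projects.

```swift
import RealityKit

func loadComposedScene(into arView: ARView) {
    do {
        // Loads the file's root anchor together with its content.
        let sceneAnchor = try Entity.loadAnchor(named: "Experience")
        arView.scene.addAnchor(sceneAnchor)
    } catch {
        print("Could not load the Reality Composer scene:", error)
    }
}
```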

Reality Composer comes with a library of virtual objects, animations, styles and shapes to customise and use in AR scenes. Animations can be hooked up to user interactions, and it is even possible to include audio.

Testing AR experiences is particularly tricky: AR requires prototyping and often continuous tweaking of scene parameters. Reality Composer can replay previously recorded AR sessions, so you can iterate on and improve them away from the test area where perfect testing conditions were set up. When recording, it saves the contents of the scene together with the sensor data, giving you a complete package to work on without interruption.

The great advantage of Reality Composer is that since it supports iOS, you can design, improve and run your scenes on the same device: your iPhone or iPad.
