Expand the world of perception!

Overview of the new possibilities in iOS 13

iOS devices have had access to augmented reality (AR) for some time now. When Apple started integrating specialized chips into iPhones and iPads, iOS emerged as one of the strongest augmented reality platforms of our time. Millions of iOS users gained the opportunity to experience the apps of the future, and more and more developers are extending their apps beyond the well-known interfaces towards augmented ones that blur the border between the real and virtual worlds. Regardless of whether it is a game or a utility app, you can place 2D and 3D objects inside the user's surroundings. The immersive experience gives the user the impression of operating inside the real world and stirs the imagination. It opens up a multitude of possibilities for creating apps, both commercial and non-commercial.

With the latest release of iOS, both developers and organisations have even more tools to envision and create these experiences. Why? Changes introduced across the whole technological stack have made a difference, and the overall performance is better as a result. The new AR stack covers spatial audio and photo-realistic rendering enhanced with powerful animations and physics. It comes together with basic but essential features like shadows, natural materials, blur, and camera effects. Powered by tight integration with low-level graphics frameworks, CPUs and GPUs, iOS delivers stunning effects that run smoothly and without interruptions.

Let's take a closer look at the augmented reality technology and the new possibilities introduced with the release of iOS 13.

Virtual objects

An AR scene can contain both real objects captured by the camera and virtual ones rendered by the app. Adding a virtual object to the app requires an anchor, for example an easily identifiable real object like a table, or an item delivered with the app like a game board.

iOS can now detect up to 100 images at the same time, estimate the physical scale of each image and assess its quality. This speedup was achieved by employing machine learning to facilitate plane detection and environment understanding. It is far easier now to use real objects regardless of the surrounding setup: iOS detects more objects, more accurately. Scenes can be built from ready-made compositions of objects or dynamically, using meshes, materials and textures.
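
To make this concrete, here is a minimal sketch of how an app might enable image detection with scale estimation. The "AR Resources" asset group name and the arView parameter are assumptions about the host app, not part of the article.

```swift
import ARKit
import RealityKit

// A minimal sketch: world tracking with reference-image detection.
// The "AR Resources" group name and the arView parameter are assumptions.
func runImageDetection(on arView: ARView) {
    let configuration = ARWorldTrackingConfiguration()

    if let referenceImages = ARReferenceImage.referenceImages(inGroupNamed: "AR Resources",
                                                              bundle: nil) {
        configuration.detectionImages = referenceImages
        configuration.maximumNumberOfTrackedImages = 4            // track several at once
        configuration.automaticImageScaleEstimationEnabled = true // estimate physical size
    }

    arView.session.run(configuration)
}
```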

However, there are more types of surface on which we can build the augmented world. They correspond to planes and objects commonly found indoors: tables, seats, walls, ceilings, floors, doors and windows. Plane classification is powered by machine learning, and a new ray-casting technique analyses the environment and can dynamically adjust object placement as the tracked planes change, for example in distance, perspective or size. Moreover, iOS now supports HDR (high-dynamic-range) environment textures, which improve the quality of virtual objects, especially in very bright environments. The content is more vibrant and consequently blends better with its surroundings.
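
As an illustration, placing a virtual object on a detected surface with ray casting could look roughly like the sketch below. The arView, tapLocation and entity values are assumed to come from elsewhere in the app.

```swift
import ARKit
import RealityKit

// A minimal sketch of ray casting from a screen point onto a detected plane.
// The arView, tapLocation and entity values are assumed to exist elsewhere.
func place(_ entity: Entity, in arView: ARView, at tapLocation: CGPoint) {
    // Ask ARKit for real-world surfaces along the ray through the tap point.
    let results = arView.raycast(from: tapLocation,
                                 allowing: .estimatedPlane,
                                 alignment: .horizontal)
    guard let firstResult = results.first else { return }

    // Anchor the entity at the hit location so it sticks to the surface.
    let anchor = AnchorEntity(world: firstResult.worldTransform)
    anchor.addChild(entity)
    arView.scene.addAnchor(anchor)
}
```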

Capture

An app can capture real moving objects and track their position. It is even possible to track real people who happen to be part of the scene. iOS recognises key joints of the human skeleton and different poses of the body, so it is easy to integrate physical activities with the rest of the app world: think fitness exercises, driving a virtual character's movement, gameplay with real objects, or helping a user visualize their body movement. This was not possible before without specialized, expensive equipment. Body detection works in two modes, 2D and 3D; in the 3D mode, we get extra information such as an estimate of how big the person is.
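
A minimal sketch of 3D body tracking might look like the following; a real app would map the joint transforms onto a rigged character rather than printing them.

```swift
import ARKit

// A minimal sketch of 3D body tracking; a real app would map the joints
// onto a rigged character instead of printing them.
final class BodyTrackingController: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        guard ARBodyTrackingConfiguration.isSupported else { return }
        session.delegate = self
        session.run(ARBodyTrackingConfiguration())
    }

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for case let bodyAnchor as ARBodyAnchor in anchors {
            // The skeleton exposes joint transforms relative to the body anchor.
            let skeleton = bodyAnchor.skeleton
            if let headTransform = skeleton.modelTransform(for: .head) {
                print("Head position:", headTransform.columns.3)
            }
        }
    }
}
```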

Thanks to general improvements, like the depth-of-field effect, we also gain more advanced blending with the app environment. This is possible owing to a complex algorithm that adjusts camera focus and gathers information about real objects; in this way, virtual objects behave and look like real ones. The algorithm affects not only static qualities such as lighting, but also the characteristic motion blur of fast-moving objects or a quickly moving camera. The effect is synthesized in real time and rendered on top of virtual objects. Without it, virtual objects would stand out and look out of place, eventually spoiling the whole illusion.

Every camera produces some grain, which is particularly visible in low-light conditions. Without special adjustments, virtual objects would appear unnaturally clean and stand out, degrading the whole augmented reality experience. Fortunately, the latest version of the AR stack in iOS 13 solves this problem: it extracts the grain pattern from the camera image and applies it to virtual objects to compose a scene with a homogeneous rendering quality.
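
RealityKit applies these camera effects automatically; the sketch below shows how an app could opt out of individual ones through the view's render options. The arView parameter is an assumption about how the view is created.

```swift
import RealityKit

// A minimal sketch: RealityKit applies these camera effects automatically,
// and an app can opt out of individual ones through the view's render options.
// The arView parameter is an assumption about how the view is created.
func disableCameraEffects(on arView: ARView) {
    arView.renderOptions.insert(.disableMotionBlur)    // skip motion blur
    arView.renderOptions.insert(.disableDepthOfField)  // skip depth of field
    arView.renderOptions.insert(.disableCameraGrain)   // skip grain compositing
}
```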

 

People occlusion

Augmented reality scenes involving real people pose a challenge: working out which virtual objects are behind the person and which are in front of them is not easy, and when we move, the situation keeps changing. Thanks to enhanced people detection and scene understanding, it is possible to track this automatically. Machine learning and depth estimation techniques help a lot here: Apple's specialised Neural Engine chip is in charge of the real-time detection. Consequently, there is no need to prepare a green screen to arrange the scene.
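
Enabling people occlusion comes down to a single frame semantic, roughly as in the sketch below; the arView is assumed to be a RealityKit ARView configured elsewhere in the app.

```swift
import ARKit
import RealityKit

// A minimal sketch of turning on people occlusion; the arView is assumed
// to be a RealityKit ARView configured elsewhere in the app.
func enablePeopleOcclusion(on arView: ARView) {
    let configuration = ARWorldTrackingConfiguration()
    if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
        // Segment people in the camera feed and use depth estimation so virtual
        // content renders correctly behind or in front of them.
        configuration.frameSemantics.insert(.personSegmentationWithDepth)
    }
    arView.session.run(configuration)
}
```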

 

Face tracking

Some AR experiences require the user to look at their own face, or to see their face while looking at the scene; you can also interact with the scene using your face. iOS enables you to track up to three faces simultaneously. Additionally, face tracking can be enabled together with world tracking, which means the app can use the front and back cameras at once. In this scenario, facial expressions captured by the front camera can drive content in the scene displayed in front of the user, for example animating a virtual character's expressions in a similar way to Apple's Memoji. Face tracking uses the TrueDepth camera found in the latest iOS devices, the same hardware behind the Face ID authentication mechanism that unlocks the device with just a glance.
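
A minimal sketch of face tracking with simultaneous world tracking could look like this, assuming a TrueDepth-equipped device and an arView created elsewhere.

```swift
import ARKit
import RealityKit

// A minimal sketch of face tracking with simultaneous world tracking,
// assuming a TrueDepth-equipped device and an arView created elsewhere.
func runFaceTracking(on arView: ARView) {
    guard ARFaceTrackingConfiguration.isSupported else { return }

    let configuration = ARFaceTrackingConfiguration()
    configuration.maximumNumberOfTrackedFaces = 3           // up to three faces at once
    if ARFaceTrackingConfiguration.supportsWorldTracking {
        configuration.isWorldTrackingEnabled = true          // front and back cameras together
    }
    arView.session.run(configuration)
}
```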

 

Collaborative sessions

A lot of AR apps, especially games, are designed to engage several people at the same time. This makes it challenging not only to track every single player but also to enable interaction between them and the scene. Things get even trickier when players leave the game scene and re-enter it after a while; after all, their state should remain the same. The iOS answer to these challenges is the collaborative session, which supports a consistent live experience for multiple people inside the visualized world. This shared-world setup provides a foundation for building truly interactive play, raising realism together with user involvement. Participants build a common world map by exchanging information across a peer-to-peer network. The data is sent automatically in real time between many people, not just two, and coordination is established so that control of the experience can change hands fluidly.
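
A minimal sketch of a collaborative session is shown below. Broadcasting the data over a peer-to-peer network (for example with MultipeerConnectivity) is assumed to be handled by the hypothetical sendToPeers helper.

```swift
import ARKit

// A minimal sketch of a collaborative session. Broadcasting the data over a
// peer-to-peer network is assumed to be handled by the hypothetical
// sendToPeers helper.
final class CollaborativeSessionController: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        let configuration = ARWorldTrackingConfiguration()
        configuration.isCollaborationEnabled = true   // share world data with peers
        session.delegate = self
        session.run(configuration)
    }

    func session(_ session: ARSession,
                 didOutputCollaborationData data: ARSession.CollaborationData) {
        // Serialize the collaboration data and hand it to the networking layer.
        if let encoded = try? NSKeyedArchiver.archivedData(withRootObject: data,
                                                           requiringSecureCoding: true) {
            sendToPeers(encoded)
        }
    }

    func sendToPeers(_ data: Data) { /* peer-to-peer transport goes here */ }
}
```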

 

Coaching

AR brings new kinds of experience, which is why users need some guidance on how to get started, as well as assistance while setting up a game or getting back into it after a while. For a beginner, it is not so easy to find a surface to place objects on or to detect image scales. iOS unifies this coaching with an interface that is common to users and developers: every iOS application can use the same built-in guidance, so users get used to it and developers have less work. The interface is a set of translucent elements displayed over the scene; the programmer chooses which of these ready-made elements to use and which coaching goals to set, and the overlay then appears and disappears automatically during the AR experience, whenever necessary. It is nothing like a user manual.

Preparation of the stage usually begins with finding a reference surface for the virtual world. Often a special picture or board is provided, possibly in various sizes, and not everyone has the same large table. Thanks to image scale detection, a printed image can serve as a base: iOS estimates its physical size and scales the visualized virtual world on it accordingly.
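
Adding the standard coaching overlay takes only a few lines, roughly as in the sketch below; the arView and the chosen goal are assumptions about the host app.

```swift
import ARKit
import RealityKit

// A minimal sketch of the standard coaching overlay; the arView and the
// chosen goal are assumptions about the host app.
func addCoachingOverlay(to arView: ARView) {
    let coachingOverlay = ARCoachingOverlayView()
    coachingOverlay.session = arView.session
    coachingOverlay.goal = .horizontalPlane        // guide the user to find a surface
    coachingOverlay.activatesAutomatically = true  // appears and hides on its own
    coachingOverlay.frame = arView.bounds
    coachingOverlay.autoresizingMask = [.flexibleWidth, .flexibleHeight]
    arView.addSubview(coachingOverlay)
}
```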

 

Composing

Let's be honest: it would be difficult to use the full potential of such advanced technology without tools that automate and assist in the creation of augmented reality content. Apple delivers Reality Composer, a graphical tool that enables building a scene without coding. It makes experimenting cheap and fun.

Reality Composer is a tool for application developers and creators, and it supports both iOS and macOS. Developers can create scenes ready for integration with an app, while creators can export their work to the newly designed USDZ format, which users can play back and preview.

Reality Composer comes with a library of virtual objects, animations, styles and shapes to customise and use in AR scenes. Animations can be hooked up to user interactions, and it is even possible to include audio.
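
Content built this way can then be loaded into a RealityKit scene, roughly as in the sketch below; "Scene" is a placeholder name for a .reality or .usdz asset bundled with the app, and arView is assumed to exist elsewhere.

```swift
import RealityKit

// A minimal sketch of loading content exported from Reality Composer;
// "Scene" is a placeholder asset name and arView is assumed to exist elsewhere.
func loadComposedScene(into arView: ARView) {
    do {
        let sceneEntity = try Entity.load(named: "Scene")
        let anchor = AnchorEntity(plane: .horizontal)  // place on a horizontal plane
        anchor.addChild(sceneEntity)
        arView.scene.addAnchor(anchor)
    } catch {
        print("Failed to load scene:", error)
    }
}
```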

Testing AR experiences is particularly difficult: AR requires prototyping and often continuous tweaking of scene parameters. Reality Composer can replay previously recorded AR sessions, so you can iterate on and improve a scene away from the test area where perfect testing conditions have been set up. A recording saves the contents of the scene together with sensor data, making for a complete package to work on without interruption.

The great advantage of Reality Composer is that, since it runs on iOS, you can design, improve and run scenes on the same device, your iPhone or iPad.

 
