Get started with building apps for spatial computing

WWDC23 Video
Notes

Overview

This video has four sections.

The "Fundamentals" section covers a number of topics: Foundational Elements, Input Methods, Collaboration, Privacy, and Developer Tools.

The "Where to start" section is a short introduction to the demo projects and a brief description of how iOS and iPadOS apps work on visionOS.

The "How to build" section looks at the Hello World demo project in detail.

The "Next steps" section lists other spatial computing videos to watch.

Fundamentals

Foundational Elements

Windows, Volumes, and Spaces are the foundational elements of spatial computing. There are two types of spaces: shared and full.

There is one "shared space" for visionOS. Your app launches into it by default, and apps live side by side, as on the Mac. Your app can instead go into a "full space," where only it is visible. You can use passthrough and ARKit in a full space.

Your app can consist of multiple windows and volumes. Windows are similar to windows that you find on macOS. Volumes are bounded 3D spaces that can be viewed from multiple angles. They use RealityKit.

Input

You look at something and tap your fingers together to interact with an element in visionOS. You can also reach out and "physically" tap a button in space.

The system detects gestures automatically, and you can access them through SwiftUI and RealityKit. Skeletal hand tracking requires ARKit.
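
A minimal sketch of that (the cube and the rotation are illustrative, not from the video): a RealityKit entity made tappable, with the system tap gesture delivered through SwiftUI and targeted at the entity.

```swift
import SwiftUI
import RealityKit

struct TapToSpinView: View {
    var body: some View {
        RealityView { content in
            // A tappable cube; entities need input-target and collision
            // components to receive system gestures.
            let cube = ModelEntity(
                mesh: .generateBox(size: 0.2),
                materials: [SimpleMaterial(color: .blue, isMetallic: false)]
            )
            cube.components.set(InputTargetComponent())
            cube.components.set(CollisionComponent(shapes: [.generateBox(size: [0.2, 0.2, 0.2])]))
            content.add(cube)
        }
        // A look-and-pinch tap, detected by the system and delivered
        // through SwiftUI, targeted at whichever entity was tapped.
        .gesture(
            TapGesture()
                .targetedToAnyEntity()
                .onEnded { value in
                    value.entity.transform.rotation *= simd_quatf(angle: .pi / 2, axis: [0, 1, 0])
                }
        )
    }
}
```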

Collaboration & Privacy

Use the SharePlay and Group Activities frameworks to let people collaborate. You can share any window in visionOS, and there is a new concept called a "shared context," which is managed by the system (Design video and Build video).
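
A hedged sketch of defining and offering a SharePlay activity with Group Activities (the activity name and metadata below are placeholders, not from the video):

```swift
import GroupActivities

// A hypothetical activity describing what the group does together.
struct ExploreTogether: GroupActivity {
    var metadata: GroupActivityMetadata {
        var metadata = GroupActivityMetadata()
        metadata.title = "Explore Together"
        metadata.type = .generic
        return metadata
    }
}

// Offer the activity, for example from a share button; the system then
// manages the shared context for everyone in the FaceTime call.
func startSharing() {
    Task {
        do {
            _ = try await ExploreTogether().activate()
        } catch {
            print("SharePlay activation failed: \(error)")
        }
    }
}
```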

Your app does not have direct access to the sensors. The system does not tell the app where the person is looking; instead, it renders a hover effect on the element the person is looking at. More detailed information about the person's surroundings, and hand tracking, requires asking the user for permission.
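
For example, skeletal hand tracking goes through ARKit and only delivers data after the person grants permission, and only while the app is in a full space. A hedged sketch:

```swift
import ARKit

let session = ARKitSession()
let handTracking = HandTrackingProvider()

func startHandTracking() async {
    // The system shows a permission prompt the first time this runs.
    let result = await session.requestAuthorization(for: [.handTracking])
    guard result[.handTracking] == .allowed else { return }

    do {
        try await session.run([handTracking])
        for await update in handTracking.anchorUpdates {
            // Each update carries a HandAnchor with skeletal joint data.
            _ = update.anchor
        }
    } catch {
        print("ARKit session failed: \(error)")
    }
}
```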

Developer Tools

You use Xcode for development and for previewing the 3D layouts in your app. The Simulator can be used to test interactions with simulated gestures. Runtime visualizations with plane estimation, collision shapes, and more work in the Simulator and on device. Instruments has been updated with RealityKit Trace.

Reality Composer Pro is used to preview and prepare 3D content for visionOS apps. RealityKit has materials that interact with real-world lighting conditions, and you can create custom materials using the MaterialX standard. For previewing, scenes can be sent directly to a visionOS device without being embedded in an app.

You can use Unity to build immersive visionOS apps, whether you are bringing an existing app or creating a brand-new one.

Where to start

When creating a new app, you can choose whether the initial scene is a window or a volume, and you can also add a secondary full space scene. There are three example projects: Destination Video, a shared space app; Happy Beam, an immersive game; and Hello World, an app that demonstrates how to transition between the shared space and a full space (Develop your first immersive app video).
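
A minimal sketch of that scene setup (the identifier, names, and content are placeholders): a window as the initial scene plus a secondary full space scene the app can open later.

```swift
import SwiftUI
import RealityKit

@main
struct WorldApp: App {
    var body: some Scene {
        // Initial scene: a window (this could also be a volume).
        WindowGroup {
            Text("Hello World")
        }

        // Secondary scene: a full space the app can open on demand.
        ImmersiveSpace(id: "Immersive") {
            RealityView { content in
                // Add RealityKit entities for the immersive scene here.
            }
        }
    }
}
```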

Existing iOS and iPadOS apps run on visionOS. Universal apps run their iPad variant, and iPhone-only apps are also supported. These apps run in Light Mode only.

How to build

Windows, Volumes, and Spaces can be viewed as a spectrum of immersion, from "more presence" to "deeper immersion."

Add a WindowGroup scene to your SwiftUI app to create a window. Use a Model3D view to display 3D content rendered by RealityKit. SwiftUI gestures work in a window, and new SwiftUI gestures in visionOS include RotateGesture3D, SpatialTapGesture, and more.
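
A small sketch of window content along those lines (the view and the "Globe" USDZ asset are hypothetical); this view would sit inside a WindowGroup:

```swift
import SwiftUI
import RealityKit

struct GlobeWindowView: View {
    @State private var tapCount = 0

    var body: some View {
        VStack {
            // Loads and renders a model from the app bundle via RealityKit.
            Model3D(named: "Globe")
                // SpatialTapGesture is one of the gestures extended for
                // spatial computing; its value includes a 3D location.
                .gesture(
                    SpatialTapGesture()
                        .onEnded { _ in tapCount += 1 }
                )
            Text("Taps: \(tapCount)")
        }
        .padding()
    }
}
```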

Volumes are extensions of windows that are ideal for 3D content. They can contain multiple views. They are built for the shared space but can also be used in a full space. Create a volume by applying the .windowStyle(.volumetric) modifier to a WindowGroup.
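
A hedged sketch of a volume (the app name, identifier, asset, and size are placeholder choices):

```swift
import SwiftUI
import RealityKit

@main
struct PlanetApp: App {
    var body: some Scene {
        // A volume: a bounded 3D window people can view from any angle.
        WindowGroup(id: "Planet") {
            Model3D(named: "Earth")   // hypothetical asset name
        }
        .windowStyle(.volumetric)
        // Volumes have a fixed 3D size, specified here in meters.
        .defaultSize(width: 0.6, height: 0.6, depth: 0.6, in: .meters)
    }
}
```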

RealityView is a new SwiftUI view that lets you work with RealityKit. It takes three parameters: a make closure, an update closure, and an attachments view builder. The make closure is where you build your 3D content. The update closure runs when SwiftUI state changes and is where you add your attachment entities to the 3D content, finding them by tag. The attachments view builder is where you create the SwiftUI views to attach. (RealityKit video 1, RealityKit video 2)
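
A minimal sketch with placeholder names; note that in the shipped API an attachment is declared with Attachment(id:) and looked up with attachments.entity(for:):

```swift
import SwiftUI
import RealityKit

struct LabeledGlobeView: View {
    var body: some View {
        RealityView { content, attachments in
            // make: build the initial 3D content.
            let globe = ModelEntity(mesh: .generateSphere(radius: 0.2))
            content.add(globe)
        } update: { content, attachments in
            // update: runs when state changes; place the attachment entity.
            if let label = attachments.entity(for: "label") {
                label.position = [0, 0.25, 0]
                content.add(label)
            }
        } attachments: {
            // SwiftUI views that become entities in the RealityKit scene.
            Attachment(id: "label") {
                Text("Hello")
                    .padding()
                    .glassBackgroundEffect()
            }
        }
    }
}
```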

A Full Space hides all other apps. ImmersiveSpace is the SwiftUI scene type for full spaces. You set its style with the .immersionStyle(selection:in:) modifier, choosing from .mixed, .progressive, and .full; the default resolves to .mixed. The .progressive style lets the person turn the Digital Crown to control the level of immersion, like Environments in visionOS.
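
A hedged sketch (identifiers and names are placeholders) of declaring an ImmersiveSpace with a selectable style and opening it from a window:

```swift
import SwiftUI
import RealityKit

@main
struct OrbitApp: App {
    // The currently selected immersion style.
    @State private var immersionStyle: ImmersionStyle = .progressive

    var body: some Scene {
        WindowGroup {
            LaunchView()
        }

        ImmersiveSpace(id: "Orbit") {
            RealityView { content in
                // Add immersive RealityKit content here.
            }
        }
        // Allow all three styles; .progressive lets the person turn the
        // Digital Crown to control how much of the surroundings remains.
        .immersionStyle(selection: $immersionStyle, in: .mixed, .progressive, .full)
    }
}

struct LaunchView: View {
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace

    var body: some View {
        Button("Enter Orbit") {
            Task {
                // Opening the full space hides all other apps.
                await openImmersiveSpace(id: "Orbit")
            }
        }
    }
}
```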

Next steps

Watch more videos: Principles of spatial design, Meet SwiftUI for spatial computing, Build spatial experiences with RealityKit, and Meet Reality Composer Pro