ARKit is an application programming interface (API) for iOS, iPadOS, and visionOS that lets third-party developers build augmented reality apps, taking advantage of a device's camera, CPU, GPU, and motion sensors.[1][2] ARKit functionality is available only on devices with an Apple A9 or later processor. According to Apple, this is because "these processors deliver breakthrough performance that enables fast scene understanding and lets you build detailed and compelling virtual content on top of real-world scenes."[3] The SDK was first released with iOS 11 in 2017 and was preinstalled in the initial releases of iPadOS 13 in 2019 and visionOS 1.0 in 2024. In visionOS, however, ARKit plays a lesser role in augmented reality than in iOS and iPadOS: there, ARKit is focused on acquiring data about the person's surroundings, while SwiftUI and RealityKit control the placement of any 2D or 3D content in those surroundings, and SwiftUI or UIKit is used to build windows with an app's content.[4][5]
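On iOS and iPadOS, an app typically drives the camera and motion sensors through an ARSession owned by an AR view. The following minimal sketch (the view controller and delegate wiring are illustrative, not taken from the cited sources) starts a world-tracking session in an ARSCNView:

```swift
import ARKit
import UIKit

// Minimal sketch: a view controller that runs ARKit world tracking.
class ARViewController: UIViewController, ARSessionDelegate {
    let sceneView = ARSCNView()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        view.addSubview(sceneView)
        sceneView.session.delegate = self
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        // World tracking fuses camera frames with motion-sensor data;
        // it is the capability that requires an A9 or later processor.
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = [.horizontal]
        sceneView.session.run(configuration)
    }

    // Each ARFrame carries the captured camera image and the
    // device pose that ARKit estimated for it.
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        _ = frame.camera.transform  // e.g. read the current device pose
    }
}
```

On visionOS, the division of labor described above looks different in code: ARKit supplies tracking data through an ARKitSession, while RealityKit places the content. The sketch below assumes a visionOS app with an immersive space; the sphere and its position are purely illustrative.

```swift
import ARKit
import RealityKit
import SwiftUI

// Minimal sketch of the visionOS split: ARKit acquires data about the
// surroundings; RealityKit anchors and renders the virtual content.
struct ImmersiveView: View {
    let session = ARKitSession()
    let worldTracking = WorldTrackingProvider()

    var body: some View {
        RealityView { content in
            // RealityKit, not ARKit, places the 3D content.
            let sphere = ModelEntity(mesh: .generateSphere(radius: 0.1))
            sphere.position = [0, 1.5, -1]  // 1 m ahead, roughly head height
            content.add(sphere)
        }
        .task {
            // ARKit's role here: run the data providers for tracking.
            try? await session.run([worldTracking])
        }
    }
}
```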