Using ARKit anchor to place RealityKit Scene

I am currently struggling to attach a RealityKit scene to an ARKit anchor. As noted in my previous question, it is not possible to add a RealityKit scene directly to an ARKit anchor node.

But how can I use the ARKit anchor to place my scene? For example, I want to detect an object in the room and, depending on which object is found, use a different scene and anchor it at the recognised object's position.

Thanks for the help.

My current attempt:

func makeUIView(context: Context) -> ARView {

    let arView = ARView(frame: .zero)

    // Load the "Box" scene from the "Experience" Reality file
    let boxAnchor = try! Experience.loadBox()

    // Add the box anchor to the scene
    arView.scene.anchors.append(boxAnchor)

    let configuration = ARWorldTrackingConfiguration()
    guard let referenceObjects = ARReferenceObject.referenceObjects(
            inGroupNamed: "AR Objects", bundle: nil) else {
        fatalError("Missing expected asset catalog resources.")
    }

    configuration.detectionObjects = referenceObjects
    arView.session.run(configuration) // not possible on the ARView, but I need the ARView for the .rcproject :-/

    return arView
}

EDIT: added code

Answer

ARKit has no nodes. SceneKit does.

However, if you need to place a RealityKit model into an AR scene using an ARKit anchor, it's as simple as this:

import ARKit
import RealityKit

class ViewController: UIViewController {
    
    @IBOutlet var arView: ARView!
    let boxScene = try! Experience.loadBox()
    
    override func viewDidLoad() {
        super.viewDidLoad()
        arView.session.delegate = self
        
        let entity = boxScene.steelBox!   // the "Steel Box" entity from the Box scene
        
        var transform = simd_float4x4(diagonal: [1,1,1,1])
        transform.columns.3.z = -3.0  // three meters away
        
        let arkitAnchor = ARAnchor(name: "ARKit anchor",
                              transform: transform)
        
        let anchorEntity = AnchorEntity(anchor: arkitAnchor)
        anchorEntity.name = "RealityKit anchor"
        
        arView.session.add(anchor: arkitAnchor)
        anchorEntity.addChild(entity)
        arView.scene.anchors.append(anchorEntity)
    }
}
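
Note that two things are added here: the ARAnchor goes into the ARKit session, and the AnchorEntity goes into the RealityKit scene. Because the AnchorEntity was created from that ARAnchor, it follows the anchor's transform as the session tracks and refines it.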

It's possible thanks to the AnchorEntity convenience initializer:

convenience init(anchor: ARAnchor)
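
For comparison, here is a minimal sketch (not part of the original answer) contrasting the two ways of anchoring the same entity: via an ARKit ARAnchor, or directly at a world transform with RealityKit alone. The function name placeBox and its parameters are purely illustrative.

import ARKit
import RealityKit

func placeBox(_ entity: Entity, in arView: ARView, at transform: simd_float4x4) {

    // Option 1 – through ARKit: the AnchorEntity follows the ARAnchor,
    // which the session keeps tracking and refining.
    let arAnchor = ARAnchor(name: "placement", transform: transform)
    arView.session.add(anchor: arAnchor)
    let trackedAnchor = AnchorEntity(anchor: arAnchor)
    trackedAnchor.addChild(entity)
    arView.scene.anchors.append(trackedAnchor)

    // Option 2 – pure RealityKit: the AnchorEntity is simply pinned
    // to the given world transform, no ARAnchor involved.
    // let worldAnchor = AnchorEntity(world: transform)
    // worldAnchor.addChild(entity)
    // arView.scene.anchors.append(worldAnchor)
}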


You can also use the session(_:didAdd:) and session(_:didUpdate:) delegate methods, for example to attach the content when an object anchor is first detected:

extension ViewController: ARSessionDelegate {
        
    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {

        guard let objectAnchor = anchors.first as? ARObjectAnchor
        else { return }

        // ARKit has already added this anchor to the session;
        // here we only attach the RealityKit content to it.
        let anchor = AnchorEntity(anchor: objectAnchor)
        let entity = boxScene.steelBox!
        anchor.addChild(entity)
        arView.scene.anchors.append(anchor)
    }
    }
}
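
To tie this back to the SwiftUI code in the question, a rough sketch of the same approach inside a UIViewRepresentable might look like the following. It assumes the same Experience.rcproject (the Box scene with its steelBox entity) and the "AR Objects" reference-object group from the question; the names ARViewContainer and Coordinator are just illustrative. Running a custom configuration on arView.session is perfectly possible, so the detection setup from the question can stay as it is.

import SwiftUI
import ARKit
import RealityKit

struct ARViewContainer: UIViewRepresentable {

    func makeCoordinator() -> Coordinator { Coordinator() }

    func makeUIView(context: Context) -> ARView {
        let arView = ARView(frame: .zero)
        context.coordinator.arView = arView
        arView.session.delegate = context.coordinator

        // Object detection configuration, unchanged from the question
        let configuration = ARWorldTrackingConfiguration()
        guard let referenceObjects = ARReferenceObject.referenceObjects(
                inGroupNamed: "AR Objects", bundle: nil) else {
            fatalError("Missing expected asset catalog resources.")
        }
        configuration.detectionObjects = referenceObjects
        arView.session.run(configuration)

        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) { }

    // Plays the role of the view controller above: waits for object
    // anchors and attaches the Reality Composer content to them.
    class Coordinator: NSObject, ARSessionDelegate {
        weak var arView: ARView?

        func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
            guard let arView = arView else { return }

            for case let objectAnchor as ARObjectAnchor in anchors {
                // Choose a scene based on objectAnchor.referenceObject.name
                guard let boxScene = try? Experience.loadBox(),
                      let entity = boxScene.steelBox else { continue }

                let anchorEntity = AnchorEntity(anchor: objectAnchor)
                anchorEntity.addChild(entity)
                arView.scene.anchors.append(anchorEntity)
            }
        }
    }
}

Inside session(_:didAdd:) you can check objectAnchor.referenceObject.name to decide which of your Reality Composer scenes to load and anchor at the detected object's position.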

