I used the “Niantic Wayfarer” app to scan the room and generate a 3D model of the space, and I can preview the 3D scene model in the app.
I can see two rooms: a living room and a bedroom.
However, after I uploaded this model to the server and re-downloaded the scene model from the server, there was only one room left.
So, how can I get the whole scene model? Why can’t I get a complete room model like the one I previewed on my phone?
Hi Xuheng,
Thank you for your message. Unfortunately, scans made with the Wayfarer app are subject to chunking if the scanning process exceeds 60 seconds. Chunking occurs when the scanned area is divided into smaller meshes instead of forming one large mesh, as previewed in the app.
The good news is that Wayfarer has been deprecated in favor of Scaniverse. Scaniverse allows you to create and export scans and Gaussian Splats directly from the app for free. I encourage you to explore our documentation article Scaniverse for Lightship for a detailed explanation of how to get started with Scaniverse for Lightship and what other features and benefits the transition offers.
Please don’t hesitate to reach out if you have any questions or need further assistance.
Kind regards,
Maverick L.
Hi Maverick L.,
I have read the documentation you recommended. It is really awesome, thank you so much. By the way, I want to know more about the limitations of the reconstruction area. Is there a maximum area restriction? And is VPS free for all kinds of use, including commercial use? Thank you.
Hi Xuheng,
I’m glad to hear you found our documentation helpful!
Scaniverse doesn’t have a strict size limit, but larger scans may take more processing time and could lose smaller details. For accurate VPS localization, we recommend keeping scans to a maximum of 10 meters in diameter.
Lastly, VPS is currently free for commercial use for up to 50,000 monthly active users.
Kind regards,
Maverick L.
Ok, I got it. Thank you so much.
I have scanned a large space of just under 100 square meters. The scan took about 7 minutes and contains around 6,000 images. I uploaded it to the servers, but after 24 hours it still shows as processing. How long will it take to process a space like this? What should I do next? Is it working properly? Thank you.
Firstly, I’d like to apologize for the delay in response and thank you for your patience!
Has this scan gone through since your last message? If not, are you on Android or iOS? Are you comfortable breaking up your scan into multiple smaller scans to encompass the entire area and lower processing times and upload sizes?
I look forward to hearing back from you!
Best,
Maverick L.
I’m extremely grateful for your help. I asked another one of your colleagues, and he helped me cancel the task. I just wanted to test the boundaries of this reconstruction capability. I also ran into another problem: Large space scan and reconstruction - #7 by Xuheng_Niu - I'm Stuck - Niantic SDK for Unity Community. I have switched my scanning app from “Niantic Wayfarer” to “Scaniverse”. However, it performed worse than before. I don’t know what has changed recently or whether there have been any other updates.
You’re very welcome!
Are you still able to access the Wayfarer app? I would be curious to know how the scans compare from one app to the other. To ensure you get the best quality scans within Scaniverse, please take a moment to review the How To guide.
Now, the Wayfarer app is not available. I used Scaniverse.
I’m absolutely sure that I used Scaniverse correctly.
The scene in the upper half of this picture was scanned on 12.07.2024 using Scaniverse, and it works very well. The scene in the bottom half was scanned on 19.01.2025 using Scaniverse. Same app, same room, but I got a completely different result. I tried many times, and it was always like this.
The Wayfarer app is still available on iOS.
Are you using iOS?
Yes, I’m using iOS. But my point is that after the server processes my scan of the scene, I can’t get a complete mesh. What can I do to get a complete mesh?
If your goal is to capture a complete mesh of an object you’ve scanned, you wouldn’t want to use the Geospatial Browser functionality to capture a scan for a point-of-interest (POI). POI scans are taken to facilitate localization – not to create a mesh for users to download.
You would want to use Scaniverse’s Mesh scanning function in Area mode and download the resulting mesh from within your in-app Library afterward using the Share > Export function.
Geospatial Browser scans are subject to chunking after approximately a minute of scanning which means that the mesh you download might be a chunk or have portions of the mesh you’re interested in excluded – as you’re currently seeing.
I would encourage you to take another scan from Scaniverse without the Geospatial Browser.
I look forward to hearing how it goes!
Kind regards,
Maverick L.
Thank you for your reply and explanation. My ultimate goal is to access the VPS service for the whole space. If I understand correctly, after I scan with Scaniverse and upload to the Geospatial Browser, the server processes the scan and I download the resulting mesh locally, and then only the places where that mesh exists can be used for positioning, not the entire space I scanned. I wonder whether I can scan the entire space and then provide location services from any point within it, rather than from just a few small, separately divided blocks. I want to use VPS across as much space as possible. Is this possible?
You’re very welcome!
For localization, I wouldn’t recommend scanning the entire room but rather a point-of-interest (POI) within that room. If the entire room is used to train localization, you’ll most likely encounter strange behavior where tracking isn’t accurate and objects aren’t where you expect them to be. If you scan a painting on a wall or something else static in the space, you will see much better results. What you bring into Unity should be this POI within your room, and you will place content relative to where that object is. If you need to create a scan of your entire room to get a complete mesh, that is okay too. Alternatively, you could create a Playback set of your room so you don’t have to keep exporting your app to your device for testing purposes.
Training localization is about training the computer to identify your POI when it’s presented with it – rather than to get a mesh for human consumption.
Think of Scaniverse’s Mesh and Splat modes for visual scans, and the Scaniverse Geospatial Browser for localization and POI identification.
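To make “place content relative to the POI” a little more concrete, here is a minimal conceptual sketch in plain Python (this is not Unity/ARDK code; the anchor pose and offset values are made-up assumptions). Once VPS localizes against the POI, content authored as an offset in the anchor’s frame ends up in the right spot in the room:

```python
# Conceptual sketch only -- not Unity/ARDK code. Content is authored as an
# offset in the POI anchor's local frame; at runtime the localized anchor
# pose maps that offset into world space.
import numpy as np

def place_relative_to_anchor(anchor_position, anchor_rotation, local_offset):
    """World-space position of content defined relative to the anchor.

    anchor_position : (3,) world position of the localized POI anchor
    anchor_rotation : (3, 3) rotation matrix of the anchor in world space
    local_offset    : (3,) content position expressed in the anchor's frame
    """
    return np.asarray(anchor_position) + np.asarray(anchor_rotation) @ np.asarray(local_offset)

# Example (illustrative numbers): the POI is a painting 1.5 m up a wall;
# place content 1 m in front of it.
anchor_pos = [0.0, 1.5, 0.0]
anchor_rot = np.eye(3)          # anchor axes aligned with world axes
print(place_relative_to_anchor(anchor_pos, anchor_rot, [0.0, 0.0, 1.0]))
# -> [0.  1.5 1. ]
```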
Kind regards,
Maverick L.
Thank you for your patient explanation.
Take this picture as an example: I want to use VPS in both Room A and Room B. How can I achieve this?
Based on your explanation, scanning Room A and Room B at the same time is impossible. I have to scan Room A and then Room B, which gives me two POIs, and I have to switch between the POIs manually; the app can’t automatically switch between the POIs in Room A and Room B based on where the user is. Is that right? Thank you.
You’re very welcome!
I would pick two POIs within those rooms. Room A could have something on your nightstand as a POI and Room B could have a chair or something unique that’s near the center of the room as a POI.
For switching between POIs automatically, you should look into the VPS Coverage Client – specifically how it allows you to import test scans/private locations and switch between them when the user gets close to them. Please review Querying VPS Coverage for AR Locations at Runtime in our documentation, and let me know if you have any additional questions or concerns.
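To illustrate the automatic-switching idea at a conceptual level, here is a rough sketch in plain Python (this is not the ARDK Coverage Client API; the POI names, positions, and the 10-meter range are illustrative assumptions). The point is simply that the app can pick whichever POI the user is currently closest to, instead of you switching manually:

```python
# Conceptual sketch only -- not ARDK code. It shows the general idea of
# selecting the nearest POI / private location based on the user's position.
from math import dist

# Hypothetical registry: one POI per room, with rough positions in a shared
# coordinate frame (metres). Positions here are made up for illustration.
poi_locations = {
    "room_a_nightstand": (0.0, 0.0, 0.0),
    "room_b_chair": (6.0, 0.0, 2.0),
}

def nearest_poi(user_position, locations, max_range_m=10.0):
    """Return the name of the closest POI within range, or None if none is close."""
    name, position = min(locations.items(), key=lambda kv: dist(user_position, kv[1]))
    return name if dist(user_position, position) <= max_range_m else None

# Walking from Room A toward Room B flips the active POI automatically.
print(nearest_poi((1.0, 0.0, 0.5), poi_locations))  # -> "room_a_nightstand"
print(nearest_poi((5.5, 0.0, 1.5), poi_locations))  # -> "room_b_chair"
```

In the real app, the VPS Coverage Client handles the equivalent lookup against your imported test scans/private locations, as described in the documentation article linked above.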