Photogrammetry & 3D Capture
Three approaches to capturing the real world in 3D — neural radiance fields, photogrammetry, and point clouds.
- 3 Capture Methods
- 9+ NeRF / Splat Scenes
- Real-Time 3DGS Playback
- iPhone Primary Capture
Featured captures:
- Dino Skeleton Capture
- Xaya Environment
- Gal Band
- Living Room
- Stone Scene
- Environment Capture
Overview
Capabilities
- 3D Gaussian Splatting
- NeRF Reconstruction
- Photogrammetry Meshes
- Point Cloud Capture
- Web Embed Integration
- VR Ready
Process
Capture
Walk-around video or multi-angle photo capture using iPhone, LiDAR, or drone. Camera movement speed and overlap are calibrated per subject for optimal reconstruction quality.
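To illustrate the overlap calibration described above, here is a minimal sketch of the geometry involved. The FOV, distance, overlap, and walking-speed values are illustrative assumptions, not project figures:

```python
import math

def capture_interval(fov_deg: float, distance_m: float,
                     overlap: float, speed_m_s: float) -> float:
    """Seconds between frames for a target forward overlap.

    fov_deg:    horizontal field of view of the camera
    distance_m: camera-to-subject distance
    overlap:    desired fractional overlap between consecutive frames (0..1)
    speed_m_s:  walking speed during the capture pass
    """
    # Width of the scene covered by one frame at this distance.
    footprint = 2 * distance_m * math.tan(math.radians(fov_deg) / 2)
    # How far the camera may advance while keeping the overlap.
    step = (1 - overlap) * footprint
    return step / speed_m_s

# Example: ~69 deg horizontal FOV (assumed), 2 m from the subject,
# 80% overlap, slow 0.5 m/s walk.
print(round(capture_interval(69.0, 2.0, 0.8, 0.5), 2))
```

Higher overlap or a tighter FOV shortens the interval, which is why slow, deliberate passes reconstruct better than quick sweeps.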
Process & Reconstruct
Raw footage is processed through Luma AI, Nerfstudio, or RealityCapture depending on output target. Neural rendering produces volumetric scenes; photogrammetry generates meshes; LiDAR yields point clouds.
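The Nerfstudio leg of this pipeline can be scripted. A hedged sketch, assuming Nerfstudio is installed with its `ns-process-data` and `ns-train` CLI entry points on PATH; the file paths are placeholders:

```python
import subprocess
from pathlib import Path

def reconstruction_commands(video: Path, workdir: Path) -> list[list[str]]:
    """Build the CLI calls for a video -> Gaussian splat reconstruction.

    Commands are only constructed here, not executed; pass each one to
    subprocess.run once Nerfstudio is installed.
    """
    processed = workdir / "processed"
    return [
        # Extract frames and estimate camera poses (COLMAP under the hood).
        ["ns-process-data", "video", "--data", str(video),
         "--output-dir", str(processed)],
        # Train a 3D Gaussian Splatting model (Nerfstudio's splatfacto method).
        ["ns-train", "splatfacto", "--data", str(processed)],
    ]

def run_pipeline(video: Path, workdir: Path) -> None:
    for cmd in reconstruction_commands(video, workdir):
        subprocess.run(cmd, check=True)
```

Luma AI and RealityCapture follow their own workflows; this sketch covers only the open-source path.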
Refine & Optimize
Scene cleanup, mesh decimation, texture optimization, and format conversion. Outputs are optimized for their target platform — web delivery, VR environments, or 3D printing pipelines.
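One simple form of the mesh decimation mentioned above is vertex clustering: snap vertices to a coarse grid, merge the ones that land in the same cell, and drop faces that collapse. A minimal pure-Python sketch; production work would use a quadric-error tool such as MeshLab or Blender's Decimate modifier:

```python
def decimate_by_clustering(vertices, faces, cell=0.1):
    """Reduce a triangle mesh by snapping vertices to a grid of size `cell`.

    vertices: list of (x, y, z) tuples
    faces:    list of (i, j, k) index triples
    Returns (new_vertices, new_faces).
    """
    cluster_of = {}   # grid cell -> new vertex index
    remap = []        # old vertex index -> new vertex index
    new_vertices = []
    for x, y, z in vertices:
        key = (round(x / cell), round(y / cell), round(z / cell))
        if key not in cluster_of:
            cluster_of[key] = len(new_vertices)
            new_vertices.append((key[0] * cell, key[1] * cell, key[2] * cell))
        remap.append(cluster_of[key])
    new_faces = []
    for i, j, k in faces:
        a, b, c = remap[i], remap[j], remap[k]
        if a != b and b != c and a != c:  # drop collapsed triangles
            new_faces.append((a, b, c))
    return new_vertices, new_faces
```

A larger `cell` merges more aggressively, trading geometric detail for a lighter mesh suited to web delivery.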
Deliver & Embed
Final outputs are embedded as interactive 3D scenes on the web (WebGL), exported as production-ready meshes (OBJ, FBX, GLB, USDZ), or archived as spatial data (PLY, LAS).
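Of the formats listed, ASCII PLY is simple enough to write with the standard library alone. A minimal sketch covering vertices and triangular faces only (no normals or colors; real exports normally come straight from the reconstruction tool):

```python
def write_ascii_ply(path, vertices, faces):
    """Write a triangle mesh as an ASCII PLY file."""
    lines = [
        "ply",
        "format ascii 1.0",
        f"element vertex {len(vertices)}",
        "property float x",
        "property float y",
        "property float z",
        f"element face {len(faces)}",
        "property list uchar int vertex_indices",
        "end_header",
    ]
    # One line per vertex, then one line per face prefixed by its vertex count.
    lines += [f"{x} {y} {z}" for x, y, z in vertices]
    lines += [f"3 {i} {j} {k}" for i, j, k in faces]
    with open(path, "w") as f:
        f.write("\n".join(lines) + "\n")
```

Binary PLY is preferred for large scans, but the ASCII form is handy for debugging and for small web-delivered assets.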
Use Cases & Applications
Virtual Tours
Create immersive walkthroughs of real estate, venues, and retail spaces that visitors can explore from any angle.
Product Visualization
Capture physical products in photorealistic 3D for e-commerce, allowing customers to inspect items before purchase.
Heritage Preservation
Digitally preserve historical sites, artifacts, and cultural landmarks with sub-millimeter accuracy.
Training & Simulation
Build realistic environments for employee training, safety simulations, and educational experiences.
Tools & Stack
Always evolving — adopting the best available tool for each project.