About Vuforia Smart Terrain

Smart Terrain is Vuforia's environment reconstruction technique, which uses your device's video camera together with computer vision algorithms in software. The SDK provides a simple authoring workflow and an event-driven programming model that may already be familiar to Unity developers. If your device includes a depth-sensing camera, Smart Terrain will use it. For standard device cameras, the internal processing is similar to the photogrammetry used in other software to scan 3D objects as small as a coin or statuette (using a camera and turntable) and as large as an outdoor statue or an entire building (using quadcopter drones). With just a standard video camera on a small mobile device, Smart Terrain scanning is more limited than with a depth-sensing camera, but it is still quite effective.

The following image, from the Vuforia example Penguin app, shows a 3D mesh generated for a table top stage, with prop objects of various shapes and sizes detected:

For the end user, the SDK requires that the staging area be set up with an initialization target and a limited number of prop objects (a maximum of five). When the app starts, it scans the stage and props to construct the 3D terrain mesh, which can then be augmented in real time in Unity.
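
As a rough sketch of how this flow can be hooked into from a Unity script, the following registers callbacks on the Smart Terrain ReconstructionBehaviour. The callback names (RegisterSurfaceCreatedCallback, RegisterPropCreatedCallback) are assumptions based on the Vuforia SDK version used by the Penguin sample; check them against the SDK release you have installed:

    using UnityEngine;
    using Vuforia;

    // Hedged sketch: the ReconstructionBehaviour component and its
    // Register*Callback methods are assumed from the Vuforia SDK version used
    // by the Penguin sample and may be named differently in other releases.
    public class SmartTerrainEventLogger : MonoBehaviour
    {
        void Start()
        {
            // This script sits on the same object as the ReconstructionBehaviour
            // in the Smart Terrain hierarchy.
            var reconstruction = GetComponent<ReconstructionBehaviour>();
            if (reconstruction == null)
                return;

            // Fired when the primary surface (the table top mesh) is created.
            reconstruction.RegisterSurfaceCreatedCallback(
                surface => Debug.Log("Smart Terrain surface created"));

            // Fired each time a prop (vase, cup, books, and so on) is detected.
            reconstruction.RegisterPropCreatedCallback(
                prop => Debug.Log("Smart Terrain prop detected"));
        }
    }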

Smart Terrain builds a 3D mesh of the object surfaces in view. When it detects a real-world object on the table, which Vuforia calls a prop, a new object is added to your Unity scene. The object added for a prop is defined by a prop template; this defaults to a cube, automatically scaled to approximately the extents of the detected physical prop.
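
To make this concrete, here is a minimal sketch of a script you might attach to a custom prop template prefab. It uses only standard Unity APIs, and the component name is our own:

    using UnityEngine;

    // Attach to a custom prop template prefab (the component name here is our own).
    // When Smart Terrain instantiates the template for a detected prop, this logs
    // the approximate size the template was scaled to and makes sure a collider
    // is present so other objects can collide with the prop.
    public class PropTemplateSetup : MonoBehaviour
    {
        void Start()
        {
            var rend = GetComponent<Renderer>();
            if (rend != null)
            {
                // The template is scaled roughly to the extents of the physical prop.
                Debug.Log("Prop approximate size: " + rend.bounds.size);
            }

            // Ensure a collider exists for physics interactions with virtual objects.
            if (GetComponent<Collider>() == null)
            {
                gameObject.AddComponent<BoxCollider>();
            }
        }
    }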

The preceding example image includes the following recognized objects (apologies if you are reading this in black and white):

  • A cylindrical can in the center of the table is the target object (Vuforia Object Recognition enabled), outlined with blue lines
  • A Smart Terrain 3D mesh for the table surface, drawn with green lines
  • Four props recognized (vase, cup, books, tissue box), represented with 3D cubes and drawn with cyan lines

Once identified at runtime, the mesh and props can be used to occlude your computer graphics. For example, our virtual ball can roll behind one of the props on the table surface and be occluded in the AR view. Props can also be given colliders so they interact with other objects and with physics.
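
As a small illustration of that physics interaction, the following sketch drops a ball onto the stage using only standard Unity components; it assumes the terrain mesh and prop objects carry colliders (for example, added as in the previous sketch):

    using UnityEngine;

    // Drops a small physics-driven ball above this object's position. With
    // colliders on the Smart Terrain surface mesh and on the props, Unity
    // physics makes the ball roll across the table and bump into the prop shapes.
    public class BallDropper : MonoBehaviour
    {
        public float dropHeight = 0.3f;    // metres above this object

        public void DropBall()
        {
            // CreatePrimitive gives us a sphere mesh with a SphereCollider attached.
            var ball = GameObject.CreatePrimitive(PrimitiveType.Sphere);
            ball.transform.position = transform.position + Vector3.up * dropHeight;
            ball.transform.localScale = Vector3.one * 0.05f;   // roughly a 5 cm ball

            // The Rigidbody hands the ball over to Unity physics for rolling
            // and collisions.
            ball.AddComponent<Rigidbody>();
        }
    }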

While the SDK may be smart, it is not general purpose, and it works at relatively low resolution. It is also best suited to near-range table top setups that are not changing dynamically, under stable, well-lit conditions. As with other image and object targets used in Vuforia-based apps, the geometry should have the pattern and detail characteristics required for natural feature tracking (as for NFT image targets), and Smart Terrain does not work with reflective or transparent object surfaces. As described in the Vuforia docs:

In general, Smart Terrain has been designed to work with a wide variety of commonly occurring table surfaces found at home and in the office. Ideal stage surfaces are either plain or present a uniform density of features, and should be visually distinct from near adjoining surfaces.

For more details see https://library.vuforia.com/articles/Training/Getting-Started-with-Smart-Terrain. There are some minimum system requirements that you should also check on that page.

We also recommend you look at the Vuforia example Penguin app for reference and instruction. We did. It can be found at https://library.vuforia.com/articles/Solution/Penguin-Smart-Terrain-Sample.
