Chapter 14: Completing the AR Game with the Universal Render Pipeline

This chapter completes the Augmented Reality (AR) project we started in the previous chapter. We'll extend the project by adding custom logic to detect a horizontal or vertical surface and spawn the turtle on the detected surface by tapping the screen.

An indispensable tool for creating an AR game is the ability to detect features in the environment. A feature could be anything from a face to a specific image or QR code. In this chapter, we will leverage the tools built into AR Foundation to detect surfaces, which Unity calls planes. We'll take advantage of prebuilt components and GameObjects to visualize the planes, which makes debugging easier. We will be able to see in real time when Unity has detected a plane, the size of the detected plane, and its location in the world.

We'll then write custom logic that will extract the plane data generated by Unity and make it accessible to our scripts. We do this by ray casting from the device into the physical world.

Once we've detected a plane and extracted the plane data, we'll add a visual marker for the player. The marker will only appear on screen when the device is pointing at a valid plane. And when it's on screen, it will be placed at the correct position and rotation of the real-world surface. With that done, we'll move on to spawning the turtle that we created in the previous chapter at the marker's location.

We created this project using Unity's new Universal Render Pipeline (URP), and we'll build on that in this chapter by adding post-processing effects using the URP. The URP has been designed to provide control over how Unity renders a frame without the need to write any code. We've previously touched on adding post-processing effects in Chapter 11, Entering Virtual Reality. However, the process differs slightly using the URP, so once you've completed this chapter, you'll be able to add these effects whether you are using Unity's built-in render pipeline or the URP.

In this chapter, we will cover the following topics:

  • How to detect planes
  • Visualizing planes by generating GameObjects
  • Retrieving plane data including the position and rotation
  • Adding a marker to visualize suitable spawn locations
  • Spawning the turtle object on a horizontal or vertical surface in the real world
  • Adding post-processing effects using the URP

By the end of this chapter, you will have created an AR project in which you can spawn objects onto surfaces in the real world.

Technical requirements

This chapter assumes that you have not only completed the projects from the previous chapters but also have a good, basic knowledge of C# scripting generally, though not necessarily in Unity. This project is a direct continuation of the project started in Chapter 13, Creating an Augmented Reality Game Using AR Foundation.

The starting project and assets can be found in the book's companion files in the Chapter14/Start folder. You can start here and follow along with this chapter if you don't have your own project already. The end project can be found in the Chapter14/End folder.

Detecting planes

An essential part of programming an AR game is adding the ability to detect features in the environment. These features can be objects, faces, images, or in our case, planes. A plane is any flat surface with a specific dimension and boundary points. Once we've detected a plane, we can use its details to spawn a turtle at the correct position and with the proper rotation.

We'll detect planes using ray casting and a custom script. However, before we write the script, we'll first add a Plane Manager to the scene.

Adding a Plane Manager

A Plane Manager will generate virtual objects that represent the planes in our environment. We'll use these virtual objects as a guide to where we can spawn the turtle. It will also provide useful debug information by drawing a boundary around any planes it detects. Using this feature, we can see in real time when Unity has detected a plane:

  1. Open the ARScene located in the Assets/Scene folder.
  2. As we created a prefab in the previous chapter, delete the Turtle object from the scene.
  3. Right-click in the Hierarchy panel and select XR | AR Default Plane. This object will be generated by our Plane Manager whenever it detects a plane:
    Figure 14.1 – Adding an AR Default Plane object

  4. Drag the AR Default Plane to the Assets/Prefabs folder to create a prefab:
    Figure 14.2 – Creating an AR Plane prefab

  5. As we'll only need its prefab, delete the AR Default Plane object from the scene.

If you select the AR Default Plane object and view its data in the Inspector, you'll notice it comes with several components already attached, including the AR Plane and AR Plane Mesh Visualizer scripts. The AR Plane component represents the plane and stores useful data about it, including its boundary, center point, and alignment. The AR Plane Mesh Visualizer generates a mesh for each plane; it is this component that will create and update the visuals for each plane. We will see this in action shortly:

Figure 14.3 – AR Default Plane components
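
If you'd like to inspect this data from a script, the AR Plane component exposes it through public properties. The following is a minimal sketch (the LogPlaneInfo class is my own and not part of the book's project; the alignment, center, size, and boundary properties come from AR Foundation's ARPlane API) that could be attached to the AR Default Plane prefab to log its details:

    using UnityEngine;
    using UnityEngine.XR.ARFoundation;

    // Sketch only: logs the data stored on an AR Plane component.
    public class LogPlaneInfo : MonoBehaviour
    {
        void Start()
        {
            ARPlane plane = GetComponent<ARPlane>();

            // Whether the detected surface is horizontal or vertical
            Debug.Log($"Alignment: {plane.alignment}");

            // The plane's center point and its dimensions in meters
            Debug.Log($"Center: {plane.center}, Size: {plane.size}");

            // The number of points that make up the plane's boundary
            Debug.Log($"Boundary points: {plane.boundary.Length}");
        }
    }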

Next, we'll assign the prefab to a Plane Manager, so it is generated during runtime:

  1. Add an AR Plane Manager to the scene by selecting the AR Session Origin object in the Hierarchy panel and adding the AR Plane Manager component:
    Figure 14.4 – Adding an AR Plane Manager

  2. Assign the plane we created previously to the Plane Prefab field by dragging the AR Default Plane from the Assets/Prefabs folder to the field:
    Figure 14.5 – Assigning a Plane to the Plane Prefab

    The AR Plane Manager will generate GameObjects for each detected plane. The GameObject it generates is defined by this field.

  3. Build and run the game on your device using the instructions from Chapter 13, Creating an Augmented Reality Game Using AR Foundation:
Figure 14.6 – Drawing surface boundaries

You'll notice black lines are generated as you move around your environment. These lines represent the boundaries of planes detected by Unity. Each boundary is one AR Plane GameObject that has been spawned into the environment by the AR Plane Manager. As you move the device around your environment, the bounds should expand.
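
If you'd also like to observe detections from code rather than only visually, the AR Plane Manager raises a planesChanged event whenever planes are added, updated, or removed. The sketch below (the LogDetectedPlanes class is my own invention; the event and argument types are from AR Foundation) logs each newly added plane and could be attached to the AR Session Origin:

    using UnityEngine;
    using UnityEngine.XR.ARFoundation;

    // Sketch only: logs every plane the AR Plane Manager adds.
    public class LogDetectedPlanes : MonoBehaviour
    {
        private ARPlaneManager PlaneManager;

        void Awake()
        {
            PlaneManager = GetComponent<ARPlaneManager>();
        }

        void OnEnable()
        {
            PlaneManager.planesChanged += OnPlanesChanged;
        }

        void OnDisable()
        {
            PlaneManager.planesChanged -= OnPlanesChanged;
        }

        private void OnPlanesChanged(ARPlanesChangedEventArgs args)
        {
            foreach (ARPlane plane in args.added)
            {
                Debug.Log($"Detected a {plane.alignment} plane of size {plane.size}");
            }
        }
    }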

Important Note

The boundary points of a plane are always convex.

Without writing a single line of code, we can now detect planes in the environment and visualize them using Unity GameObjects. Great! Next, we need to retrieve the data associated with the detected plane, which will eventually be used to spawn the turtle.

Retrieving plane data

In the last chapter, we spawned a turtle in the world based on the device's position when the game started. Ideally, we would have control over where the turtle is placed. Instead of having it spawn at the phone's position when the game starts, we can generate the object dynamically at a location we specify. To do this, we need to retrieve the plane data associated with the surface that is on the player's screen, which we'll do with a custom script:

  1. Right-click in the Assets/Scripts folder.
  2. In the context menu that appears, select Create | C# Script:
    Figure 14.7 – Creating a new script for plane detection

  3. Name the script FindPlane and add the following code:

    using System.Collections.Generic;
    using UnityEngine;
    using UnityEngine.Events;
    using UnityEngine.XR.ARFoundation;

    public class PlaneData
    {
        public Vector3 Position { get; set; }
        public Quaternion Rotation { get; set; }
    }

    public class FindPlane : MonoBehaviour
    {
        public UnityAction<PlaneData> OnValidPlaneFound;
        public UnityAction OnValidPlaneNotFound;

        private ARRaycastManager RaycastManager;
        private readonly Vector3 ViewportCenter = new Vector3(0.5f, 0.5f);
    }

    The following points summarize the preceding code snippet:

    We create two classes: PlaneData and FindPlane.

    PlaneData is the structure we'll use to store the Position and Rotation of the plane. We'll use this shortly.

    To start the FindPlane class, we've added four member variables.

    OnValidPlaneFound is a UnityAction that is invoked whenever a plane has been found. We can write classes that subscribe to this event and then whenever a plane is found, we will receive a PlaneData object. Subscribing to actions will be explained in detail when we come to spawn objects.

    OnValidPlaneNotFound will be raised on every frame in which a plane hasn't been found.

    The RaycastManager of type ARRaycastManager is used in a very similar way to how we've used raycasts in previous chapters; however, instead of casting rays in the virtual world, the ARRaycastManager can detect features in the real world, including planes. This is perfect for our needs. This class is part of the ARFoundation package that we imported in the previous chapter.

    ViewportCenter is used to find the center of the screen. It's from this point that the ray will originate.

  4. Add the following Awake and Update functions:

    public class FindPlane : MonoBehaviour
    {
        void Awake()
        {
            RaycastManager = GetComponent<ARRaycastManager>();
        }

        void Update()
        {
            IList<ARRaycastHit> hits = GetPlaneHits();
            UpdateSubscribers(hits);
        }
    }

    The Awake function initializes the RaycastManager and the Update function is where the action happens. It calls GetPlaneHits, which returns a collection of ARRaycastHit. This collection is then passed to UpdateSubscribers.

    Tip

    The Awake function is called during initialization and before other event functions such as Start and OnEnable. The Update function is called every frame before the LateUpdate event and any Coroutine updates. For more information on the order of events, see https://docs.unity3d.com/Manual/ExecutionOrder.html.

  5. Add the GetPlaneHits function now:

    public class FindPlane : MonoBehaviour
    {
        …

        private List<ARRaycastHit> GetPlaneHits()
        {
            Vector3 screenCenter = Camera.main.ViewportToScreenPoint(ViewportCenter);
            List<ARRaycastHit> hits = new List<ARRaycastHit>();
            RaycastManager.Raycast(screenCenter, hits,
                UnityEngine.XR.ARSubsystems.TrackableType.PlaneWithinPolygon);
            return hits;
        }
    }

    The following points summarize the preceding code snippet:

    • The center of the screen is found by obtaining a reference to the main camera using Camera.main and then calling ViewportToScreenPoint. This function converts viewport coordinates into screen space. Viewport space is normalized, ranging from 0, 0 (bottom-left of the viewport) to 1, 1 (top-right). Therefore, by passing 0.5, 0.5, we get the center of the viewport.

      Tip

      Camera.main will return a reference to the first enabled camera that has the MainCamera tag.

      We then create a new List of ARRaycastHit. This collection will store the results of the raycast. An ARRaycastHit contains useful data about the raycast, including the hit point's position, which will be very useful.

      We pass this list as a reference to RaycastManager.Raycast. This function performs the raycast and fills the hits collection with any raycast hits. If there were no hits, the collection would be empty. The third parameter of RaycastManager.Raycast is TrackableType, which lets Unity know the type of objects we are interested in. Passing the PlaneWithinPolygon mask here means the ray needs to intersect within a polygon generated by the Plane Manager we added in the Adding a plane manager section. This will become clear when we come to draw a placement marker later in this chapter, as the placement marker will only be drawn within the bounds of a plane.

      We could pass a different value as the TrackableType, such as Face or FeaturePoint, to detect different objects. For all TrackableType varieties, see https://docs.unity3d.com/Packages/[email protected]/api/UnityEngine.XR.ARSubsystems.TrackableType.html.

      The collection of hits is then returned from the function.

  6. Lastly, add the UpdateSubscribers function. This function will update any subscribers with the data contained in hits:

    public class FindPlane : MonoBehaviour
    {
        …

        private void UpdateSubscribers(IList<ARRaycastHit> hits)
        {
            bool validPositionFound = hits.Count > 0;
            if (validPositionFound)
            {
                PlaneData Plane = new PlaneData
                {
                    Position = hits[0].pose.position,
                    Rotation = hits[0].pose.rotation
                };
                OnValidPlaneFound?.Invoke(Plane);
            }
            else
            {
                OnValidPlaneNotFound?.Invoke();
            }
        }
    }

    The following points summarize the preceding code snippet:

    If the hits collection size is greater than 0, we know that the RaycastManager has found a valid plane, so we set validPositionFound to true.

    If validPositionFound is true, we create a new PlaneData using the position and rotation of the pose in the first ARRaycastHit contained in the hits collection. When the collection of ARRaycastHit is populated by the RaycastManager.Raycast function, it is sorted so that the first element will contain information on the hit point closest to the raycast's origin, which in this case is the player's device. Once the PlaneData has been created, we pass it to the OnValidPlaneFound action. This will alert all subscribers that we've found a plane.

    If validPositionFound is false, we invoke OnValidPlaneNotFound. This alerts all subscribers that a plane was not found in this frame.

    Tip

    The ?. after OnValidPlaneFound and OnValidPlaneNotFound is called a null-conditional operator. If the action is null, the Invoke call is simply skipped, rather than causing a runtime NullReferenceException. For our purposes, it is similar to writing the following:

    if(OnValidPlaneFound != null) OnValidPlaneFound.Invoke(Plane);

    For more information on this operator, see https://docs.microsoft.com/en-us/dotnet/csharp/language-reference/operators/member-access-operators#null-conditional-operators--and-.

Now that we have the code that detects planes and alerts subscribers, let's add it to the scene:

  1. Back in Unity, select the AR Session Origin object in the Hierarchy panel.
  2. Add an AR Raycast Manager component, as shown in Figure 14.8, by selecting Component | UnityEngine.XR.ARFoundation | AR Raycast Manager from the Application menu. The script we just wrote will rely on this to perform the raycasts.
  3. Add our FindPlane component:
Figure 14.8 – Adding the FindPlane component

This section has covered an important topic in AR. Feature detection is used in many AR projects, and by reaching this point, you have learned how to not only detect surfaces in the real world, but also how to extract useful information about the detection – information that we will be using shortly to spawn objects. We have also added and configured a Plane Manager. This manager object will help us interact with the AR environment and provides useful debugging information by drawing the boundaries of any planes it discovers. Creating the manager involved adding a new Plane Manager object to the scene and assigning it a newly created plane prefab. This prefab included a component used to visualize the plane and an AR Plane component that stores useful data about the surface, including its size. With the Plane Manager correctly configured, we then wrote a custom script that uses an ARRaycastManager to detect surfaces in the real world.

Now we have a reliable method of detecting a plane. When we run our game, FindPlane will attempt to detect a plane every frame. At the moment, even if a plane is found, nothing happens with that information. We've created two actions, OnValidPlaneFound and OnValidPlaneNotFound, but we haven't written any class that subscribes to those events. That is about to change as we write the functionality to place a visual marker whenever a plane is detected.

Adding a placement marker

In this section, we'll design a visual cue that the player can use to determine when and where they can place an object. The marker will use the logic we created in the Detecting planes section to determine when a valid plane has been found. There are two steps to adding the marker to our scene. First, we'll design the marker in Unity, and then we'll add the logic for placing the marker on valid surfaces in the real world.

Designing the marker

To add a placement marker to the game, we first need to design it. In our project, the marker will be a simple circle platform. We'll use many of Unity's built-in tools to create the marker, and the only external resource we will require is a simple circle texture. Start by creating a new GameObject in our scene:

  1. Right-click in the Hierarchy and select Create Empty to create a new GameObject.
  2. Name the new object Placement Marker:
    Figure 14.9 – Placement Marker in Hierarchy

  3. Right-click on the Placement Marker object and select 3D Object | Quad to create the visuals for the marker.
  4. Select the newly created Quad and set its scale to 0.1, 0.1, 0.1.
  5. Rotate the Quad 90 degrees on the x axis:
Figure 14.10 – Placement Marker child component

With the object created, we can modify its appearance by creating a custom Material:

  1. Right-click in the Assets/Materials folder in the Project panel.
  2. Select Create | Material.
  3. Name the new material Marker Material:
    Figure 14.11 – Creating a new material

  4. Drag the circle image from the Chapter13/Start folder to the Assets/Textures folder in Unity. Create the Textures folder if it doesn’t already exist.

Now we can update the material with the new texture. With the Marker Material selected, do the following in the Inspector:

  1. Change the Surface Type to Transparent to take advantage of the transparency contained in our texture.
  2. Select the little circle to the left of Base Map, as shown in Figure 14.12:
    Figure 14.12 – Changing the material's texture

    The eagle-eyed among you may have noticed that the shader type in Figure 14.12 is Universal Render Pipeline/Lit. As we've set up our project to use the URP, all materials we create in the project will default to using the URP as their shader. This saves us time as we won't need to update them as we had to do with the turtle's material.

  3. In the window that appears, select the circle texture. If you don't have this option, make sure you've imported the circle texture from the Chapter13/Start folder:
Figure 14.13 – Updating the material's image

While we've kept the marker purposefully minimalist, feel free to experiment with different images to change the marker's appearance to suit you. You only need to import a different image and assign it to the material.

With the material complete, let's assign it to the Quad object:

  1. Select the Quad object in the Hierarchy.
  2. In the Inspector, on the Mesh Renderer component, click on the circle next to the currently assigned material, as shown in Figure 14.14.
  3. In the window that appears, select the newly created Marker Material:

Figure 14.14 – Assigning the material to the Quad

You'll notice the appearance of the marker will change to a circle, as shown in Figure 14.13.

Tip

You can also assign the material by dragging it from the Project panel to the Quad object in the Hierarchy.

That's it for the marker's visuals. Feel free to experiment with different textures for the Marker Material to make it fit with your style before moving on to the next section, where we place the marker in the world.

Placing the marker

With that set up, we can write the class that will display the object on the plane. We already have the mechanism to identify valid planes in the environment and extract the position and rotation. We'll take advantage of this code by subscribing to the OnValidPlaneFound and OnValidPlaneNotFound events on FindPlane. To do this, we need to create a new script:

  1. Create a new class called MoveObjectToPlane. This class will move the placement marker to a plane found by FindPlane:

    using UnityEngine;

    public class MoveObjectToPlane : MonoBehaviour
    {
        private FindPlane PlaneFinder;

        void Awake()
        {
            PlaneFinder = FindObjectOfType<FindPlane>();
        }

        void Start()
        {
            DisableObject();
            PlaneFinder.OnValidPlaneFound += UpdateTransform;
            PlaneFinder.OnValidPlaneNotFound += DisableObject;
        }
    }

    The following points summarize the preceding code snippet:

    In the class, we store a reference to FindPlane.

    We disable the object when the scene starts by calling DisableObject. The marker will be re-enabled when a valid plane has been found.

    In the Start function, we subscribe to the OnValidPlaneFound and OnValidPlaneNotFound events by passing a reference to the UpdateTransform and DisableObject functions, respectively. Using += means that we don't overwrite any existing data, so if any other classes have subscribed, we won't remove their subscriptions (see the short illustration after these points).

    Whenever FindPlane calls the OnValidPlaneFound event, it will call this object's UpdateTransform function, passing in a PlaneData object. We'll write this function shortly, but it will be responsible for moving the placement marker to the plane.

    Whenever FindPlane calls the OnValidPlaneNotFound event, it will call this object's DisableObject function. This function will disable the object, so if no plane is found, the placement marker will not be shown.
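
    As a quick illustration of the += point above, UnityAction is a multicast delegate, so += appends a subscriber rather than replacing the existing ones. This standalone sketch (not part of the project's scripts) shows both subscribers running on a single Invoke:

    using UnityEngine;
    using UnityEngine.Events;

    // Sketch only: += appends to a multicast delegate's invocation list.
    public class SubscriptionExample : MonoBehaviour
    {
        void Start()
        {
            UnityAction action = () => Debug.Log("First subscriber");
            action += () => Debug.Log("Second subscriber");

            // Invokes both subscribers, in the order they were added.
            action?.Invoke();
        }
    }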

  2. Add an OnDestroy function:

    public class MoveObjectToPlane : MonoBehaviour
    {
        …

        void OnDestroy()
        {
            PlaneFinder.OnValidPlaneFound -= UpdateTransform;
            PlaneFinder.OnValidPlaneNotFound -= DisableObject;
        }
    }

    In the OnDestroy function, we remove the references to this object's functions to ensure they are not called on a dead object. OnDestroy is called whenever this component or the object it belongs to is in the process of being removed from the scene (or the scene itself is being destroyed).

  3. Finally, we need to add the functions that are called when the two events are raised; that is, UpdateTransform and DisableObject:

    public class MoveObjectToPlane : MonoBehaviour
    {
        …

        private void UpdateTransform(PlaneData Plane)
        {
            gameObject.SetActive(true);
            transform.SetPositionAndRotation(Plane.Position, Plane.Rotation);
        }

        private void DisableObject()
        {
            gameObject.SetActive(false);
        }
    }

    Both functions are relatively simple, as follows:

    UpdateTransform enables the GameObject in case it was disabled previously. It also sets the position and rotation equal to that of the plane.

    DisableObject disables the GameObject (somewhat unsurprisingly). We do this to prevent the marker from appearing on screen when there is no valid plane.

That's it for the code. Now, we can add the new script to the Placement Marker:

  1. Select the Placement Marker object in the Hierarchy.
  2. In the Inspector, add the MoveObjectToPlane script:
Figure 14.15 – Adding the Move Object To Plane component

As you can see from Figure 14.16, if you run the game now, the marker will stick to any surface found in the center of the screen:

Figure 14.16 – The placement marker in action

The black lines define the different planes. You can see the planes' boundaries expand as you move around the environment.

Important Note

You'll notice that the marker disappears if you move the view outside of a plane (defined by the black borders in Figure 14.16). In the FindPlane class, we pass the TrackableType of PlaneWithinPolygon to the Raycast function. Try experimenting with different flags to see what effect it has on the placement marker. The different TrackableType varieties can be found at https://docs.unity3d.com/2019.1/Documentation/ScriptReference/Experimental.XR.TrackableType.html.
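
TrackableType is a flags enumeration, so when experimenting you can also combine several types with the bitwise OR operator. As a rough sketch (assuming the same fields as our FindPlane class), a variant of GetPlaneHits that reports hits on both plane polygons and individual feature points might look like this:

    // Sketch only: combining TrackableType flags with the | operator.
    private List<ARRaycastHit> GetPlaneHits()
    {
        Vector3 screenCenter = Camera.main.ViewportToScreenPoint(ViewportCenter);
        List<ARRaycastHit> hits = new List<ARRaycastHit>();

        // Report hits within plane polygons and on feature points alike.
        RaycastManager.Raycast(screenCenter, hits,
            UnityEngine.XR.ARSubsystems.TrackableType.PlaneWithinPolygon |
            UnityEngine.XR.ARSubsystems.TrackableType.FeaturePoint);
        return hits;
    }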

As you move the device around, you'll notice that the marker also disappears when there isn't a valid surface. We now have a visual indicator of when there is a suitable surface for placing objects.

In this section, you've created your first URP material and assigned it a custom texture. The material was then assigned to a Quad in the scene to represent a placement marker. We then took advantage of the plane detection code we wrote in the Retrieving plane data section to position the marker onto a detected surface using the OnValidPlaneFound and OnValidPlaneNotFound actions.

Now that the player has this visual indication of when a suitable surface is in view, we can write the code that will spawn objects at the marker's location.

Placing objects in the world

We've done most of the groundwork for placing the objects in the world. We already have a method for detecting a plane and providing a visual indicator for the player, so they know what a suitable surface is and, more importantly, what isn't. Now we need to spawn an object when a player taps on the screen and there is a valid plane. To do this, we need to create a new script, as follows:

  1. Create a new script called PlaceObjectOnPlane:

    using UnityEngine;

    public class PlaceObjectOnPlane : MonoBehaviour
    {
        public GameObject ObjectToPlace;

        private FindPlane PlaneFinder;
        private PlaneData Plane = null;

        void Awake()
        {
            PlaneFinder = FindObjectOfType<FindPlane>();
        }

        void LateUpdate()
        {
            if (ShouldPlaceObject())
            {
                Instantiate(ObjectToPlace, Plane.Position, Plane.Rotation);
            }
        }
    }

    The following points summarize the preceding code snippet:

    Similarly to the script we wrote in Placing the marker, we store a reference to the FindPlane component in the scene. We'll use this to subscribe to the OnValidPlaneFound and OnValidPlaneNotFound events.

    In the LateUpdate function, we check whether we can place an object and, if so, instantiate it with the position and rotation specified in the Plane member variable. This variable is set whenever a valid plane has been found. We use LateUpdate instead of Update because LateUpdate is called after every Update, so we can be certain that the FindPlane.Update function has already checked for a plane this frame. If it has found a plane, we can use the result in the same frame to generate an object.

  2. Add the ShouldPlaceObject function. This function returns true if we should spawn an object in this frame:

    public class PlaceObjectOnPlane : MonoBehaviour
    {
        …

        private bool ShouldPlaceObject()
        {
            if (Plane != null && Input.touchCount > 0)
            {
                if (Input.GetTouch(0).phase == TouchPhase.Began)
                {
                    return true;
                }
            }
            return false;
        }
    }

    To spawn an object, we need to meet the following criteria:

    Plane should not be null.

    The player should have just tapped on the screen (note that the touch phase of the event is TouchPhase.Began).

    Tip

    There are several different touch states, including Began, Moved, Stationary, Ended, and Canceled. Most of them are self-explanatory, although Ended and Canceled are worth differentiating. A touch is considered ended when the user lifts their finger from the screen, and it's considered canceled when the system cancels tracking of the touch. A touch can be canceled for several reasons; for example, if a user applies more touches than the system can handle, previous touches will be canceled. For more information on the different touch phases, see https://docs.unity3d.com/ScriptReference/TouchPhase.html.
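
    If you'd like to see the difference for yourself, here is a minimal sketch (the TouchPhaseLogger class is my own) that logs how each touch finished, using the same legacy Input API we use in ShouldPlaceObject:

    using UnityEngine;

    // Sketch only: distinguishes ended and canceled touches.
    public class TouchPhaseLogger : MonoBehaviour
    {
        void Update()
        {
            for (int i = 0; i < Input.touchCount; i++)
            {
                Touch touch = Input.GetTouch(i);

                if (touch.phase == TouchPhase.Ended)
                {
                    Debug.Log("Touch ended: the finger was lifted.");
                }
                else if (touch.phase == TouchPhase.Canceled)
                {
                    Debug.Log("Touch canceled by the system.");
                }
            }
        }
    }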

    At the moment, this function will never return true as Plane will always be null. We'll change this now.

  3. Add the OnEnable and OnDisable functions, as follows:

    public class PlaceObjectOnPlane : MonoBehaviour
    {
        …

        void OnEnable()
        {
            PlaneFinder.OnValidPlaneFound += StorePlaneData;
            PlaneFinder.OnValidPlaneNotFound += RemovePlaneData;
        }

        void OnDisable()
        {
            PlaneFinder.OnValidPlaneFound -= StorePlaneData;
            PlaneFinder.OnValidPlaneNotFound -= RemovePlaneData;
        }
    }

    Here, we subscribe to the events in OnEnable and unsubscribe from them in OnDisable. This pattern was described in detail in the Placing the marker section, so I won't go into detail here. The StorePlaneData function is called whenever a plane is found, and RemovePlaneData is called every frame in which there is no valid plane.

  4. Add the StorePlaneData and RemovePlaneData functions:

    public class PlaceObjectOnPlane : MonoBehaviour
    {
        …

        private void StorePlaneData(PlaneData Plane)
        {
            this.Plane = Plane;
        }

        private void RemovePlaneData()
        {
            Plane = null;
        }
    }

StorePlaneData is called whenever a plane is found. It stores the plane data, which the LateUpdate function uses to spawn an object. RemovePlaneData sets Plane to null when there is no valid plane in the device's viewport. By setting it to null here, ShouldPlaceObject will return false until a valid plane is found again, preventing the user from spawning an object in the meantime.

Now we need to add the new script to the scene, back in Unity:

  1. Create a new object called ObjectSpawner.
  2. Add the PlaceObjectOnPlane script to the new object:
    Figure 14.17 – Creating the object spawner

  3. Assign the Turtle prefab we created in the previous chapter to the Object To Place field:
Figure 14.18 – Adding the Turtle prefab

Run the game now and you will be able to tap the screen to place objects in the world:

Figure 14.19 – Placing turtles in the world

As you can see from Figure 14.19, you can place the turtle on different levels and even on vertical planes, and it will appear with the correct rotation.

That's it for the main functionality for the AR project. While the interaction is rudimentary, it provides everything you need to create complex AR experiences. Before we wrap up, I would like to briefly run through how we can add post-processing effects in the URP. These post-processing effects will modify the visual appearance of the turtle we place in the world. Although we covered post-processing in previous chapters, its implementation is slightly different in the world of the URP, as you will see shortly.

Post-processing in the URP

To refresh your memory, the URP is a Scriptable Render Pipeline developed in-house by Unity. It has been designed to introduce workflows that provide control over how Unity renders a frame without the need to write any code. So far, we've learned how to update materials and enable background drawing for AR using the URP. In this section, we'll take it a step further and add post-processing effects using the URP. To accomplish this, we first need to modify the camera:

  1. Select the AR Camera in the Hierarchy (remember that it's a child object of AR Session Origin).
  2. In the Inspector, under the Rendering heading, tick the Post Processing box:
Figure 14.20 – Enabling Post Processing

If you remember from Chapter 11, Entering Virtual Reality, we will need both Volume and Post Processing profiles to enable Post Processing. However, we'll create both in a slightly different way:

  1. Add a Volume component to the AR Camera object by selecting Component | Miscellaneous | Volume from the Application menu. This defines the area in which we want the processing to occur:
    Figure 14.21 – Adding a Volume component to the camera

  2. In the Volume component's settings, ensure Mode is set to Global, as shown in Figure 14.21. This means the effects will be applied to the whole scene.
  3. Click on the New button to the right of the Profile field to create a new Volume Profile:
    Figure 14.22 – Creating a new Volume Profile

    Now you are free to add custom post-processing effects. By default, no post-processing effects are enabled. Previously, we enabled the effects by selecting the profile in the Project panel; however, we can enable them directly from the Volume component:

  4. On the Volume component, select Add Override | Post-processing | Bloom.
  5. Enable the Threshold override and set its value to 1.
  6. Enable the Intensity override and also set its value to 1:
Figure 14.23 – Adding Bloom post-processing

Tip

If you drag the Turtle prefab to the scene, as shown in Figure 14.23, you can see the effect that different settings have on the scene.

Next, we'll configure the Chromatic Aberration post-processing effect:

  1. In the Volume component's settings, select Add Override | Post-processing | Chromatic Aberration.
  2. Enable the Intensity option and set the value to 0.7:
Figure 14.24 – Adding Chromatic Aberration

Lastly, we'll configure the Color Adjustments post-processing effect:

  1. In the Volume component's settings, select Add Override | Post-processing | Color Adjustments.
  2. Enable the Hue Shift option and set the value to 30.
  3. Enable the Saturation option and set the value to 70 to create a techno turtle:
Figure 14.25 – Adding Color Adjustments

And that's it for the modifications we'll make to the turtle's visuals in this section. You can see the contrast between the original turtle and the post-processing turtle in Figure 14.26:

Figure 14.26 – Before (left) and after (right) post-processing effects were applied

Feel free to play around with the different overrides to see what effects you can produce. For more information on what each effect does, see the online documentation at https://docs.unity3d.com/Manual/PostProcessingOverview.html.
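
The overrides can also be driven from a script at runtime through the Volume component. As a hedged sketch (the BloomFader class and its ping-pong behavior are my own; Volume, VolumeProfile.TryGet, and Bloom come from the URP packages), the following could be attached to the AR Camera to pulse the Bloom intensity:

    using UnityEngine;
    using UnityEngine.Rendering;
    using UnityEngine.Rendering.Universal;

    // Sketch only: animates the Bloom override on the camera's Volume.
    public class BloomFader : MonoBehaviour
    {
        private Bloom BloomOverride;

        void Awake()
        {
            Volume volume = GetComponent<Volume>();

            // TryGet returns true if the profile contains a Bloom override.
            volume.profile.TryGet(out BloomOverride);
        }

        void Update()
        {
            if (BloomOverride != null)
            {
                // Bounce the intensity between 0 and 1 over time.
                BloomOverride.intensity.value = Mathf.PingPong(Time.time, 1f);
            }
        }
    }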

If you run the game on your device, you can spawn the techno turtle into the world yourself:

Figure 14.27 – Techno turtle in action

You will have noticed that your environment has changed appearance as the post-processing effects are applied to everything on screen, not just the turtle.

And that's it for the AR game: you can detect horizontal and vertical planes and spawn the techno turtle in your environment. And you learned how to do all of this in a URP project! In doing so, you've created a solid foundation for an AR game that can be extended in multiple ways. For example, how about creating a table-top fighting game? You will need to create the NPCs in Unity and then use the scripts we wrote here to place them in the real world. The possibilities are (almost) endless!

Summary

Congratulations! By reaching this point, you have completed the AR project and six other projects: first-person 3D games, 2D adventure games, space shooters, AI, machine learning, and virtual reality.

In this project alone, you've learned the foundations of AR development. You now know how to detect planes (and other features) in a real-world environment, extract information from the detected planes, and use it to spawn virtual objects. You've taken advantage of the tools offered by AR Foundation to create an AR game that can be played on Android or iOS. The game is easy to debug, as you can see in real time when Unity has detected a plane, the size of the detected plane, and its location in the world.

You then extended Unity's offerings by writing custom scripts to extract the plane data generated by Unity and make it accessible to any script that subscribes to updates. You designed a placement marker and object spawn script that uses this information to place objects in the environment.

Not only that, but you've also done it using Unity's URP. So on top of the AR knowledge, you now know how to convert materials to use the URP, along with how to implement AR and post-processing effects in the URP. Not bad!

Now is an excellent time to reflect on what you've learned in this book. Take the time to think about what you've read up to this point. Did any chapters stand out to you in particular? Maybe you were excited by working on AI? Or creating a project in virtual reality? I recommend you focus on those projects first. Play around with them, extend things, break things, and then work on fixing them. But whatever you do, have fun!

Test your knowledge

Q1. You can use the … flag to select which objects to detect in the real world.

A. PlaneFlag

B. RaycastHitType

C. TrackableType

D. FeatureFlag

Q2. You can disable the detection of vertical planes using the … component.

A. Plane Manager

B. Plane Detector

C. AR Session

D. AR Session Origin

Q3. RaycastManager is used to cast rays in AR.

A. True

B. False

Q4. A touch is defined as Canceled when which of the following happens?

A. The user removes their finger from the screen.

B. The system cancels the touch.

C. The user double taps the screen.

D. The tap response is used to spawn an object.

Q5. The URP is which of the following?

A. A Unity quality setting

B. A post-processing effect

C. An animation system

D. A Scriptable Render Pipeline

Further reading

For more information on the topics covered in this chapter, see the following links:
