How To Create a Provisioning Profile For Local iPhone Testing

When you try to deploy your app to your iPhone, you might get the following error:

Code Signing Error: “Unity-iPhone” requires a provisioning profile. Select a provisioning profile for the “Release” build configuration in the project editor.

To resolve this issue, create a provisioning profile (or let Xcode create one automatically) so you have the necessary permission to deploy and run an app on your iPhone. This tutorial explains how to do that.

Note: This tutorial is for deploying and testing your app on your iPhone; it has nothing to do with App Store publishing.

  1. Make sure you have an Apple Developer account.
  2. Connect your iPhone to your Mac.
  3. In your Xcode project, select your project in the left pane (see the screenshot below).
  4. In the middle pane, under TARGETS, choose Unity-iPhone.
  5. In the tab section, select Signing & Capabilities.
  6. Select Automatically manage signing.
  7. From the Team dropdown, select the item that includes your name plus (Personal Team).
  8. At this point, a pop-up should appear on your computer asking you to enter your iCloud password.
  9. Go to your iPhone. You should also get a prompt there; tap Yes.
  10. On your iPhone, go to Settings > General > Device Management. Tap Apple Development: … and confirm it (if it isn’t confirmed already). This tells the iPhone to trust apps from this developer (you); they remain trusted until all apps from the developer are deleted.

Now you can deploy your Xcode app to your iPhone.

How to Play a Vologram Video with Vologram Unity Plugin

Introduction

This tutorial shows you how to install and use the Vologram SDK Unity plugin to play a vologram video (using its geometry and MP4 files) in the Unity Editor.

Create Geometry Files

When you record a dance, the Volu app records a video clip and uploads it to the Vologram servers. After several minutes (about 15 minutes as of this writing), Vologram sends you back three files:

a. header.vols (geometry)

b. sequence_0.vols (geometry)

c. video texture file (.mp4 format).

To play a vologram, you need these three files. To get them, connect your phone to your computer. Find the files and copy them to a directory on your computer. I put them in a folder called geometry-files/lili-geometry inside my Unity project folder.

Geometry Files created by Volu (The Vologram App)

Setup Vologram Unity Plugin

  1. Install the Vologram Unity plugin.
    a. Get the plugin from here.
    b. To import it into Unity, go to Assets > Import Package > Custom Package. Choose the plugin file and import it.
  2. Create an empty GameObject. I called it ‘Vologram’. Attach the script “VolPlayer” to it. This script loads the geometry files and the related .mp4 file to play the video.
  3. This script has several properties to set up.
    a. Set the Path property of the Vol Folder section to the folder containing the geometry files (see the figure below).
    b. Set the Path property of the Video Texture section to the .mp4 file. You can use the Open New Video File button to set the path.

After we press Play, we should see:

The Unity Editor is playing the vologram of my daughter Lili while dancing.

How to Use Oculus Voice SDK

Overview

In this tutorial, we’ll build a simple app that lets users activate Voice commands by gazing at a sphere.

The app we are going to make is composed of two parts that communicate with each other:

1. Server side (Wit app)

2. Client side (Unity app).

Let’s start by creating the Wit app.

Create Wit App

To create the Wit app, sign up for a Wit.ai account. Then, follow these steps:

  1. On the Understanding tab, enter ‘make the cube green’ in the Utterance field.
  2. In the Intent field, enter change_color and click the Create Intent button.
  3. In the Utterance field, highlight (or double-click) “cube” and then enter shape in the Entity for “cube” field. Click + Create Entity.

For more information, see Which entity should I use? in the Wit.ai documentation.

  4. Highlight ‘green’, create a new entity, and call it color. Now you should see something like this:
  5. Click Train and Validate to train your app.
  6. Repeat the steps above with other possible utterances a user might say, such as Set the sphere to red, Make the cylinder green, Color the cube orange, and so on.

TIP: After training, the app will start to identify entities on its own. However, it can make mistakes, especially early on. If that happens, keep training more phrases and correct the NLU’s mistakes along the way: highlight the word that should be matched and set the correct entity, then click the X next to the incorrect entities to remove them.

On the Entities tab, verify that the following entities are present:

Now we are ready to make our Unity app.

Create Unity App

  1. Create a Unity project using the 3D Core template. You can call it Gaze_Tutorial.
  2. Go to File > Build Settings…. In the Platform panel, select Android. Click on Switch Platform. It might take several minutes for Unity to compile scripts and switch to the new platform.

Connect the Unity App to Your Wit App

  1. Import the Voice SDK into the Unity Editor. The Voice SDK Unity package is included in the Oculus Integration SDK. Download it and import it into your newly created Unity project (Assets > Import Package > Custom Package…). After the import, you should see an Oculus menu in the menu bar. We will use this menu later.
  2. Go back to your Wit app on the Wit.ai website. From the Settings tab under Management, copy the Server Access Token.

3. In the Unity Editor, click Oculus > Voice SDK > Settings and paste the Server Access Token into the Wit Configuration box.

Note: If you don’t see the Oculus menu, it means you have not installed the Oculus Integration SDK for this project. Go here for the installation instructions.

4. Click Link/Relink to link your Unity app with your Wit app.

5. Save a new Wit Configuration for your app by clicking on the Create button. Name the configuration file WitConfig-Gaze.

Test

Let’s test to see whether the Wit configuration file that we created works properly. We are going to send voice commands and expect to receive parsed data back. One easy way to achieve that is to use the Understanding Viewer window.

  1. Select Oculus > Voice SDK > Understanding Viewer.
  2. Make sure the newly created Wit configuration (WitConfig-Gaze) is set.
  3. Enter ‘Set the cylinder to green’ in the utterance field, and then click Send.

Now you should see the structured response (see the figure below). It has these sections:

  1. The text field. It contains the transcription of the utterance you sent to the Wit app. In our case, it’s ‘Set the cylinder to green’.
  2. The entities field. If the utterance was parsed successfully, this field contains the entities found in your sentence. We expect to see the ‘color’ and ‘shape’ entities here. If you expand them, you should eventually find the values ‘cylinder’ and ‘green’.
  3. The intents field. We expect to see change_color there.
  4. The traits field. We do not expect to see any traits here because we did not define any (nor does our utterance contain any).
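
If you are curious what the raw response behind that tree view looks like, it is JSON shaped roughly like the trimmed sketch below (the field names follow Wit.ai's response format; the IDs and confidence values on your side will differ):

{
  "text": "Set the cylinder to green",
  "intents": [ { "name": "change_color", "confidence": 0.99 } ],
  "entities": {
    "shape:shape": [ { "value": "cylinder", "confidence": 0.97 } ],
    "color:color": [ { "value": "green", "confidence": 0.98 } ]
  },
  "traits": {}
}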

If you can see all the correct elements in the response above, the test passed. You are good to go to the next section.

Setting Up the Scene

  1. In the Unity project you created earlier, right-click in the Hierarchy window and select 3D Object > Cube.
  2. Select the new cube, go to the Inspector window, and set its Position X to -2.5. This moves the cube over to make room for the other shapes.
  3. Repeat the steps above to add a sphere, a capsule, and a cylinder. Set their Position X to -0.75, 0.75, and 2.5 respectively.
  4. (Optional) Make all the shapes you’ve created black. You can achieve that by creating a new Material (Assets > Create > Material), setting its Albedo color to black, and dragging it onto each shape.
  5. Right-click in the Hierarchy window, select Create Empty, and name it Shapes; we’ll use it to group the shapes together.
  6. Select the four shapes and drag them into the Shapes GameObject (see the figure below).
  7. While the Shapes GameObject is selected, go to the Inspector window and change
    • Position to (X = 0, Y = 1.5, Z = 3)
    • Scale to (X = 0.5, Y = 0.5, Z = 0.5)
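
If you prefer to script this setup rather than click through it, the sketch below is a rough equivalent (the names, positions, and parenting match the steps above; ShapesSetup is a throwaway helper I made up, which you can attach to any temporary GameObject and let run once in Play mode):

using UnityEngine;

// Builds the same hierarchy described above: four primitives parented under an
// empty "Shapes" GameObject, positioned, scaled, and (optionally) colored black.
public class ShapesSetup : MonoBehaviour
{
    void Start()
    {
        var shapes = new GameObject("Shapes");
        shapes.transform.position = new Vector3(0f, 1.5f, 3f);
        shapes.transform.localScale = new Vector3(0.5f, 0.5f, 0.5f);

        CreateShape(PrimitiveType.Cube,     "Cube",     -2.5f,  shapes.transform);
        CreateShape(PrimitiveType.Sphere,   "Sphere",   -0.75f, shapes.transform);
        CreateShape(PrimitiveType.Capsule,  "Capsule",   0.75f, shapes.transform);
        CreateShape(PrimitiveType.Cylinder, "Cylinder",  2.5f,  shapes.transform);
    }

    static void CreateShape(PrimitiveType type, string name, float x, Transform parent)
    {
        var shape = GameObject.CreatePrimitive(type);
        shape.name = name;
        shape.transform.SetParent(parent, false);                     // keep the local X offset below
        shape.transform.localPosition = new Vector3(x, 0f, 0f);
        shape.GetComponent<Renderer>().material.color = Color.black;  // the optional step 4
    }
}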

Add VR To Scene

  1. Remove the Main Camera from your scene.
  2. Drag the OVRCameraRig prefab into the Hierarchy. (You can search for it in the Project window or find it in the folder Oculus\VR\Prefabs.)
  3. Go to Edit > Project Settings… > XR Plugin Management and click the Install XR Plugin Management button. After the installation, you should see an Oculus checkbox. Check it to install the Oculus XR plugin. The installation can take several minutes (see the figure below).

Test

Let’s test whether our app compiles without an error and runs in Virtual Reality. We expect to be able to look around in VR and see the shapes we added.

  1. Turn on your Oculus HMD and connect it to your computer.
  2. Put on the HMD. If you see the USB Debugging prompt, click Allow (see the figure below).
  3. In the Unity Editor, go to File > Build Settings….
  4. Make sure ‘Android’ is selected as the target platform (see the figure below).
  5. Click Refresh next to Run Device and select Oculus Quest 2.
  6. Click Build and Run.
    Note: If you get an error message saying Android Device Is Not Responding, it might be a USB debugging permission issue. Put on the HMD and click Allow if you see a prompt window.
  7. Put on the HMD. You should be able to see the shapes (cube, sphere, etc.).

Add UI Elements

  1. In Unity, right-click in the Hierarchy window and select UI > Canvas. Call it World-Space Canvas. Change the Render Mode of the canvas to World Space. Another property named Event Camera should now appear below it. Drag OVRCameraRig > TrackingSpace > CenterEyeAnchor into the Event Camera slot.
  2. Set the position of the World-Space Canvas to Pos X = 0.5, Pos Y = 2.5, and Pos Z = 3.5. Set its Scale to X = 0.003, Y = 0.003, Z = 0.003.
  3. Right-click on the World-Space Canvas GameObject and select UI > Text – TextMeshPro. Name it Instruction Text (TMP). In its Text property, enter the following:
    Look at the white sphere, wait for “Listening…” to appear, and say “make the capsule orange”.
    • Set the Vertex Color to black and make it bold (press B on the Font Style property) for easier reading.
    • Set the Rect Transform’s Width to 600 and Height to 200.
    • On the TextMeshPro – Text component, check the Auto Size property.
  4. Create another Canvas. Name it Screen-Space Canvas. Change its Render Mode to Screen Space – Camera. Set the Render Camera property to CenterEyeAnchor.
  5. Right-click on the Screen-Space Canvas and add an Image UI element (UI > Image). Name it Reticle Image. Set its Source Image property to GazeRing by dragging Oculus/VR/Textures/GazeRing.png into the slot.
    Note: If the slot refuses the GazeRing image, it means GazeRing is not imported as a Sprite. Select GazeRing, and in the Inspector set its Texture Type to Sprite (2D and UI). Now you should be able to drag GazeRing into the Source Image property of the Reticle.
  6. (Optional) Add a Directional Light to illuminate the objects; point its z-axis toward the shapes.

The next important UI element is Gaze. We explain it in the next section.

Add Gaze Capability

In this section, we implement the gaze mechanism.

  1. Add a Sphere GameObject. Name it Gaze. Set its position to:
    Pos X = -1, Pos Y = 2.5, Pos Z = 3.5
  2. Find the scripts InteractionVisualizer.cs and GazeActivator.cs and attach them to the Gaze GameObject. If you cannot find them, you can recreate them using the following code:
// InteractionVisualizer.cs
/************************************************************************************
Licensed under the Oculus SDK Version 3.5 (the "License"); 
you may not use the Oculus SDK except in compliance with the License, 
which is provided at the time of installation or download, or which 
otherwise accompanies this software in either electronic or hard copy form.

You may obtain a copy of the License at

https://developer.oculus.com/licenses/sdk-3.5/

Unless required by applicable law or agreed to in writing, the Oculus SDK 
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
************************************************************************************/
using System.Collections;
using System.Collections.Generic;
using TMPro;
using UnityEngine;
using UnityEngine.UIElements;

namespace Oculus.Voice.Samples.XR.GazeActivation
{
    public class InteractionVisualizer : MonoBehaviour
    {
        [SerializeField] private TextMeshProUGUI text;
        private Material material;
        private bool active;

        // Start is called before the first frame update
        void Start()
        {
            material = GetComponent<Renderer>().material;
        }

        public void SetFocusedColor()
        {
            if (!active)
            {
                material.color = Color.blue;
            }
        }

        public void SetUnfocusedColor()
        {
            if (!active)
            {
                material.color = Color.white;
            }
        }

        public void OnStartedListening()
        {
            active = true;
            material.color = Color.red;
            if (text)
            {
                text.color = Color.green;
                text.text = "Listening...";
            }
        }

        public void OnStoppedListening()
        {
            transform.localScale = Vector3.one;
            active = true;
            material.color = Color.blue;
            if (text)
            {
                text.color = Color.white;
                if (text.text != "Listening...")
                {
                    text.text = "Processing...\nYou said: " + text.text;
                }
                else
                {
                    text.text = "Processing...";
                }
            }
        }

        public void SetInactive()
        {
            active = false;
            material.color = Color.white;
        }

        public void SetScale(float modifier)
        {
            transform.localScale = Vector3.one * (1 + .5f * modifier);
        }

        public void OnError(string type, string message)
        {
            if (text)
            {
                text.color = Color.red;
                text.text = "Error: " + type + "\n" + message;
            }
        }

        public void OnTranscription(string transcription)
        {
            if (text)
            {
                text.color = Color.white;
                text.text = transcription;
            }
        }
    }
}

And here is the code for the GazeActivator.cs script:

// GazeActivator.cs
/************************************************************************************
Licensed under the Oculus SDK Version 3.5 (the "License"); 
you may not use the Oculus SDK except in compliance with the License, 
which is provided at the time of installation or download, or which 
otherwise accompanies this software in either electronic or hard copy form.

You may obtain a copy of the License at

https://developer.oculus.com/licenses/sdk-3.5/

Unless required by applicable law or agreed to in writing, the Oculus SDK 
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
************************************************************************************/
using UnityEngine;
using UnityEngine.Events;

namespace Oculus.Voice.Samples.XR.GazeActivation
{
    public class GazeActivator : MonoBehaviour
    {
        [SerializeField] private float activationTime = 2;

        [SerializeField] private UnityEvent onGazeStart = new UnityEvent();
        [SerializeField] private UnityEvent onGazeEnd = new UnityEvent();

        [SerializeField] private UnityEvent onActivation = new UnityEvent();

        private Camera gazeCamera;
        private bool gazing = false;

        private bool activated;
        private float gazeStart;

        private void Awake()
        {
            gazeCamera = Camera.main;
        }

        private void Update()
        {
            if (Physics.Raycast(gazeCamera.transform.position, gazeCamera.transform.forward,
                out var hit) && hit.collider.gameObject == gameObject)
            {
                if (!gazing)
                {
                    gazeStart = Time.time;
                    onGazeStart.Invoke();
                }

                gazing = true;
            }
            else if (gazing)
            {
                activated = false;
                gazing = false;
                onGazeEnd.Invoke();
            }

            if (gazing && Time.time - gazeStart > activationTime && !activated)
            {
                activated = true;
                onActivation.Invoke();
            }
        }
    }
}
  3. Drag and drop the Instruction Text (TMP) into the Text slot of the Interaction Visualizer script.
  4. Find the Gaze Activator component. It has several properties to set:
    • Set Activation Time to 2.
    • Set OnGazeStart() to Gaze > InteractionVisualizer.SetFocusedColor
    • Set OnGazeEnd() to Gaze > InteractionVisualizer.SetUnfocusedColor
    • Set OnActivation() to AppVoiceExperience > AppVoiceExperience.Activate

See the figure below:

Add App Voice Experience to the Scene

If you want your Unity app to send commands to the Wit.ai server and receive the results back, you’ll need to add an App Voice Experience GameObject to your scene.

  1. Click Assets > Create > Voice SDK > Add App Voice Experience to Scene, and then select the App Voice Experience GameObject.
  2. Drag the Wit configuration file WitConfig-Gaze into its slot on the App Voice Experience component of the App Voice Experience GameObject.
  3. Expand the Events dropdown on the App Voice Experience component and set the following:
    1. Set On Response (WitResponseNode) to Gaze > InteractionVisualizer.SetInactive
    2. Set On Error (String, String) to Gaze > InteractionVisualizer.OnError
    3. Set On Mic Level Changed (Single) to Gaze > InteractionVisualizer.SetScale
    4. Set On Start Listening () to Gaze > InteractionVisualizer.OnStartedListening
    5. Set On Stopped Listening () to Gaze > InteractionVisualizer.OnStoppedListening
    6. Set On Partial Transcription (String) to Gaze > InteractionVisualizer.OnTranscription
    7. Set On Full Transcription (String) to Gaze > InteractionVisualizer.OnTranscription

Add a Response Handler for Voice Commands

When a user speaks a command, the Voice SDK will send the utterance to the Wit API to do NLU processing. After the processing is complete, it will send back a structured response containing the extracted intent, entities and traits (if any).

One common way to extract the necessary information (e.g. capsule, green, etc.) from the wit response is to use the WitResponseMatcher script. Although you can attach it to any GameObject and extract the necessary fields, there is a way to set up this script automatically to extract those fields. The following explains how:

  1. In the Unity Editor, select the App Voice Experience GameObject in the Hierarchy window.
  2. Create an Empty GameObject as a child of App Voice Experience. Name it Color Response Handler.
  3. Click Oculus > Voice SDK > Understanding Viewer.
  4. Set the Wit Configuration field to WitConfig-Gaze (or whatever config you created for this project).
  5. Enter “Make the capsule green” in the Utterance field. Click Send. You should get the response shortly.
  6. In the Hierarchy window, make sure the Color Response Handler GameObject is selected.
  7. In the Understanding Viewer window, go to entities > shape:shape > 0 and click value = capsule. In the popup window, select Add response matcher to Color Response Handler (see the figure below).
  8. Verify that Unity has added the response matcher component to the Color Response Handler GameObject:
  9. To extract the shape’s color from the response, go to entities > color:color > 0 and click value = green (see the figure below).
  10. In the popup window, we have two options:
    1. Add response matcher to Color Response Handler
      This option adds a new Wit Response Matcher component to the Color Response Handler GameObject. This approach is not desirable because we would end up with two separate response matchers, one for shape and another for color. Since we need both parameters together to set the requested shape to the requested color, we are better off with a single response matcher that extracts both parameters at once.
    2. Add value matcher to Color Response Handler’s Response Matcher
      This option modifies the current response matcher so that it extracts both the shape and color parameters at once; we can then directly set the requested shape to the requested color. Therefore, select the Add value matcher to Color Response Handler’s Response Matcher option.
  11. Verify that the new value matcher for color has been added to the Wit Response Matcher (see the figure below).
  12. In the Hierarchy window, select the Shapes GameObject. In the Inspector window, click Add Component. Select New Script and name the new script ColorChanger. Add the following code to the script:
// ColorChanger.cs
/************************************************************************************
Licensed under the Oculus SDK Version 3.5 (the "License"); 
you may not use the Oculus SDK except in compliance with the License, 
which is provided at the time of installation or download, or which 
otherwise accompanies this software in either electronic or hard copy form.

You may obtain a copy of the License at

https://developer.oculus.com/licenses/sdk-3.5/

Unless required by applicable law or agreed to in writing, the Oculus SDK 
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
************************************************************************************/
using System;
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class ColorChanger : MonoBehaviour
{
    private void SetColor(Transform trans, Color color)
    {
        trans.GetComponent<Renderer>().material.color = color;
    }

    public void UpdateColor(string[] values)
    {
        // Note: The 'values' array contains color and shape but their order depends on
        //       the value matchers on the Wit Response Matcher component on the
        //       Color Response Handler GameObject.
        var shapeString = values[0];
        var colorString = values[1];

        if (!ColorUtility.TryParseHtmlString(colorString, out var color)) return;
        if (string.IsNullOrEmpty(shapeString)) return;

        foreach (Transform child in transform)
        {
            if (child.name.IndexOf(shapeString, StringComparison.OrdinalIgnoreCase) != -1)
            {
                SetColor(child, color);
                return;
            }
        }
    }

}
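Note: ColorUtility.TryParseHtmlString accepts common color names such as "green", "red", and "orange" (as well as hex strings like "#00FF00"), which is why the spoken color word from the Wit response can be handed to it directly.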
  13. In the Hierarchy window, under App Voice Experience, select the Color Response Handler GameObject. Open the Wit Response Matcher (Script) component. Click + under On Multi Value Event (String[]), and then drag the Shapes GameObject into its slot. In the function dropdown, select ColorChanger.UpdateColor() (see the figure below).

Test the Integration

  1. Run your app by pressing Play.
  2. Click Oculus > Voice SDK > Understanding Viewer to open the viewer.
  3. In the Utterance field, type “Make the cylinder orange”. Click Send.
  4. You should see the cylinder turn orange.
  5. Make sure the Oculus HMD is connected to your computer.
  6. In the Unity Editor, go to File > Build Settings… .
  7. Make sure ‘Android’ is selected as the target platform.
  8. Click Refresh next to Run Device and select Oculus Quest 2.
  9. Click Build and Run.
    Note: If you get an error message saying Android Device Is Not Responding, it might be a USB debugging permission issue. Put on the HMD and click Allow if you see a prompt window.
  10. Put on the HMD. You should be able to see all the UI elements we added.
  11. Gaze at the white sphere, wait for 2 seconds, and say Make the capsule green. The capsule should turn green shortly.

Note: I did not write InteractionVisualizer.cs, GazeActivator.cs, or the other scripts here. You can find them in the Oculus Voice SDK samples folder.

Mechanical Typewriter To Escape From Distractions?

Assume you have a magical, cold, foamy beer at your elbow all the time, and it never runs out. You decide to quit alcohol, but the refreshing, bubbling golden beer is always within reach. You cannot help but keep drinking.

That describes my relationship with web surfing. I spent a shameful amount of time reading articles and blog posts about programming, news, and politics. There is an infinite amount of entertainment online. Evolutionarily speaking, the human species has never had free and unlimited access to entertainment before. This endless, omnipresent entertainment is a disaster for productivity.

When it comes to deep work, I need to focus on a task for a long time. I tried various techniques, from sheer willpower to apps like Freedom, to avoid distractions. I had no lasting success. My brain always found a way to justify the distraction. My productivity hit rock bottom, and I felt miserable.

So I decided to read on paper instead of a digital device. Then I went even further: I decided to buy a mechanical typewriter. Why? Because I could do nothing with it except write. Very focused.

There was a problem, though. First, I am a programmer, and I need to run the programs I write. Also, mechanical typewriters put a lot of strain on my eyes and back. So I wondered: could I find or make a bare-bones computer that cannot connect to the internet? You may ask: why not just disable your laptop's Wi-Fi? Because the urge to check the news or something else would eventually tempt me to turn the Wi-Fi back on and stick the hose of the internet into my brain.

Long story short, I bought a Raspberry Pi and installed Ubuntu for Raspberry Pi on it. After installing the software packages I wanted, I uninstalled its only internet browser, Firefox. I connected it to my mechanical keyboard and a large monitor. It's now like a mechanical typewriter! The whole setup cost me $100 (excluding the keyboard and monitor, which I already had).

My new setup is still connected to the internet, but it does not have a browser, so there's no web surfing anymore. Since I still have internet access on the Raspberry Pi, I can push my work to GitHub using the command line. I have also created an email account specifically for sending articles and ebooks between my Raspberry Pi and my usual computer; I don't use that email for anything else.

You can go even further than I did and install Ubuntu Server on the Raspberry Pi. Ubuntu Server does not have a graphical user interface, so you'd have to use the command line and write your books in a text editor such as nano, Emacs, or Vim. That is even closer to a mechanical typewriter.

How I Passed the CompTIA CTT+ Video Submission Exam

To obtain the CompTIA Certified Technical Trainer (CTT+) certification, you must pass two exams:

  1. Essentials exam (CTT+ Essentials – TK0-201)
  2. Video exam (officially called CTT+ Classroom Performance Based Exam – TK0-202)

I passed the first exam earlier, and here I want to explain how I passed the second (video) exam. As you may know, the video exam is basically a recording of yourself teaching your class for about 20 minutes. You then submit the video, along with a form you have to fill out carefully, to CompTIA (through a different website than the main CompTIA site).

The hard things about this 20-minute video (more precisely, 18 to 22 minutes) are:

  1. You must gather five or more people for your class. This step was easy for me because I already had a class for the semester; I was teaching them C# with Unity.
  2. You must deliver all the requirements for the video session within 22 minutes. That includes telling your students the learning objectives, explaining a concept, asking them lots of small questions (this is particularly important), giving them an exercise, solving it, answering students’ questions, and recapping the material, all within 22 minutes. This was the hardest part. My video was 21:56!
  3. You cannot edit your video (except for a single cut, and only to allow learners sufficient time to practice a new skill).

Getting Started

To start preparing for your recording day, read the form (called the Classroom Trainer Exam). It's an MS Word document that contains questions and spaces for your answers. You don't need to answer the questions at this stage, but read them to learn what points the judges are looking for. For example, one of the questions goes like this:

What are the learning objectives for this module, as stated in the recorded performance? The response to this question provides evidence related to “Planning Prior to the Course.”

From this question, you get the idea that the judges expect you to tell the learners about the objectives at the beginning of your learning session (i.e., your video). You can use slides, write them on the whiteboard, or simply tell your learners the objectives.

After you are done reading the form, start “designing” your video session. That means deciding on the concept you want to teach and writing down what you will go through during those 20 minutes. I chose to teach the C# switch statement to my class, and I spent several hours designing my session. Here is the high-level, step-by-step breakdown of my training session for the video submission:

  1. Introduce yourself.
  2. Learning goals (show them on a slide and talk about them quickly).
  3. What will be covered in the next 20 minutes?
  4. Explain the pain of if-else-if (using the example code). // This is the motivation part.
  5. Introduce the switch statement and refactor the previous code to use it (instead of if-else-if).
    a. While refactoring, ask students where they have seen ‘break’ before (loops: jumping out of a loop).
    b. Explain that “default” plays the role of the last else statement.
    c. Take out one of the “break” statements and re-run the program. Show its effect.
  6. Questions?
  7. Exercise (Kafka Coffee Shop). Group people (each table forms a group).
  8. Solve the exercise by live coding (if time allows; otherwise, show the solution).
  9. Debugging
    a. Make sure you have a break or return in each case (if it’s not a fall-through case).
    b. Make sure your switch variable is an integral type (so you cannot use double, float, etc.).
    c. Although optional, it’s good practice to include a ‘default’ section.
  10. Recap
    a. A long if-else-if chain is hard to read and painful.
    b. The remedy is the switch statement.
    c. In each ‘case’, you should have either a break or a return.
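
For readers who don't know C#, the before-and-after I walked the class through looks roughly like the sketch below (an illustrative example with made-up menu items, not my exact classroom code):

using System;

class MenuPricing
{
    // The refactor: the same decision, first as if-else-if (in the comment),
    // then as a switch statement.
    static int PriceFor(int choice)
    {
        // if (choice == 1) price = 3; else if (choice == 2) price = 5;
        // else if (choice == 3) price = 5; else price = 0;

        int price;
        switch (choice)
        {
            case 1:                // espresso
                price = 3;
                break;             // every non-empty case needs a break (or return)
            case 2:                // latte
            case 3:                // cappuccino; stacked empty cases are the allowed fall-through
                price = 5;
                break;
            default:               // plays the role of the final else
                price = 0;
                break;
        }
        return price;
    }

    static void Main()
    {
        Console.WriteLine(PriceFor(2));   // prints 5
    }
}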

Then I imagined myself teaching the class and wrote down everything I would say or do during the session. It was like writing a screenplay. I was writing not just what I wanted to say but how to say it (intonation), where to place the camera, when to show the Visual Studio code editor, and even my body movements during the lecture (to capture students' attention). There was a lot of editing and back-and-forth while writing it. I stood up and rehearsed the paragraphs to see whether they would be crystal clear to the learners. Once I was relatively happy with the overall screenplay, I started recording myself delivering the entire screenplay at once. My performance required live coding (writing C# code in Visual Studio) in front of the learners, so I rehearsed that too. The first time I finished my performance, I looked at the timer and was shocked: it was way over 22 minutes. So I cut down the learning material and its scope significantly and made the code examples brief and to the point. I practiced many times until I had mastered the delivery and gotten the timing down to 18 minutes.

I thought I was done, but not really. The next big step was to figure out the best tool to record the video.

What Camera to Use for Video Recording?

The next step was to figure out which technology is best for recording the session. I had no knowledge of or skill in video recording. The first thing that came to mind was to ask a friend to lend me his consumer-grade photo camera. That turned out to be an awful decision. You know why? Because the camera's auto-focus made a continuous, loud noise! When I turned the feature off, the whole picture went blurry. So I gave the camera back to my friend. My second option was to rent a professional video camera (they have auto-focus without the noise), but it was too expensive, so I didn't. Instead, I decided to use my smartphone! Its video was clear and had no auto-focus noise at all. The sound quality was more or less OK, provided I talked loudly and turned off the classroom's ventilation system. I made sure I could record at least 40 minutes of HD video with it (some smartphone models do not let you record more than 30 minutes). Why 40 minutes? Because the whole session might take more than 22 minutes, in which case you cut out the group practice part to shorten it to 22 minutes or less.

I told my students about the recording session in advance and got their consent. I also found a volunteer to hold the camera (you can use a tripod).

Note: You must give the students a consent form to sign. The judges want to see that.

On my big day (the video recording day), I asked the cameraman to sit in the front-left corner of the class while I mostly stood front and center, close to him (so my voice would be clear on the recording).

Note: According to CompTIA, the ideal setup is for the instructor to have her station (desk and laptop) in the front-left corner of the class and the camera operator in the front-right corner. The whiteboard or slides should be clearly visible to the judges when needed. Also, the CompTIA judges expect the instructor to move around the class and not ‘hide’ in her ‘bunker.’

Note: In the video, you should show the students at least once so the judges can see them and the learning environment.

I asked the cameraman to loosely follow my upper body and to show the whiteboard/slides clearly when needed. On my cue, he started recording. I began by introducing myself, then explained the learning material and continued exactly according to the plan. Everything went smoothly. No hiccups. No diversions. No technical issues. The video length was 30 minutes and 18 seconds, including the practice time.

During my performance, I asked several small questions, but I should have asked even more to achieve a higher score.

Note: The judges will fail you if you don't ask these small, quick questions. They expect your training session to be quite interactive. You should not lecture at the students. If you talk nonstop for 5 minutes without asking any questions, you'll probably fail.

When I got home, I copied the video from my smartphone to my PC and played it. The sound volume was too low throughout, and the quality turned out to be horrible at some points in the recording. You could hardly hear me, and there was a lot of background noise, even though I had turned off the ventilation. I was quite worried.

Software to Edit and Enhance the Audio Quality

After searching for and trying several applications, I managed to edit and enhance the audio quality with Adobe Rush CC. It was easy for a busy amateur like me. Its most surprising feature was its audio-enhancement capability: it dramatically increased the volume and magically removed the background and artifact noise. The whole experience with Adobe Rush was point-and-click. I guess they call it Adobe ‘Rush’ because it's for people in a rush who don't have time to deal with a gazillion buttons, menus, and options. I only used the free trial version, which still allowed me to export the video to MP4 (the trial version only allows three exports). You can see the submitted video here.

How To Fill The Classroom Trainer Exam Form?

Be careful! Your score is not just about the video. The CompTIA judges read your answers in the form carefully and will fail you if you have written them poorly.

The following is the completed ‘Form C’ I submitted to CompTIA (I have removed my personal info from it). The text in bold is my answers.

Form C: Submission Documentation

Planning Prior to the Course

2.    What are the learning objectives for this module, as stated on the recorded performance?
The response to this question provides evidence related to “Planning Prior to the Course.” (SubDomain 1A)

– Describing the switch statement
– Identifying scenarios in which to use the switch statement (why to use it)
– Creating and implementing the switch statement
– Avoiding common mistakes when using the switch statement
              

3.    What are the relevant characteristics of the learners including their level of expertise in the content area? (The response to this question provides evidence related to “Planning Prior to the Course.” (SubDomain 1A)

The students had various backgrounds. Most of them were working in retail stores (Walmart, Winners, etc.) and had no technical skills or knowledge. Some were unemployed. A few of them had bachelor’s degrees in arts or management. Two students had PhD degrees in science and engineering but were unemployed. I had been with these students for more than two months before I video-recorded this session. About half of the students had kids to take care of and were very busy outside of class; the other half had plenty of time to practice. The students were between 25 and 40 years old. I considered this information when preparing and adapting the course materials.

4.    Specifically, how did you identify these characteristics? How did you gather the information? The response to this question provides evidence related to “Planning Prior to the Course.” (SubDomain 1A)

I had two sources of information for these characteristics. The first was the questionnaire the students filled out before the course started; I had no direct access to it, but my manager informed me about the students’ demographics. The second was talking informally with students before and after the initial sessions of the semester, especially the very first session.

5.    What did you do to prepare for training this particular group of learners for this specific recorded session? If you adapted the material or made adjustments, explain what you did and why.  If you did not need to adapt it, explain why it was not necessary. NOTE: Your response to this question must match what scoring judges observe on the recording. The response to this question provides evidence related to “Planning Prior to the Course.” (SubDomain 1A)

Knowing their backgrounds and demographics, I avoided jargon and simplified my terminology. I also came up with examples that are familiar to learners from all walks of life, and exercises they could understand. In the recorded session, you can see I used a “Suri Sushi Restaurant” example for the switch statement, and for the in-class exercise I used “Kafka Coffee Shop.” If the audience had been well educated in computer science, I would have used different examples (for instance, computer-networking examples). I also made another adaptation: I changed the presentation of the course material so that it could be delivered in about 20 minutes. After the recording was done, I continued my lecture and gave the students additional exercises and concepts, but the 22-minute recorded session was self-contained and complete.

6.   Describe what you did to organize the class particularly as it relates to the portion of the course shown on the recording. NOTE: Your response to this question must match what scoring judges observe on the recording. The response to this question provides evidence related to “Planning Prior to the Course.” (SubDomain 1B)

I asked the learners to move to different tables, sit beside each other, and form groups before the session started; each table formed a group. You can see on the recording that when it came to the “Kafka Coffee Shop” exercise, I emphasized that each table was considered a group and that its members should help each other solve the exercise. As always, I set up the projector and my laptop, and I made sure all the software involved in my lecture (such as Microsoft Visual Studio and PowerPoint) worked properly.

7.   What might the learners have expected based on the pre-course announcement? How did you confirm what their expectations were and what did you do to meet them? NOTE: Your response to this question must match what scoring judges observe on the recording. (The response to this question provides evidence related to “Planning Prior to the Course.” (SubDomain 1B)

A couple of days before the session, I reminded the learners of the learning objectives for this specific session on the online forum. As I had been their teacher for more than two months, I had a thorough understanding of their knowledge level and likely difficult areas. When preparing the material for the session, I made sure the learners had all the prerequisite knowledge and that the examples were appropriate and understandable given their demographics. At the beginning of the session, I explained the learning objectives in detail, both verbally and visually (on a slide). After introducing the new concept, I connected it to previously learned material. I included an exercise after teaching the new concepts so they could practice them. Throughout the lecture, I repeatedly encouraged them to ask questions about the subject.

8.   If this 20–minute segment is part of a longer course, how does it fit into the larger context of the training course? 

Yes, this 20-minute segment was part of a 3-month course. Two months of classes had taken place before the recording. Each class session lasted 2.5 hours, from 6 pm to 8:30 pm, and there were three sessions per week; as I said, the whole course lasted three months. In the recording, you can see that I connected the new concept (the switch statement) to a concept I had taught previously (the if-else statement). In the exercise I gave to the groups, they used previously learned material in conjunction with the new material I taught in the session.

9.   If you have stopped the recording indicate the reason for the stop. (See the How to Prepare guide for the rules about stopping the recording.) Be sure to explain what activities occurred during the time the recording is stopped. NOTE: A portion of the activity must be visible on the recording in order for the scoring judges to consider it as part of this performance assessment.

Yes. I stopped the recording because I gave the students an exercise on the newly learned concept to solidify it. The students spent about 8 minutes on the exercise. As you can see in the video, I walked between the tables to give hints, help students with the exercise, check their progress, answer their questions, and see where possible issues and misunderstandings might be.

I Interviewed a Job Candidate and It Went Sad

Today I was sitting at my desk coding when I was suddenly told that a candidate was coming in an hour to interview for a QA position. The ideal candidate would be someone who knows oil and gas drilling and can write code (mostly unit tests, plus fixing small issues). I was told this would be his first and last technical and behavioral interview, and that we would decide based on it. I was given his resume. He was supposed to work partly under my supervision (for the programming part). His resume showed he was an experienced mechanical engineer trying to get into programming. I admired him for that (I always encourage unemployed or underemployed people to learn coding to find a job or get out of their low-paying and sometimes soul-sucking jobs).

There was a problem, though. He was coming in less than an hour, and I was not prepared. It was also my first time conducting an interview. My company did not give me any questionnaire or form to go through with the candidate to evaluate him. I wondered how I could assess him properly. How could I verify his claims? His resume was long and detailed. His last mechanical engineering job had ended in March 2015. He then founded a software company, which his resume described as follows:

Established classification and forecast models, automated processes, text mining, sentiment analysis, statistical modeling, risk analysis, platform integrations, optimization models, models to increase user experience, A/B testing using R & Python, unit tests & debugging.

This sounded fishy to me. It appeared that he was actually unemployed; it's unlikely that someone suddenly becomes able to perform all of these tasks professionally without prior job experience. Or maybe he was very smart. I was doubtful, but again, how could I verify these claims in less than 30 minutes? I decided to ask him some fundamental software development questions and then ask him to write code in front of me. In addition, I verified his P.Eng. certification on the APEGA website, which meant he really was a mechanical engineer with at least 4 years of experience.

Then he arrived at our office. When I saw him, I was surprised, then sad, then hopeful. He was a man in his 60s. He had been unemployed for more than 4 years and was nervously looking for a job. His resume suggested he should be a guy in his 40s. It also said that he had taken coding classes at SAIT, hoping to get into the IT industry. Seeing him trying to reinvent himself and learn programming at his age was truly admirable. He was practicing what I had been preaching to people.

He was wearing a nice suit and very thick eyeglasses. My colleague explained what we do at our company and what our products are. Then I started asking him short questions:

  1. Your resume shows you have written unit tests before. What is unit testing? Can you name which framework(s) you used? (failed to answer)
  2. What is a singleton? (was unable to respond)
  3. What is a finite state machine? (was unable to respond)
  4. Reference vs. value type? (answered to some extent after I helped him)
  5. What are Start() and Update() methods in the Unity game engine? (answered correctly).

He was rambling; he was not an articulate person at all, and the atmosphere in the meeting grew tense. Although he failed most of the questions, I proceeded to ask him to solve a coding challenge. Why? Because if he could write this simple program, I could teach him the missing skills on the job. So I decided to give him the following programming challenge:

Write a program that tells you whether a word is a palindrome or not. Assume all letters are lower-case.
A ‘palindrome’ is a word that reads the same backward as forward, e.g., “hannah” or “kayak”.
Example
Input: kayak
Output: It is a palindrome.
–Example 2–
Input: kamran
Output: It is not a palindrome.

I gave him my computer, with Visual Studio open on a C# ‘Hello World’ program. After 15 minutes and lots of help, he wrote this:

static void Main(string[] args)
{
// Write your code here.

string word; // <– I told him ‘word’ should be a string and not an ‘int’!
word = Console.ReadLine(); // <—- I helped him to write this.

word =[]
if
{
word[0] == word[4]
}

else if
{
word[2] == word[3]

}

}

While struggling to write the above code, I kindly told him it was OK to use any programming language he might be more comfortable with, such as Visual Basic. He said no and continued in vain.

At that moment, it was apparent he had been blatantly lying about his knowledge of and experience with programming. He could not even write a simple loop, yet he claimed to have done “automated processes, text mining, sentiment analysis, statistical modeling, ASP.NET, etc.” Are you kidding me?
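
(For the record, a correct answer needs only a few lines; here is one straightforward C# version, though any equivalent loop would do:)

static bool IsPalindrome(string word)
{
    // Compare characters from both ends, moving toward the middle.
    for (int i = 0, j = word.Length - 1; i < j; i++, j--)
    {
        if (word[i] != word[j]) return false;
    }
    return true;
}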

It was a sad interview because I wanted him to succeed. He was about to become a prime example of what I had been preaching to people who lost their jobs after the oil crash in Calgary.

How I Passed the CompTIA CTT+ Essentials Exam

My Background

I was very busy while studying for the CompTIA CTT+ Essentials exam (TK0-201). I had a 9-to-5 job as a software team lead and a part-time evening teaching job (teaching Unity game development) of about 9 hours a week. I also have a 4-year-old daughter. Whew! You can imagine how tight my schedule was.

I decided to study for this certification because it was a requirement for becoming a Unity Certified Instructor. Why did I want that certification? Because I love teaching, and having the cert helps me become a better teacher and get more teaching opportunities.

Study Material

Apparently, the official study material for the CompTIA Essentials exam is the “CompTIA CTT+ Certified Technical Trainer All-in-One Exam Guide.” I passed the exam by reading this book and the free sample questions (about 20 of them) on the CompTIA website. Nothing else.

I bought the Kindle edition of the book from Amazon and read it on my tablet. This allowed me to study the book while lying in bed after coming home from work exhausted. The hardcopy edition is heavy, and your hands tire quickly; you cannot lie down and read it.

While reading the book, I highlighted the important concepts and definitions and bookmarked important pages. Generally, though, I read it lightly and quickly. At the end of each chapter there is a quiz; study it thoroughly and carefully. Every time I lay down to study, I started by reviewing the highlighted parts.

I spent between 12 and 16 hours on this book, spread over about 7 study sessions.

This book is very repetitive and a dry read; it could easily be condensed from 400 pages to 100. Some of the ‘correct’ answers to the quiz questions seem arbitrary. Having said that, the book contains useful advice on how to deal with problematic students and issues in the classroom.
As I said, I used the Kindle version of the book, and I was happy with its quality.

During the exam

I took the test on 2019/Aug/24. The exam used familiar terminology, and its questions sounded like the quizzes in the official book. However, compared with other certification exams I’ve taken (AWS Architect, C#, Unity, LabVIEW, etc.), this one was the vaguest and most subjective.

Exam results

I got the result immediately after the computer-based exam was finished. I passed the exam. My score was 725 out of 900 (80%). The passing score was 655 (73%).

Note: The exam center gave me a hard copy of my transcript after the exam, but I never got a digital version (I simply got an email from CompTIA saying I passed). So keep this piece of paper (or better, scan it); you will need the details on this transcript later on.

In the next post, I’ll explain how I passed the second exam – the video submission exam (EXAM TK0-202).

How to Play SteamVR Games with Your Own Custom-Made Controller

Introduction

Assume you have made a custom controller (say, with an Arduino) and you want to use it (instead of an HTC Vive controller) to play SteamVR games (like Beat Saber). Here I describe a hack to achieve that. I assume your controller sends data to your PC via a USB serial port.

The entire completed project is available here.

What this article is not about

In this article, we are not going to build a custom-made controller. We assume your controller already sends data to a USB port using textual serial communication, something like the following, which includes the controller's positional and rotational information:

pos: 1.00000,0.00000,1.00000 rot: 0.99985,-0.00356,-0.00462

We assume the above message has the format:

pos: x, y, z rot: pitch, roll, yaw
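
For illustration, here is one way such a line could be parsed in C# (a hedged sketch that assumes exactly this layout and invariant-culture floats; adapt it to whatever your controller actually sends, as we will need something like it inside the FreePIE plugin later):

using System;
using System.Globalization;

// Parses "pos: x,y,z rot: pitch,roll,yaw" lines into two float triplets.
static class ControllerLineParser
{
    public static bool TryParse(string line, out float[] pos, out float[] rot)
    {
        pos = new float[3];
        rot = new float[3];

        // Split the line into its "pos:" and "rot:" halves.
        var rotIndex = line.IndexOf("rot:", StringComparison.Ordinal);
        if (!line.StartsWith("pos:", StringComparison.Ordinal) || rotIndex < 0)
            return false;

        return TryParseTriplet(line.Substring(4, rotIndex - 4), pos)
            && TryParseTriplet(line.Substring(rotIndex + 4), rot);
    }

    static bool TryParseTriplet(string text, float[] target)
    {
        var tokens = text.Split(',');
        if (tokens.Length != 3) return false;
        for (var i = 0; i < 3; i++)
        {
            if (!float.TryParse(tokens[i].Trim(), NumberStyles.Float,
                                CultureInfo.InvariantCulture, out target[i]))
                return false;
        }
        return true;
    }
}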

This article is also not about Razer Hydra controllers; we only use the Razer Hydra driver to communicate with SteamVR. No Razer Hydra hardware is required. You only need a working HTC Vive system.

Step-by-step Explanation

The following diagram shows the overall structure of the system we are going to build:


Figure 1. Dataflow from our custom controller to a SteamVR game.

Step 1: Install SteamVR Driver for Razer™ Hydra.

After installation, you should see the folder “SteamVR Driver for Razer Hydra” in this path:

C:\Program Files (x86)\Steam\steamapps\common

This folder contains the Razer Hydra Driver we talked about.

Step 2: Download and Modify FreePIE source code.

Download FreePIE source code from here.

2.1 Copy and Rename Fake DLLs (DLL Injection)

Once downloaded and unzipped, find the following files (usually in the directory “FreePIE-master\Lib\Sixense\Emulation\Binary”):

  1. sixense_fake.dll
  2. sixense_fake_x64.dll

Copy these files into the following directories, respectively:

  1. sixense_fake.dll –> C:\Program Files (x86)\Steam\steamapps\common\SteamVR Driver for Razer Hydra\hydra\bin\Win32
  2. sixense_fake_x64.dll –> C:\Program Files (x86)\Steam\steamapps\common\SteamVR Driver for Razer Hydra\hydra\bin\Win64

You can probably find two files called sixense.dll and sixense_x64.dll already inside these directories. Remove them or rename them to something else (I would add the prefix “old_” to their names).

Afterwards, rename the fake dlls (by removing the “_fake” from them):

  1. sixense_fake.dll –> sixense.dll
  2. sixense_fake_x64.dll –> sixense_x64.dll

Basically, we have replaced the genuine DLLs with fake ones. This is called DLL injection. The injected code (the fake DLLs) reads data from FreePIE instead of from real Razer Hydra hardware. If you don't trust these DLLs and don't want to run them on your computer, you can read the source code and build them yourself from the GitHub page.

2.2 Modify FreePIE source code

We are going to change the source code and add a plugin so that FreePIE reads data from a USB port. I assume your custom device sends the position/click data in string format and is connected to a USB port on your computer.

Open the solution file “FreePIE.sln” in Visual Studio. The solution consists of several projects, including “FreePIE.Core.Plugins”. In the Solution Explorer (right pane), right-click the “FreePIE.GUI” project and select “Set as Startup Project”.

(Screenshot: setting FreePIE.GUI as the startup project in the Solution Explorer)

Now run the solution by clicking the green play button or by going to Debug > Start Debugging (F5).

Most likely, you will get this run-time error:

“An attempt was made to load an assembly from a network location which would have caused the assembly to be sandboxed in previous versions of the .NET Framework…”

(Screenshot: the sandboxed-assembly error shown in Visual Studio)

Why is that? For security reasons, Windows does not allow the program to load some DLLs (they could be malicious). But since we know these are not, we can tell Windows it's OK to load them. To do so, hover your mouse over the “dlls” variable and inspect its contents:

(Screenshot: the contents of the dlls variable)

Take note of these DLLs' names and locations. All of them reside in the FreePIE-master\FreePIE.GUI\bin\Debug\plugins folder. Go there, right-click each of them, and choose “Properties”.

(Screenshot: the DLL file's Properties dialog)

If you see an “Unblock” checkbox in this dialog, check it so that your program can load the DLL. In my case, “PPJoyWrapper.dll”, “SlimDX.dll”, and “vJoyInterfaceWrap.dll” had the Unblock option (meaning Windows had blocked them).

Now go back to Visual Studio and run FreePIE again. This time it should run without any issues and display this:

(Screenshot: the FreePIE main window)

Select File > New. You'll see a new empty document ready for you. In this script, we write IronPython code. With IronPython you have access to .NET libraries but not to Python's built-in libraries! The odd thing about FreePIE's workflow is that it runs the whole script repeatedly; practically, it's as if the entire script were inside an infinite while loop. You cannot define new Python classes, and you have to use this single script, i.e., you cannot spread your application over several scripts. Because of these limitations, I decided to write a minimal Python script that sets up the Razer Hydra controllers in VR, and to write everything else on the C# side (which I will explain later).

Let's get back to the FreePIE script. This script's responsibility is to grab the data from our custom FreePIE plugin (explained below) and hand it to the fake Sixense DLLs. These fake DLLs send the data to the Hydra driver, which in turn passes it to SteamVR. As mentioned before, we don't need Razer Hydra hardware; we only use its driver to communicate with SteamVR. (In the near future, I will write another article explaining how to develop a SteamVR driver from scratch, without FreePIE or fake DLLs at all.)

The following is the minimum code (“hydra_setup_brief.py”) required to set up the Razer Hydra controllers in SteamVR:

 

global pressed

def handleStartButton():
  global pressed
  if pressed == 0:
    if keyboard.getKeyDown(Key.Space):
      # Hold the two system buttons (start buttons) at the same time;
      # holding both causes the controllers to be recentered at the HMD position.
      hydra[0].start = True
      hydra[1].start = True
      pressed = 1
    elif keyboard.getKeyDown(Key.Backspace):
      # Click one start button (system button) to bring up the Steam menu.
      hydra[0].start = True
      pressed = 1
  elif keyboard.getKeyDown(Key.Backspace):
    # For situations in which the start button (system button) is
    # held down (to bring up the power-off menu).
    pressed = 1
  else:
    hydra[0].start = False
    hydra[1].start = False
    pressed = 0

def init_hydra(index):
  if index == 0:
    hydra[index].x = 65
    hydra[index].y = -45
    hydra[index].z = -200
    hydra[index].side = 'R'
  else:
    hydra[index].x = -65
    hydra[index].y = -45
    hydra[index].z = -200
    hydra[index].side = 'L'
  hydra[index].yaw = 0
  hydra[index].pitch = 0
  hydra[index].roll = 0
  hydra[index].start = True
  hydra[index].isDocked = False
  hydra[index].enabled = True
  hydra[index].trigger = 0
  hydra[index].three = 0
  hydra[index].four = 0
  hydra[index].one = 0
  hydra[index].two = 0
  hydra[index].bumper = 0
  hydra[index].joybutton = 0
  hydra[index].joyx = 0
  hydra[index].joyy = 0

def vive_controllers_init():
  init_hydra(0)
  init_hydra(1)

def update():
  handleStartButton()

if starting:
  pressed = 0
  vive_controllers_init()

update()

Save the script. Before going any further, let’s test it. Open Steam and run SteamVR (its icon is in the top-right corner). Run the script by selecting the command Script > Run Script. Now wear the Vive headset.

If you see the following message, press the Spacebar on the keyboard. Afterwards, you should be able to see two controllers. You may want to press Spacebar again to bring the controllers in front of you.

hydra-callibration-overlay

Figure 2. If you see this message, press the Space key on your keyboard.

Now it’s time to add a plugin to read the data from a USB port connected to your Arduino (or any other device connected to the USB port.)

1. Find the file AhrsImuPlugin.cs (it should be in the FreePIE.Core.Plugins folder).

2. Make a copy of it and rename the copy to MyArduinoPlugin.cs.

3. Open MyArduinoPlugin.cs and rename all occurrences of "AhrsImu" to "MyArduino". Also make sure you have the following:

[Global(Name = "myArduino")]

This name “myArduino” will show up in FreePIE editor when you are writing an IronPython script.
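To make this concrete, here is a minimal sketch of how I would expect the renamed globals class to look, assuming it mirrors the structure of the original AhrsImu plugin; the class name here is illustrative, so compare it against the complete project.

[Global(Name = "myArduino")]
public class MyArduinoGlobal : DofGlobal<MyArduinoPlugin>
{
    // DofGlobal<TPlugin> (shown later, in step 11) is what exposes yaw/pitch/roll
    // (and eventually px/py/pz) to the IronPython script under the name "myArduino".
    public MyArduinoGlobal(MyArduinoPlugin plugin) : base(plugin) { }
}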

Before going any further, let's see what is going on here. The MyArduinoPlugin class is derived from the ComDevicePlugin class (the parent class). The parent class follows the Template Method design pattern: it defines the skeleton of an algorithm (reading from a COM device) but delegates some of its steps to its subclasses (here, MyArduinoPlugin). Which steps does it delegate? The abstract members:

protected abstract void Init(SerialPort serialPort);
protected abstract void Read(SerialPort serialPort);
protected abstract string BaudRateHelpText { get; }
protected abstract int DefaultBaudRate { get; }
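If the Template Method pattern is new to you, here is a simplified, hypothetical sketch of the idea (this is not FreePIE's actual source): the parent class owns the overall flow and calls the abstract steps that the child class fills in.

// Simplified illustration only; FreePIE's real ComDevicePlugin is more involved.
// Requires: using System.IO.Ports;
public abstract class ComDeviceSketch
{
    protected abstract void Init(SerialPort serialPort);  // one-time setup, provided by the child
    protected abstract void Read(SerialPort serialPort);  // called repeatedly, provided by the child

    public void Start(SerialPort serialPort)
    {
        Init(serialPort);   // the parent decides when setup happens
    }

    public void Update(SerialPort serialPort)
    {
        Read(serialPort);   // the parent decides when reading happens
    }
}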

The Init(…) method (implemented in the child class MyArduinoPlugin) runs once and sets up the serialPort object.

The Read(…) method runs continuously: it is called every time the IronPython script runs, and they both run in the same loop. So if the Read(…) method stalls or slows down, the script stalls or slows down too. FreePIE has a mechanism that halts when this happens; in other words, it crashes. That's why we should avoid reading the COM port directly in this method. Reading the serial port can be time-consuming, and serialPort.ReadLine() is a blocking call, meaning the execution of the program is blocked until we get a message from the Arduino. If the Arduino sends data more slowly than FreePIE expects (say 100 Hz), FreePIE crashes. To prevent that, we will create a new thread separate from the one that calls Read(…). But before doing so, proceed to the next steps to implement these quick and easy changes.

4. In the MyArduinoPlugin class, update the two properties BaudRateHelpText and FriendlyName as desired. These will show up in the UI and help the user find the plugin.

protected override string BaudRateHelpText
{
    get { return "Baud rate, default on MyArduino should be 921600"; }
}

public override string FriendlyName
{
    get { return "MyArduino Plugin"; }
}

5. Find the property DefaultBaudRate and change its value to the baud rate of your Arduino device. My device uses a baud rate of 921600.

protected override int DefaultBaudRate
{
    get { return 921600; }
}

6. Now it's time to set up the serialPort object and create a new thread that reads the COM port. Go to the Init() method in MyArduinoPlugin. As I said before, this method runs once and sets up the serial port. It also creates a new thread that reads the data from the USB port continuously.

protected override void Init(SerialPort serialPort)
{
    // Initialize the member variable.
    _serialPort = serialPort;
    // (optional) wait for IMU
    //Thread.Sleep(3000);
    // Set the timeout, i.e. if 2000 ms pass and no data is received,
    // throw an exception.
    _serialPort.ReadTimeout = 2000; // 2 sec
    // (optional) Here you can send a signal to the Arduino telling it
    // to start sending data.
    // You might not need it though.
    //serialPort.WriteLine(START_COMMAND);
    _looping = true;
    _serialPortReadingThread = new Thread(SerialPortReadingThread);
    _serialPortReadingThread.Start();
}

/// <summary>
/// Reads the serial port continuously.
/// </summary>
private void SerialPortReadingThread()
{
    while (_looping)
    {
        try
        {
            _receivedSingleMessage = _serialPort.ReadLine();
        }
        catch (Exception e)
        {
            if (e is TimeoutException)
            {
                Console.WriteLine("TimeoutException inside MyArduinoPlugin.cs");
                return;
            }
            else
                throw;
        }
        ParseData();
    }
}

You need to define the member variables I've used here; I'm not walking through them one by one because I want to keep this tutorial short and clear. As mentioned before, you can find the complete project here.
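For reference, here is a sketch of the field declarations the snippets above assume (inside MyArduinoPlugin); the exact declarations in the complete project may differ, so treat this as a guide rather than a copy-paste target.

// Requires: using System.IO.Ports; using System.Threading;
private SerialPort _serialPort;                  // serial port handed to Init()
private volatile bool _looping;                  // keeps the reading thread alive
private Thread _serialPortReadingThread;         // background thread that reads the COM port
private volatile string _receivedSingleMessage;  // last raw line received from the Arduino
private MyArduino.Vector3 _latestPosition;       // parsed position, copied out in Read()
private DofData _latestData;                     // parsed rotation, copied out in Read()
private const string COMMA = ",";                // separator used when splitting the message in ParseData()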

If you see the message "TimeoutException inside MyArduinoPlugin.cs" in your console, it means FreePIE waited 2 seconds (the ReadTimeout we set) without receiving any data. Check your Arduino to make sure it is sending data to the COM port, and check whether FreePIE is listening to the right COM port.

ParseData() parses the received message and extracts the position and rotation of the controller (the Arduino). The body of ParseData() depends entirely on the format of the message. In my case, it is as follows:

/// <summary>
/// Extracts positional data from the message received from the Arduino.
/// </summary>
private void ParseData()
{
    // Extract position and orientation.
    // Sample received message:
    // pos: 1.00000,0.00000,1.00000 rot: 0.99985,-0.00356,-0.00462,-0.01626
    // which follows the format --> pos: x, y, z rot: pitch, roll, yaw
    var stringCollection = _receivedSingleMessage.Replace("rot:", COMMA)
                                     .Replace("pos:", COMMA)
                                     .Split(',');
    // We omit stringCollection[0] because it does not contain positional data and is empty.
    float posX = Convert.ToSingle(stringCollection[1]);
    float posY = Convert.ToSingle(stringCollection[2]);
    float posZ = Convert.ToSingle(stringCollection[3]);

    float pitch = Convert.ToSingle(stringCollection[4]);
    float roll  = Convert.ToSingle(stringCollection[5]);
    float yaw   = Convert.ToSingle(stringCollection[6]);

    _latestPosition = new MyArduino.Vector3(posX, posY, posZ);
    _latestData = new DofData(pitch, roll, yaw); // uses the constructor added in step 10
}

7. Now it is time to implement the Read(…) method. Its body is short and simple:

protected override void Read(SerialPort serialPort)
{
    Position = _latestPosition;
    Data = _latestData;
    Thread.Sleep(1);
}

As you see, it simply copies the latest data to Position and Data. The Position member variable that holds x, y and z is not defined yet; we are going to define it in the parent class ComDevicePlugin. The Data member variable, however, is already defined in the parent class. In the next step, we are going to modify the parent class so that it defines the missing pieces.

8. Go to ComDevicePlugin.cs and add 921600 to the list of baud rates if you need it (optional).

foreach (var rate in new int[] { 1200, 2400, 4800, 9600, 14400, 19200, 38400, 57600, 115200, 921600 })
{
    property.Choices.Add(rate.ToString(CultureInfo.InvariantCulture), rate);
}

9. In the same class, add the member variable Position:

public Vector3 Position { get; protected set; }

The Vector3 class is not defined in FreePIE. I have written this class and you can find it in the provided code.
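If you would rather write it yourself while following along, a minimal version could look like the sketch below; this is an assumption based on how the snippets use it (lowercase x/y/z members and a three-argument constructor), not necessarily the author's exact implementation.

namespace MyArduino
{
    // Minimal placeholder; the class in the provided code may contain more members.
    public class Vector3
    {
        public float x;
        public float y;
        public float z;

        public Vector3(float x, float y, float z)
        {
            this.x = x;
            this.y = y;
            this.z = z;
        }
    }
}

If you put it in the MyArduino namespace like this, remember to add a using MyArduino; directive (or fully qualify the type) wherever the Position property is declared.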

10. Add a constructor to DofData struct to make it easier to work with:

public struct DofData
{
    public float Yaw;
    public float Pitch;
    public float Roll;
 
    public DofData(float pitch, float roll, float yaw)
    {
        Pitch = pitch;
        Roll = roll;
        Yaw = yaw;
    }
}

11. Expose the Position member variable to IronPython script by adding px, py and pz properties to DofGlobal class:

public abstract class DofGlobal<TPlugin> : UpdateblePluginGlobal<TPlugin> where TPlugin : ComDevicePlugin
{
    protected DofGlobal(TPlugin plugin) : base(plugin){}
 
    public float yaw
    {
        get { return plugin.Data.Yaw; }
    }
 
    public float pitch
    {
        get { return plugin.Data.Pitch; }
    }
 
    public float roll
    {
        get { return plugin.Data.Roll; }
    }
    public float px
    {
        get { return plugin.Position.x; }
    }
    public float py
    {
        get { return plugin.Position.y; }
    }
    public float pz
    {
        get { return plugin.Position.z; }
    }
}

12. Override the Stop() method as follows:

public override void Stop()
{
    // (optional) Stop arduino.
    //_serialPort.WriteLine(STOP_COMMAND);
    //Thread.Sleep(200);

    // Stop the reading thread and wait for it.
    _looping = false;
    Thread.Sleep(100);

    _serialPortReadingThread.Join();
    _serialPortReadingThread.Abort();
    base.Stop();
}

By overriding this method, you can safely stop the Arduino and the reading thread.

13. (Optional step) Remove ReadFloat(…); we don't need it.

14. Run FreePIE (the GUI project). Go to File > Open…, open hydra_setup_brief.py, and add the following to the update() method:

def update():
  handleStartButton()

  diagnostics.watch(myArduino.px)
  diagnostics.watch(myArduino.py)
  diagnostics.watch(myArduino.pz)

  diagnostics.watch(myArduino.roll)
  diagnostics.watch(myArduino.pitch)
  diagnostics.watch(myArduino.yaw)

  # There are two Vive controllers. Here we choose the one with index 0 to represent our Arduino.
  index = 0

  hydra[index].x = myArduino.px
  hydra[index].y = myArduino.py
  hydra[index].z = myArduino.pz

  hydra[index].roll = myArduino.roll
  hydra[index].pitch = myArduino.pitch
  hydra[index].yaw = myArduino.yaw

15. Go to Settings > Plugins > MyArduino Plugin and select the COM port that is connected to your hardware (the controller). Check that the baud rate is correct and then click OK. Make sure no other program is connected to the same port; otherwise, your application cannot read it.

16. Run the script (Script > Run script). In the "Watch" pane below the script, you should be able to see the values of px, py, pz, roll, pitch and yaw (because we used diagnostics.watch(…) on them). Also, if everything works properly, you should see the Hydra controller in virtual reality moving and rotating as you move or rotate your controller.

Note: The SteamVR coordinate system is likely different from your controller's coordinate system, which causes the Hydra controller to move differently from what you expect. You should find the mapping between your controller's coordinate system and SteamVR's. (The SteamVR coordinate system is right-handed, with +y up and distances in meters.)
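As a purely illustrative example (the real mapping depends entirely on your hardware), you could apply the remap in a small helper called from ParseData() before _latestPosition is stored; the axis swap and the scale factor below are hypothetical placeholders.

// Hypothetical helper: converts a raw device-space position into SteamVR-style axes.
// Derive the real mapping by comparing how the in-VR controller moves against the physical device.
private static MyArduino.Vector3 MapToSteamVrAxes(float rawX, float rawY, float rawZ)
{
    const float scale = 1.0f;  // placeholder unit-conversion factor
    // Example assumption only: the device reports +z as up, while SteamVR treats +y as up.
    return new MyArduino.Vector3(scale * rawX, scale * rawZ, -scale * rawY);
}

The rotation values (pitch/roll/yaw) may need a similar remap, which you can do either here or in the FreePIE script.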

My thoughts on Unity Certified 3D Artist exam

I took the "Unity Certified 3D Artist" beta certification exam on April 28, 2018. I had 120 minutes to answer 92 questions.

My background is programming. I had no formal art training, but I could make basic 3D models in Blender, import them into Unity, and create a basic scene with various light source types. I had made several 2D game prototypes at the time, but nothing complicated on the artistic side of the work. I hold the "Unity Certified Developer" certification (certified in April 2016). I had 3 years of experience with Unity and had been a VR game developer as my daytime job for two years.

The exam was difficult. It was perfectly tailored for 3D artists, and the questions were heavily scenario-based. I had to carefully weigh various factors (such as CPU/GPU usage vs. memory usage, quality vs. frame rate) and make a reasonable tradeoff based on the given scenario. For example, there was a question about the correct use of real-time and baked lighting for a downtown area (at night) with walking pedestrians and light poles. The target platform was low-end smartphones with limited memory. Only with that knowledge could you determine the right answer.

Some questions had photos with them, and you had to consider the image to be able to choose the right answer. As an example, there was an image showing a realistic sunrise in which the upper half was reddish and the bottom half was bluish. You had to choose the answer that could reproduce such a setting. As a programmer, I have always wanted to learn the techniques artists use to achieve these beautiful sceneries. I remember there was a question about how to efficiently reproduce light passing through the stained glass of a medieval church. Again, the answer was not obvious. The given choices were usually several lines long and similar to each other. Some wrong answers would have been right if the target platform had been different, so you had to pay attention to the details in the question.

I have one major complaint about some questions. They described a complex scene without showing any illustration. You had to imagine a complicated geometric scene by reading four lines of text, and then you were asked to do something based on this geometry. If you could not imagine it correctly, you were screwed. Sometimes you could imagine different shapes based on the same description. My advice to the Unity exam authors is to provide a picture instead of describing it.

In some scenarios, you were facing an issue (such as wrong lighting on a mesh, or flickering areas when the camera moves) and you had to figure out what could be wrong and how to resolve it. To answer these, you had to know various import options (like "Optimize Mesh", "Keep Quads", "Weld Vertices", "Use File Scale" and so on). In my opinion, without prior practical experience, you could not figure out what was going wrong.

A couple of questions were about scaling issues. For example, there was a scenario similar to this (some details removed or changed so as not to breach the NDA):

Your team has made a VR game in which the player experiences an indoor area from a dog's perspective. The furniture is scaled to twice its actual size to give the player the impression of looking through a dog's eyes. Now the team has decided to add an option for players to play as a human (so the player can choose to play as a human or a dog), and they want the size of the furniture to look right. As an artist, how can you achieve that?

You were given different choices that suggested changing the scale of an object by half, changing VRSettings.renderScale, etc. … Another question asked me to calculate the correct pixels-per-unit value if I wanted a sprite imported at a specific width.

These scenario-based questions spanned from architectural details of furniture (LODs, detail masks…) to filmmaking workflows (Cinemachine, various camera movements, and post-processing effects vs. using shaders in offline or real-time scenarios) to real-time visualizations of shiny brand-new cars in VR or augmented reality. I remember several questions on how to use curves (animation curves, easing-in/easing-out transitions, Blend Curves, Speed Multiplier, etc.) in Mecanim to demonstrate a powerful fist thrust in a cut-scene of a fighting game. Keyframing- and Dope Sheet-related questions were all there.

There were a few questions (about 3 or 4) about "Collab" (a version control system made by Unity) and even fewer questions about audio (which I don't remember). The "Collab" questions were about the current status of your local files (whether they are up to date, behind the latest version, or in conflict with the main repository).

A couple of questions were about physics (things like OnCollisionEnter vs. OnTriggerEnter). One question asked for the best approach to opening a door when the user interacts with it: where to put a script that used the Collider component (on the door frame, the door, or the doorknob). You were given a screenshot of the GameObject hierarchy. This was the only programming-related question I saw on this exam.

I will get my exam result in 4 months. (Update: My score was 395, well below the passing score of 500. I failed, but I will retake the exam after Unity provides an official course for it.)

In conclusion, I highly recommend studying for this certification (even if you are a programmer) because you'll gain a lot of valuable knowledge. I wish there were a course that taught all the exam material. (Update: Unity now offers a Coursera specialization for this exam called "Unity Certified 3D Artist Specialization".)