Announcing the first official OpenZWave library for UWP

As a follow-up to my recent OpenZWave blogpost ( /post/2017/01/20/Using-OpenZWave-in-UWP-apps ), a few things have happened since then.

First of all I’ve worked closely with the OpenZWave team, and we agreed to consolidate efforts. My library is now the official OpenZWave library for .NET and UWP, and has been moved out under the OpenZWave organization on GitHub:

       https://github.com/OpenZWave/openzwave-dotnet-uwp

At the same time the older .NET library has been removed from the main OpenZWave repository, so they can focus on the native parts of the library, and I’ve taken over the .NET effort.

To support both UWP and .NET well, I wanted the two codebases to be as close as possible, and for maintainability to reuse as much C++ code as possible (both wrappers are written in C++ – C++/CX for UWP and C++/CLI for .NET). The APIs for the two binaries should, however, be completely identical, so the code you write against them is the same. This meant a lot of refactoring, and breaking the original .NET API a little. At the same time, I did a full API review and cleaned it up to better follow the .NET naming guidelines. The overall API design hasn't changed too much, so moving from the older .NET API shouldn't be too much work (the original WinForms sample app was ported over with relatively little effort and is available in the repo as a reference).

However, because of the many small breaking changes, the NuGet package needed a major version increase. I've just released the v2.0.0-beta1 package for you to start using. The API is release quality though, and should be very close to the final release. If you've done any OpenZWave development, I encourage you to try it out and provide feedback.

Read the wiki to see how to get started: https://github.com/OpenZWave/openzwave-dotnet-uwp/wiki

Or try out the sample applications included in the repository. So go grab the NuGet package today and start Z-Waving!

                https://www.nuget.org/packages/OpenZWave/2.0.0-beta1

Note: If you are using IoT Core, beware that Microsoft pre-installs an OpenZWave-to-AllJoyn bridge. This bridge will grab your serial port, so make sure you disable that app before using the library. Second: the built-in AllJoyn ZWave service from Microsoft only supports the older Gen2 Aeotec adapter, whereas this library also works great with the Gen5 models.

Using OpenZWave in UWP apps

In my recent IoTivity hacking, I wanted to create a bridge between ZWave and IoTivity, and run it as a startup task on my Raspberry Pi.

Something similar already exists in Windows IoT Core, as a bridge between ZWave and AllJoyn. All you have to do is get a Generation 2 Aeotec ZWave Z-Stick, plug it into your device running IoT Core, and you've got yourself a ZWave-to-AllJoyn bridge. Unfortunately those aren't being sold any longer, only the Generation 5, which isn't compatible with the bridge. AllJoyn isn't doing too well either.

Anyway, back to IoTivity: To build a bridge, I needed a ZWave library that supports UWP. After all, most of my devices are ZWave devices. I have my SmartThings hub as a primary controller, but you can add any number of ZWave USB Sticks as secondary controllers to the ZWave network. So I can continue to rely on SmartThings (for now), while I start hacking with the USB controller against the same devices.

Luckily Donald Hanson has an awesome pull request for OpenZWave that adds a native UWP wrapper around OpenZWave, based on the .NET CLI wrapper. However the OpenZWave people were a little reluctant to merge it, as they already have a hard time maintaining the .NET CLI one, and suggested someone take it over. I offered to do this but haven't heard anything back from them. So while waiting, I just started a new repo to get going, with Donald's blessing. I've spent a lot of time cleaning up the code, as there were a lot of odd patterns in the old .NET library that made for an odd .NET API to work with (for example, Get* methods instead of properties, delegates instead of events, static types that aren't static, etc.). I'm also working on bringing the .NET and WinRT cores in sync, so the two can share the same source code. I'm not there yet, but it is getting close. If you have some C++ experience, I could really use some help with the abstraction bits to make this simpler.

The bottom line is I now have a functioning wrapper for OpenZWave that can be used from .NET and UWP, and it works with the new Gen5 Z-Stick (and many others)! There are many breaking changes though, so I don't know if OpenZWave wants to bring this into their fold. If not, I'll keep hacking away at it myself. I do expect to continue making a lot of breaking changes to simplify its use and make it more intuitive. Due to the nature of ZWave devices, you can't always rely on an instant response from a device that is trying to save battery – it could be several minutes before you get a response (or you might never get one) – so a simple async/await model doesn't work that well.

Anyway go grab the source-code (make sure you get the submodule too), and try it out: https://github.com/dotMorten/openzwave-dotnet-uwp

Here’s how you start it up:

using System.Linq;
using Windows.Devices.Enumeration;
using OpenZWave; //The wrapper's namespace

//Initialize
ZWMOptions.Instance.Initialize(); //Configure default options
ZWMOptions.Instance.Lock();       //Options must be locked before use
ZWManager.Instance.Initialize();  //Start up the manager
ZWManager.Instance.OnNotification += OnNodeNotification; //Start listening for node events

//Hook up the serial port
var serialPortSelector = Windows.Devices.SerialCommunication.SerialDevice.GetDeviceSelector();
var devices = await DeviceInformation.FindAllAsync(serialPortSelector);
var serialPort = devices.First().Id; //Adjust to pick the right port for your USB stick
ZWManager.Instance.AddDriver(serialPort); //Add the serial port (you can have multiple!)
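
If the first port isn't your Z-Wave stick, you can instead build a selector that matches the controller's USB vendor/product ID. A small sketch using the standard UWP SerialDevice API – the 0x0658/0x0200 pair is commonly reported for Sigma Designs-based sticks such as the Aeotec Z-Stick Gen5, but verify the IDs for your hardware in Device Manager:

//Match the Z-Wave controller by its USB VID/PID instead of taking the first port.
//0x0658/0x0200 is commonly reported for Sigma Designs-based sticks such as the
//Aeotec Z-Stick Gen5 - verify the IDs for your hardware in Device Manager.
var zwaveSelector = Windows.Devices.SerialCommunication.SerialDevice.GetDeviceSelectorFromUsbVidPid(0x0658, 0x0200);
var zwaveDevices = await DeviceInformation.FindAllAsync(zwaveSelector);
var zwavePort = zwaveDevices.First().Id;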


The rest happens in the notification handler. Every time a node is found, changed, removed, etc., an event is reported here, including responses to commands you send. Nodes are identified by the HomeID (one per USB controller) and the NodeID. You use these two values to uniquely identify a node on your network, and can then perform operations like changing properties via the ZWManager instance.
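
Here's a minimal sketch of what such a handler could look like. The exact notification type and member names are illustrative and may differ slightly in the final API, so check the repo for the real names:

using System.Diagnostics;
using OpenZWave;

//Illustrative sketch only - check the repo/wiki for the exact member names.
private void OnNodeNotification(ZWManager manager, NotificationReceivedEventArgs e)
{
    var n = e.Notification;
    switch (n.Type)
    {
        case NotificationType.NodeAdded:
            //A node was found on (or joined) the network
            Debug.WriteLine($"Node added: Home={n.HomeId:X8} Node={n.NodeId}");
            break;
        case NotificationType.ValueChanged:
            //A value changed - e.g. a sensor reading, or the response to a command you sent
            Debug.WriteLine($"Value changed on node {n.NodeId}");
            break;
        case NotificationType.NodeRemoved:
            //A node left the network
            Debug.WriteLine($"Node removed: Home={n.HomeId:X8} Node={n.NodeId}");
            break;
    }
}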

There’s a generic sample app you can use to find, interrogate and interact with the devices, or just learn from. Longer-term I’d like to build a simpler API on top of this to work with the devices. The Main ViewModel in the sample-app is in a way the beginnings of this.

And by all means, submit some pull requests!


IoTivity.NET – A Cross-platform .NET Wrapper

Since I’m a .NET developer who can’t be bothered to spend too much time in C++, what am I to do if I want to use the IoTivity library to expose and interact with devices using the OCF protocols? The SDKs provided are C++, C, Java (Android), and Objective-C (iOS). .NET is sorely missing (hint hint, OCF!).

Well, the key here is the C SDK. It exports the methods needed to import them into C# using P/Invoke. Even better, this approach works in .NET and UWP, and even in Xamarin.Android and Xamarin.iOS (and probably more). So why not create a set of C# classes in a .NET Standard library that imports the methods and calls into the native library?
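
For example, here's roughly what that P/Invoke layer looks like. This is a simplified sketch: I'm assuming the native library name ("octbstack") and the OCInit/OCStop entry points from the IoTivity C SDK, and collapsing the OCStackResult return type down to an int:

using System.Runtime.InteropServices;

static class OCStack
{
    public enum OCMode { Client = 0, Server = 1, ClientServer = 2 }

    //IoTivity's C SDK ships as the "octbstack" native library
    [DllImport("octbstack", CallingConvention = CallingConvention.Cdecl)]
    public static extern int OCInit(string ipAddr, ushort port, OCMode mode);

    [DllImport("octbstack", CallingConvention = CallingConvention.Cdecl)]
    public static extern int OCStop();
}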

Yeah I couldn’t come up with a reason why not, so I did: https://github.com/dotMorten/IotivityDotNet

This is still a work in progress, but it already allows you to create, discover, and interact with devices over the IoTivity protocol. Compared to the AllJoyn APIs, the IoTivity API is A LOT simpler. In fact you can create a device in roughly a single line of code, as sketched below.
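
Something along these lines – the class and parameter names here are hypothetical, so check the sample apps in the repo for the real API:

//Hypothetical sketch: register a binary-switch resource with one constructor call.
//The actual class and parameter names in IotivityDotNet may differ.
var light = new IotivityDotNet.DeviceResource("/light/1", "oic.r.switch.binary");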

To use, first initialize the Iotivity service:

IotivityNet.Service.Initialize(IotivityNet.ServiceMode.Client);

You can then search for device resources using Device Discovery:

var svc = new IotivityDotNet.DiscoverResource("/oic/res");
svc.ResourceDiscovered += (s, e) =>
{
    Console.WriteLine($"Device Discovered @ {e.Response.DeviceAddress}");
};
svc.Start();

When you're done, shut down the Iotivity service:

await IotivityNet.Service.Shutdown();

Look at the sample apps for examples of how to create devices that can be discovered and respond to requests from clients.


Note 1: Currently the SDK builds off a fork of mine that has a set of bug fixes for IoTivity, rather than using the official builds. All of these bugs are logged and waiting to be fixed; the IoTivity maintainers have been pretty responsive, and one has already been fixed recently.


Note 2: Xamarin doesn’t work at this point. I have not been able to compile the native libraries for those platforms. However, it should “just work” once that’s done. If you’re interested and have experience building native iOS and Android libraries, I would love a hand with it.

Is AllJoyn dead?

I’ve spent a lot of time the past two years building various AllJoyn-related libraries and services. In fact, every single smart device in my home is accessible via AllJoyn, and most of that work is available as a set of repos on GitHub. So did I waste my time doing this (apart from the fun it was)? Here’s my take on all of this, written from my personal point of view focused on building a smart home. I might be wrong, or my arguments may not apply in other scenarios, so please do use the comment section to add additional viewpoints.


First of all, what is AllJoyn? AllJoyn is a protocol and a set of standard interfaces for interacting with your smart devices on the local network. It is (was?) meant to be the standard that unifies all the other IoT standards, either by implementing AllJoyn directly in the device, or by providing Device Service Bridges (DSBs) that translate a protocol to AllJoyn. I created several DSBs to bridge my range of devices into one common set of AllJoyn interfaces (LIFX, Philips Hue, ecobee, ZWave, ZigBee HA, ZigBee SE, etc.). This was neat because I, for instance, didn’t have to care what type of light I was controlling: the code to discover, flip the switch, dim, or change color would be the same. And if a new device or protocol came out tomorrow, my apps didn’t have to change, as long as a bridge was created.

However, AllJoyn isn’t the only standard trying to do this: the Open Connectivity Foundation, started by Intel, has similar goals, albeit some years behind on the implementation. It started to look like the Betamax/VHS wars all over again, but luckily the two decided to join forces and merge into one organization.

However, it looks as if the direction going forward is betting on OCF’s protocol and its “IoTivity” open-source reference implementation, rather than the (slightly) more established AllJoyn protocol. So does that mean AllJoyn is dead? It’s a question people are very reluctant to give you a clear answer to. Here’s why I think that is:

1. AllJoyn already ships as part of Windows 10, and Microsoft is pretty good at supporting its products and features for several years.

2. AllJoyn is still seeing development (regular commits are still happening).

3. OCF will provide an AllJoyn-to-OCF bridge, bringing the two standards closer and taking the best of both.


Now, I don’t have any inside knowledge of what’s really happening with AllJoyn and its future, but here are my thoughts from an outside perspective:

Microsoft will often bring up #1 when asked about the future of AllJoyn. However, I don’t read much into this; their support lifecycle requires them to support it for a while. I find it doubtful that we’ll see Microsoft pushing forward with more AllJoyn support beyond what’s already there. They might update the binary with the latest versions, but I doubt we’ll see anything beyond that. Judging from Microsoft’s AllJoyn repos on GitHub, they are very clear that they don’t have the resources to maintain or improve them – see this comment, for instance. Judging from the IoTivity committers and its developer mailing list, it’s the same people from Microsoft who also contributed to AllJoyn, so it’s fair to assume resources have been moved away from AllJoyn towards IoTivity.

OCF says they’ll provide a bridge, and it’s on the timeline for v2.0, due in April. However, looking at how different the specs are and how differently devices are exposed, I’m guessing the bridge will expose AllJoyn devices to IoTivity in such a generic way that it becomes virtually useless (but I would love to be proved wrong). Yes, there might be a boolean exposed that flips the switch on a light, but if the object isn’t actually modeled as a device that looks like any other “proper” IoTivity light – just a set of generic properties – it doesn’t really help us create a great user interface for controlling all our devices. We’ll have to see, but the bridge feels and smells like a temporary band-aid until everyone has moved over. And let’s be honest… there really aren’t that many AllJoyn devices to bridge.

I’ve also seen the merger cause a bit of a split among the AllSeen Alliance members. Some do not like the OCF licensing and find it too restrictive; others find the merger more important, since it brings a lot of big players on board, like Samsung and Intel. This split doesn’t bode well for AllJoyn, and the licensing concerns don’t really affect you and me, who just want to talk to these devices with our apps, bots, dashboards, etc.

AllJoyn might be more mature, but the only wide-ranging consumer product that shipped with AllJoyn was the LIFX light bulb, and LIFX isn’t even betting much more on AllJoyn1. Yes, there are more products, like various music players, humidifiers, etc. (all with a horrible ecosystem to control them), but you’ll be hard pressed to find any other AllJoyn products selling in significant numbers, and none of them implement a standard set of usable interfaces. The AllSeen Alliance likes to bring up that over 325 million devices support AllJoyn, but they are including Windows 10, which, while having an AllJoyn routing service installed, really shouldn’t be counted as an AllJoyn device. The fact is, AllJoyn sadly never really caught on.


So based on this, is AllJoyn dead and should you switch your bets to IoTivity? YES! (Sorry guys, but someone had to say it.)

Will IoTivity be a waste of time and die off soon? I seriously doubt it. All the right players are part of it, and I don’t really see any other serious contender trying to solve this problem.

If you already have AllJoyn products shipped, you should be OK for a bit. However, creating a new product today based on AllJoyn seems really senseless. Personally, I’ve stopped all my AllJoyn development and started building a .NET Standard wrapper around the IoTivity APIs (help wanted!). Unless you control your entire internal ecosystem and use AllJoyn to communicate within it (it’s really great and simple for that), consider your AllJoyn work a sunk cost and just move on already.


Disagree or have additional insights to share? Please write in the comment section.


1) Quote from a LIFX VP: “We're dependent on chipset providers for our AJ [AllJoyn] support, and their focus has somewhat shifted elsewhere of late, as the market evolves and consolidates.” Also, long-standing AllJoyn bug reports aren’t getting fixed by LIFX, indicating that they have no commitment to AllJoyn.

IoT Series

Yes yes yes, I know… this blog has been pretty quiet lately. Sorry – I’m trying to fix that. I’ve been busy releasing a very large awesome .NET product, spending some time on the UWP Community Toolkit, hacking away at various IoT projects, and, most importantly, spending time with my family. My IoT-related GitHub repos are getting a little out of hand, but I’m having a lot of fun creating all the building blocks for a smart-home. My ultimate goal is to build a system that can interact with any protocol, and is resilient against internet outages (your light switch should still work even if the internet is down – yes, I’m looking at you, SmartThings!).

Regarding my IoT projects, I’d like to start writing down my progress, making some notes, rambling, etc. It’s a great way to get feedback, share your work, and gather your own thoughts – and having to explain something helps me really understand the topic. So I’d like to start my blog back up with various IoT topics. Those who have been following me on Twitter know I’ve been hacking away at AllJoyn, ZigBee, ZWave, IoTivity, etc. I’m going to start writing some blogposts gathering my experiences and thoughts.

While this blog might have been a little quiet, I have actually been writing some articles on Hackster.io, so I’ll start with a few links to those, before diving into a series of IoT-related blogposts.

Stay tuned….

Adding a Gaze Cursor to your HoloLens App

Adding a gaze cursor to your app is important to give the user feedback about what they’re looking at and whether they can interact with it.

First, make sure you’ve installed the HoloToolkit into your project.

Add a new empty GameObject and rename it to “Managers”.

Select the “Managers” object and in the Inspector click “Add Component” and add the “Gaze Manager” script.


In the added component’s “Raycast Layer Mask” dropdown, deselect “TransparentFX”.


From HoloToolkit\Prefabs\Input\, add the “Cursor” prefab to the Managers object.


Save the scene, build the app, and deploy. You now have a cursor at the center of your view that follows holograms and the spatially mapped mesh.
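
Under the hood, a gaze cursor is just a raycast from the camera every frame. Here’s a rough sketch of the idea (not the actual HoloToolkit GazeManager source – the toolkit version adds stabilization, layer masks, and more):

using UnityEngine;

//Rough sketch of what a gaze cursor does each frame
public class SimpleGazeCursor : MonoBehaviour
{
    public GameObject cursor; //Assign the cursor object in the Inspector

    void Update()
    {
        var head = Camera.main.transform;
        RaycastHit hit;
        if (Physics.Raycast(head.position, head.forward, out hit))
        {
            //Place the cursor at the hit point, aligned with the surface normal
            cursor.SetActive(true);
            cursor.transform.position = hit.point;
            cursor.transform.rotation = Quaternion.FromToRotation(Vector3.up, hit.normal);
        }
        else
        {
            cursor.SetActive(false); //Nothing hit - hide the cursor
        }
    }
}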

Rendering the Spatial Mapping Mesh

A lot of HoloLens apps will render the mesh the device scans to show you which surfaces it has detected – it can give a really cool effect and helps you understand the play space.

Also, if you’re using the emulator, you won’t actually be able to see the virtual room you’re placing holograms in, so being able to render the spatially mapped mesh can be useful for building apps if you’re not among the lucky few who have an actual HoloLens yet.

So let’s use the spatial mapping mesh we get from the HoloLens sensors and render it inside the application.

First, make sure you’ve installed the HoloToolkit into your project.

From HoloToolkit\Prefabs\SpatialMapping drag the SpatialMapping prefab into the root of your Hierarchy.


Select the added “SpatialMapping” object, and ensure “Draw Visual Meshes” is checked. The default material is “Wireframe” – feel free to experiment with other materials.
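
You can also flip this at runtime from code. A small sketch, assuming the HoloToolkit’s SpatialMappingManager singleton (the namespace and member names may vary between toolkit versions):

using HoloToolkit.Unity;
using UnityEngine;

public class MeshToggle : MonoBehaviour
{
    void Update()
    {
        //Toggle the scanned-mesh rendering on/off with the space key
        //(handy while testing in the Unity editor)
        if (Input.GetKeyDown(KeyCode.Space))
        {
            SpatialMappingManager.Instance.DrawVisualMeshes =
                !SpatialMappingManager.Instance.DrawVisualMeshes;
        }
    }
}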


Build your app and deploy it. You should now see the mesh rendered on top of walls, floors etc.


Using HoloLens’ Spatial Mapping to occlude objects

Spatial Mapping is probably one of the most important aspects of the HoloLens. It’s what the device uses to know where it is in a room, and it’s what you use to make holograms interact with the real world. It essentially scans your surroundings and builds a 3D model. Here’s an example of some of the mesh it has generated for a house:

[Image: spatial mapping mesh of a house]

Now we can use this to avoid seeing our holograms “through walls”. If we bring up the sample we built in the first blogpost, here’s what happens when the holograms go behind a wall and ruin the illusion:

[Animation: holograms showing through the wall]


First, enable the SpatialPerception capability. In Unity go to Edit –> Project Settings –> Player, click the “Windows Store” tab, and check the “SpatialPerception” capability.


Note: If you have already generated a Visual Studio project, this checkbox doesn’t actually “work”. You can do one of two things: either delete the generated app and regenerate it, or go into Visual Studio and manually edit “Package.appxmanifest” in a text editor (the manifest designer doesn’t have this capability listed yet) by adding the following line to the manifest:

    <uap2:Capability Name="spatialPerception" />
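
If the uap2 namespace isn’t already declared in your manifest, you’ll also need to add it to the <Package> element – something like this (keep whatever attributes and namespaces your manifest already declares):

    <Package ...
             xmlns:uap2="http://schemas.microsoft.com/appx/manifest/uap/windows10/2"
             IgnorableNamespaces="uap uap2 mp">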


Next we’ll add the SpatialMapping component from the HoloToolkit. Make sure you’ve followed the steps from the previous blogpost to add the HoloToolkit to your project.

Select the object or collection you want spatial mapping to occlude. In this case I’ll select the HologramCollection, so all children of this collection will get occluded. Click “Add Component” in the Inspector and add the “Spatial Mapping Renderer”.


That’s it! Now save the scene, build your project again, and redeploy from Visual Studio.

[Animation: holograms occluded by the spatial mapping mesh]

The clipping is a little bit off here – that’s because the mixed reality capture webcam is offset compared to what you really see in the HoloLens, and the mesh scan wasn’t too good in this corner.

Installing HoloLens HoloToolkit into your Unity Project

Make sure you first read “Creating your very first holographic app in Unity” for setting up your HoloLens project.

The HoloLens team has created a useful “HoloToolkit” for use with Unity. It provides things like spatial mapping, a client/service for sharing holograms among multiple users, cursors, gesture handling, spatial sound, etc.

It’s pretty simple to install into your project, so here’s the simple step-by-step:

  1. Go to https://github.com/microsoft/HoloToolkit-Unity and click “Download Zip” to download the toolkit.
  2. Right-click the downloaded zip, select properties, Check the “Unblock” checkbox and click OK.
  3. Unzip the folder “HoloToolkit-Unity-master\Assets” into your Assets folder in your Unity project.

Done!

You should now see all the HoloToolkit assets in your Project view (Unity doesn’t even need to restart – it will auto-detect the new files and import them).


I’ll be blogging about using the Toolkit in upcoming blogposts as I figure out how to use the pieces.

Creating your very first holographic app in Unity

Most of the tutorials at the Holographic Academy start out with a starter project with a bunch of stuff already set up for you. If you’re new to Unity and/or holographic development, I found that a little bit like “cheating” and wanted to know how to do things from scratch, to properly understand them. I thought I would share my findings in a set of blogposts – they will serve as notes for myself, but I figured they might be useful for others as well. If something is wrong or you know a better way, please comment in the comment section.

I have all the steps recorded in a video at the bottom, but for those who like to read and understand the steps, I’ll go through them first. So let’s get started.

First launch Unity and create a new project. Name it whatever you’d like.

After launch, you’ll see in the Hierarchy view a “Main Camera” and a “Directional Light” object.

First we’ll configure the camera. Keep the name “Main Camera” – from my understanding, this is what automatically becomes the camera controlled by your HoloLens. But we have to configure it to be placed at the center of the world.

Select the camera in the Hierarchy, and in the Inspector set the position and rotation to all zeros.


Next we need to set the camera to render “nothing” as black. By default it renders blue skies, but since black renders as transparent on the HoloLens and we want to see the real world around our holograms, set “Clear Flags” to “Solid Color” and “Background” to black.


It is also recommended to set the Near Clipping Plane to 0.85m. This prevents users from getting “too close” to holograms and going all cross-eyed from it, which can be very uncomfortable. But feel free to set it to 0.1 to get really up close to your holograms.
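
If you’d rather do this from a script than the Inspector, it’s a one-liner against Unity’s standard camera API:

using UnityEngine;

public class CameraSetup : MonoBehaviour
{
    void Start()
    {
        //0.85m is the recommended near clip distance for comfort on HoloLens
        Camera.main.nearClipPlane = 0.85f;
    }
}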


Next we need to configure the app for virtual reality. Go to Edit –> Project Settings –> Player. Click the green “Store Logo” tab, expand “Options”, and check “Virtual Reality Supported”. You should then see “Windows Holographic” listed under the “Virtual Reality SDKs” list.


Lastly, we configure the app to run with the fastest rendering possible. Go to Edit –> Project Settings –> Quality. Under the green “Store Logo” tab, next to “Default”, click the little black dropdown triangle (highlighted with the red arrow below) and select “Fastest”. You should now see “Fastest” in green in the first line under the store logo.



Now at this point we’ve done all the steps for configuring your Holographic app. To recap:

  1. Place camera at 0,0,0, name it “Main Camera” and set the background to solid color black.
  2. Enable the app for Virtual Reality
  3. Set quality settings to “Fastest”

At this point we could save the app and build a Windows Store Visual Studio project to deploy to the HoloLens, but since we haven’t added anything to the scene, it would be a boring app, so let’s do that first before creating the Visual Studio project.

Right-click inside the Hierarchy panel and select “Create Empty”. You should see a “GameObject” get created. Right-click it, click “Rename”, and name it something like “HologramCollection”. Double-check the transform settings and make sure the position is set to 0,0,0.

Next, right-click the HologramCollection object and select “3D Object –> Cube”. A new cube should be created under the collection in the Hierarchy panel.


Select the Cube, and set its position to 0, –0.5, 2. This means “place it 0.5 meters below the camera, and two meters in front”. When the app starts, this is where the hologram will be placed relative to the HoloLens. The cube is 1x1x1 meters – a little bit large for a hologram – so set the scale to 0.25 for all 3 values to make it 0.25m on each side.
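
If you prefer, the same placement can be done from a script attached to the cube (a sketch using the standard Unity transform API):

using UnityEngine;

public class PlaceCube : MonoBehaviour
{
    void Start()
    {
        //0.5 meters below the camera origin, 2 meters in front of it
        transform.localPosition = new Vector3(0f, -0.5f, 2f);
        //Shrink the default 1m cube to 0.25m on each side
        transform.localScale = Vector3.one * 0.25f;
    }
}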


That’s it for setting up our scene. Next let’s create and build it. First go to “File –> Save Scene as…” and give it a name like “Main Scene”.

Next go to “File –> Build Settings…”. Click “Add Open Scenes” and ensure the scene you just saved got added to the list at the top. For platform, select “Windows Store”, set SDK to “Windows 10” and UWP Build Type to “D3D”, then click “Build”.


You’ll be asked for a folder to create it in. Create a folder in your project called “App”, and select that folder. When the project is done being created (it takes a while the first time), go into the App folder and open the Visual Studio solution.

Next, set the build architecture to x86, and select either the holographic emulator, or, if you have a device, select “Remote Machine” and enter the IP (tip: from within the HoloLens, open the start menu and ask Cortana “What is my IP?”), or plug the device in with USB and select “Device”.

Hit F5 and start your first holographic app!

All the steps are also shown in the video below. At the end of the video you’ll see the app running inside the HoloLens.


Enjoy!