
Tech Challenge for 2017: Testing IoT Applications

If 2016 was the year of the emergence of the Internet of Things (IoT), 2017 will no doubt be the year it becomes mainstream. It seems as though new initiatives are being launched almost daily and testing IoT applications is now a priority.

IoT applications certainly present new technical challenges to the software architects and engineers who build them – not only because they introduce new tools and platforms, but more importantly, because IoT applications in many ways represent a brand new application paradigm. We’re doing new and exciting things – like gathering massive amounts of data and reacting to it in real time – that many haven’t done before.

This same paradigm shift will present a challenge to software testers as well. Whether you’re testing functionality, stability or performance, the way you’ll go about testing on an IoT project is likely to be significantly different than the way you’d test a more traditional enterprise or web application.

There are too many things to talk about when it comes to testing and IoT to cover in a single blog post – so this will be a series of posts, each of which looks at a particular aspect of testing in the realm of IoT. This first post will take a look at the fact that the most important actors in an IoT application are machines – the sensors that generate the data we gather and analyze. This fact alone makes feature testing in IoT a very different thing than feature testing in more traditional applications built around human users.

Machines as Actors

One of the first mental adjustments IoT teams need to make is to recognize that the primary actor is not a human user, but rather a vast network of devices, silently gathering data about something someone cares about.

When testing a traditional application – let’s use a Web app as an example – test cases tend to start and finish with human action:

  • User selects a product
  • User puts product in basket and adjusts quantity
  • User enters ship-to location
  • User clicks Save
  • System calculates total price and estimated shipping costs (things our test case can verify)

In contrast, consider a hypothetical IoT test scenario:

  • Location sensor attached to a truck passes through a “hot zone” in Chicago and records that it was near that point
  • Location sensor contacts a gateway and passes along its saved data, which is forwarded by the gateway to a data gathering endpoint
  • Data gathering endpoint records the messages, then analyzes them for any important changes
  • System detects that the truck just registered in the Chicago hot zone was supposed to be in Detroit at this time
  • System sends a notification to a truck dispatcher, alerting him there’s a truck currently in an unexpected location

There are several key differences between these two scenarios. The most fundamental difference, and one that is of great importance to testers, is one of perspective.

The traditional Web test case was fully human-centric. A human started the transaction, modified some data, and completed the transaction. This type of transaction is relatively easy to handle from a testing perspective, whether you are doing manual testing or building an automated test. Assuming the role of a user, you initiate a transaction, enter some known data values, and then assert that specific, system-generated values (e.g. price and shipping costs) were computed properly.
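
To make the contrast concrete, here is a rough sketch of what an automated version of that checkout test might look like. The endpoint, field names, and expected totals are all hypothetical – the point is simply the familiar shape: drive known inputs in, assert known outputs out.

    # Hypothetical automated version of the web checkout test.
    # The endpoint, payload fields, and expected totals are invented for
    # illustration -- the shape is what matters: known inputs, asserted outputs.
    import requests

    def test_checkout_totals():
        order = {
            "product_id": "SKU-1234",
            "quantity": 2,
            "ship_to": {"city": "Detroit", "state": "MI", "zip": "48201"},
        }
        response = requests.post("https://shop.example.com/api/checkout", json=order)
        assert response.status_code == 200

        totals = response.json()
        # The system-generated values our test case verifies.
        assert totals["total_price"] == 39.98
        assert totals["shipping_cost"] == 5.99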

In the IoT scenario, humans are fully passive and, interestingly, optional. A device sent a message that an algorithm determined was important, which triggered a new message to be sent to yet another device. This raises some important questions:

  • How do you, as a tester, initiate this test?
  • How do you know where to look to see if the requirement (e.g. notification message sent) was met?

Let’s consider each of these questions in turn.

Initiating an IoT Test Case

You’ll quickly discover, when testing in the IoT space, that you need a way to send controlled messages. You won’t have user-friendly web pages where you can key in data values to produce a specific outcome – you need to generate a controlled test message, with specific data values in it, that looks to the system like it came from a sensor.

That sounds like fun.

This means, in order to test an IoT system, you’re going to need tools. Even if you’re doing manual testing, you need a way to communicate with the system in a very specific way – one that doesn’t include a traditional user interface.

We’re not talking about a generic piece of software you can find on Google and install in a browser. There hasn’t been much standardization in the IoT space when it comes to specific message formats. This means your development team is going to have to build an emulator – a software utility that can create messages that look to the system just like messages it receives from real sensors.
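
To give a feel for what such a message might contain, here is a purely hypothetical example of the kind of payload an emulator would need to produce. The ingestion URL, field names, and message format are all assumptions for illustration, since real IoT systems vary widely in how their sensors report in.

    # A minimal, hypothetical "controlled test message" -- a payload hand-built
    # to look like it came from a location sensor on a truck. The field names,
    # message format, and ingestion URL are all assumptions for illustration.
    import time
    import requests

    fake_sensor_message = {
        "device_id": "truck-0042",        # the sensor we are pretending to be
        "timestamp": int(time.time()),    # "now", in epoch seconds
        "latitude": 41.8781,              # coordinates inside the Chicago hot zone
        "longitude": -87.6298,
        "event": "zone_entry",
        "zone_id": "CHI-HOT-07",
    }

    response = requests.post(
        "https://iot.example.com/ingest/location",   # hypothetical ingestion endpoint
        json=fake_sensor_message,
        timeout=10,
    )
    response.raise_for_status()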

Your development team will need this emulator anyway, to do its own testing, and to diagnose problems. So someone on the team will inevitably need to build one. (Our strong advice to IoT teams is to make this one of the top-priority tasks in the development backlog – the sooner you have one available, the sooner it’ll start paying dividends.)

The emulators we’ve built have the following general capabilities:

  • They can accept user input when executed, in order to inject specific data values into messages (for executing functional test scenarios like the example we’ve been talking about).
  • They can generate huge numbers of messages in relatively short time spans, so we can use the emulator for stress / capacity testing.
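
For a feel of what that utility might look like, here is a bare-bones sketch. It isn’t our actual tool – the message format, ingestion endpoint, and command-line flags are all invented for illustration – but it shows the two capabilities side by side: injecting specific values for a functional scenario, and flooding the system for a stress run.

    # Bare-bones sensor-emulator sketch (hypothetical message format and endpoint).
    # Mode 1: send one message with user-supplied values (functional testing).
    # Mode 2: send many generated messages quickly (stress / capacity testing).
    import argparse
    import random
    import time
    import requests

    INGEST_URL = "https://iot.example.com/ingest/location"   # assumed endpoint

    def send_message(device_id, latitude, longitude, zone_id):
        message = {
            "device_id": device_id,
            "timestamp": int(time.time()),
            "latitude": latitude,
            "longitude": longitude,
            "event": "zone_entry",
            "zone_id": zone_id,
        }
        requests.post(INGEST_URL, json=message, timeout=10).raise_for_status()

    def flood(count):
        # Generate a burst of semi-random messages for a capacity test.
        for i in range(count):
            send_message(
                device_id=f"truck-{i % 500:04d}",
                latitude=41.0 + random.random(),
                longitude=-88.0 + random.random(),
                zone_id=random.choice(["CHI-HOT-07", "DET-HOT-02", "CLE-HOT-11"]),
            )

    if __name__ == "__main__":
        parser = argparse.ArgumentParser(description="Fake-sensor emulator (sketch)")
        parser.add_argument("--device-id", default="truck-0042")
        parser.add_argument("--lat", type=float, default=41.8781)
        parser.add_argument("--lon", type=float, default=-87.6298)
        parser.add_argument("--zone", default="CHI-HOT-07")
        parser.add_argument("--flood", type=int, help="send N generated messages instead")
        args = parser.parse_args()

        if args.flood:
            flood(args.flood)
        else:
            send_message(args.device_id, args.lat, args.lon, args.zone)

Run it with no arguments to “roll a truck past” the Chicago hot zone using the default values, or with --flood 10000 to hammer the ingestion endpoint for a capacity test.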

Given a suitable emulator provided by your development team, you will be able to send a message with specific information in it, to “trick” the system into believing that a truck just rolled past a hot zone in Chicago.

Checking an Expected Result

Okay, so we were able to send that message. Concluding the test successfully means verifying that another message (let’s say a push notification to an iPhone) was sent.

Once again, this will be more challenging than testing a simple web application, because the expected outcome won’t be just sitting there in the same browser window where you entered your test data. You’ll have to go hunting for it. More specifically, you’ll have to tell it where to go, and be watching for it when it gets there.

In an IoT app that includes an alerting feature, there will undoubtedly be some sort of routing or subscription feature that tells the system where to send alerts (or which allows users to subscribe to them). So you’ll have to do all of this setup work ahead of time. You’ll probably have to put an email address or phone number – associated with a device you’re using for testing – into a subscription database. Oftentimes there’ll be an admin user interface where you can do that.
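
How that setup happens will vary from system to system – an admin screen, a configuration file, or an API call. As a purely hypothetical sketch, assuming the system exposes a REST endpoint for managing alert subscriptions, it might be as simple as:

    # Hypothetical one-time setup: subscribe a test device to "truck off-route"
    # alerts so the system knows where to send the notification we intend to verify.
    # The subscriptions endpoint and field names are assumptions for illustration.
    import requests

    subscription = {
        "alert_type": "truck_off_route",
        "channel": "push",
        "device_token": "TEST-IPHONE-TOKEN-123",   # the test device we'll be watching
        "email": "qa-alerts@example.com",          # fallback channel for the test run
    }

    requests.post(
        "https://iot.example.com/admin/subscriptions",
        json=subscription,
        timeout=10,
    ).raise_for_status()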

So, assuming you’ve taken care of this setup work and the system knows to send alerts your way, when you drop the message into the IoT system using that emulator… bam! You should get your alert.

Well, you’ll get it eventually. That’s the other thing to be aware of with IoT systems. They’re what your development folks would call “massively asynchronous.” Meaning, in plain English, that things are done in fits and starts – not in one long, orderly and continuous process.

Under Pressure

In order for an IoT system to perform well when thousands of messages are pouring in, it has to be resilient under pressure. This means that incoming messages get tucked away safely when they are received – we don’t want to lose them – but that they may take some time to be processed. Just how long they take depends on many factors – how many other messages were pouring in at the same time, how powerful the hardware (or virtual machines in the cloud) is, etc.

Once your message does get picked up and processed, and triggers the alert you’ll be waiting for, the dispatching of that message will also be done asynchronously – when the system can get to it – introducing another potential opportunity for delay (in developer-speak, “latency”).

So, patience is a virtue here. There may not be a lot of consistency between test runs in the amount of time it takes an alert to be received. Eventually, after you’ve been testing a system for a while, you’ll get a sense of how long “normal” is for your system, and you’ll know when to stop waiting for a missing alert and start writing up a defect report.
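
One practical way to build that patience into a test – manual or automated – is a simple poll-with-timeout loop: keep checking wherever the alert is supposed to land, give up after a “normal plus a healthy margin” window, and only then call it a defect. The endpoint and the timing values below are placeholders you would tune to your own system:

    # Poll-with-timeout sketch for asynchronous verification. The alerts endpoint
    # is hypothetical; the timeout should reflect what "normal" latency looks like
    # on your system, plus a healthy margin.
    import time
    import requests

    def wait_for_alert(device_token, timeout_seconds=120, poll_interval=5):
        """Return the alert if it shows up within the window, otherwise None."""
        deadline = time.time() + timeout_seconds
        while time.time() < deadline:
            response = requests.get(
                "https://iot.example.com/test-support/alerts",
                params={"device_token": device_token},
                timeout=10,
            )
            response.raise_for_status()
            alerts = response.json()
            if alerts:
                return alerts[0]          # got it -- the test can pass
            time.sleep(poll_interval)     # not yet -- wait and check again
        return None                       # time to start writing that defect report

    alert = wait_for_alert("TEST-IPHONE-TOKEN-123")
    assert alert is not None, "No off-route alert received within the expected window"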

Up Next

In the next entry in this series, we’ll talk about automated testing in this tricky asynchronous, latency-filled world.