Testing Amazon Alexa skills with captured fixtures

Mike Hoolehan
Jan 26, 2018


Hello, my name is Mike, and I developed my first Alexa skill without writing any test cases. With that confession out of the way, I will defend my developer honor to some degree: I did perform testing, but it was a clumsy affair using Amazon’s testing support from within the Lambda management console. While those tools are good, they greatly slowed my development cycle, as running tests meant repeatedly deploying my Lambda skill and manually executing test cases.

After my first skill, Estonia One Hundred (source), was certified, I realized I needed a better solution for testing.

Estonia’s 100th birthday Alexa skill: needs some testing!

There are several testing strategies suggested by Amazon, and several more in articles on Medium and elsewhere on the Internet. However, none met my needs exactly. I wanted (a) tests that execute locally and (b) test cases that can be built and adapted quickly.

I describe my solution below. First I’ll describe how to obtain suitable test fixtures using Amazon tools, then how to utilize these fixtures in test cases.

Note: I refer to Alexa throughout the article as if she were a sentient actor in the computational processes discussed. This is just a convenient shorthand. I’m well aware she, I mean, it, is a software construct and I definitely don’t wish her good morning or ask her if she missed me while I was at work.

Prerequisites

This article assumes your skill was developed with a setup similar to mine: specifically, a Node.js implementation using the Alexa SDK v1.0+. While my skill uses an endpoint served by AWS Lambda, I believe the strategy described should work identically if the endpoint is hosted elsewhere.

Obtaining test fixtures

Interaction with an Alexa skill endpoint is stateless, which is why Lambda is such a suitable home for these endpoints. For each pertinent user interaction, Alexa sends JSON containing both session and request data (plus a few other details) to the skill endpoint. This simple structure makes static test fixtures very effective: we can model any program state and action with a single JSON structure.

But, hand coding each JSON fixture would be time consuming. Luckily, Amazon provides an easy way to capture Service Request JSON which can quickly be converted to fixtures. From the Test tab of your Development Skill, the Service Simulator can be used to perform interactions with an already deployed skill. This simulator is session-friendly, in the sense that multiple utterances can be provided and they will be linked into the same session. Simply enter your utterances so as to put your skill in its proper state (more accurately, to set session attributes to their proper value), and copy and paste the displayed Service Request JSON. You’ve just captured a test fixture.
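For reference, a captured Service Request has roughly the shape below. This is a trimmed sketch: the IDs are shortened placeholders, and the session attributes (STATE, phraseIndex) are illustrative values of the kind my skill uses, not something every skill will have.

```json
{
  "version": "1.0",
  "session": {
    "new": false,
    "sessionId": "amzn1.echo-api.session.0000...",
    "application": { "applicationId": "amzn1.ask.skill.0000..." },
    "attributes": { "STATE": "_PHRASE", "phraseIndex": 0 },
    "user": { "userId": "amzn1.ask.account.AAAA..." }
  },
  "request": {
    "type": "IntentRequest",
    "requestId": "amzn1.echo-api.request.0000...",
    "timestamp": "2018-01-26T12:00:00Z",
    "locale": "en-US",
    "intent": { "name": "AMAZON.RepeatIntent", "slots": {} }
  }
}
```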

The Alexa Skills Kit CLI can also be used to generate Service Request JSON for test fixtures. The ask simulate command sends a Service Request to a deployed endpoint, and the original Service Request is returned in the debugging information printed after the request. This tool is especially useful for generating fixtures for launch intents and single interaction phrases, neither of which the web-based Service Simulator can handle.
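An invocation looks something like the following (flags as in ASK CLI v1; the skill ID and utterance are placeholders, and ask simulate --help will show the options your version supports):

```
$ ask simulate --skill-id amzn1.ask.skill.0000... --locale en-US --text "open estonia one hundred"
```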

Test first?

There’s a bit of a chicken-and-egg situation when generating test fixtures this way: a version of the skill endpoint sufficiently complete to generate the desired response must already be deployed. This could be seen as violating a test-first policy. A developer deeply concerned about this can always craft test fixtures manually. In practice, however, a simple skeletal skill that includes the key states and intents can be deployed and used to generate a fixture template. Once produced, subsequent test fixtures can be derived incrementally from this template (as described in the final section of this article).

Representing fixtures

Once you’ve obtained fixtures, there are many ways you might wish to organize them in your codebase. I first chose a simple strategy of defining a module for each state I wish to test.
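An excerpt of such a module might look like this (a sketch: the file name, state name, attributes, and IDs are all illustrative):

```javascript
// fixtures/repeatPhrase.js -- one module per skill state under test
// A captured Service Request, trimmed and with IDs externalized.
const APP_ID = 'amzn1.ask.skill.0000...';    // placeholder
const USER_ID = 'amzn1.ask.account.AAAA...'; // placeholder

module.exports = {
  REPEAT_PHRASE_ZERO_AFTER_FIRST_PHRASE: {
    version: '1.0',
    session: {
      new: false,
      sessionId: 'amzn1.echo-api.session.test',
      application: { applicationId: APP_ID },
      attributes: { STATE: '_PHRASE', phraseIndex: 0 },
      user: { userId: USER_ID }
    },
    request: {
      type: 'IntentRequest',
      requestId: 'amzn1.echo-api.request.test',
      timestamp: '2018-01-26T12:00:00Z',
      locale: 'en-US',
      intent: { name: 'AMAZON.RepeatIntent', slots: {} }
    }
  }
};
```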

As you can see, I trimmed some non-essential parts of the JSON Service Request and also externalized some characteristics, such as APP_ID and USER_ID. Neither step is necessary, but I felt both improved conciseness and readability.

Testing with fixtures

I used a mocha/chai combination for testing. There are plenty of guides to setting this up for a Node.js project, so I’ll focus instead on the test cases themselves.

The basic strategy is to load one or more test fixtures, execute our skill handler with a fixture, then asynchronously verify desired aspects of the response.

The asynchronous skill handler function we will test has the signature function (event, context, callback). The context parameter is a structure that defines succeed and fail handlers. By building a custom context structure containing our test assertions, we can define custom expectations for each execution of our handler.

First, we build a helper function that takes a test function and returns a context structure that invokes it. We also pass in mocha’s done callback, so that we can notify mocha that our test has completed, whether successfully or not.
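A minimal sketch of such a helper (the buildContext name is my own, chosen for illustration):

```javascript
// test/helpers.js -- builds a Lambda-style context whose succeed
// handler runs the caller's assertions against the skill's response.
function buildContext(testFn, done) {
  return {
    succeed: function (response) {
      try {
        testFn(response); // run the caller's assertions
        done();           // tell mocha the test passed
      } catch (err) {
        done(err);        // an assertion failed
      }
    },
    fail: function (err) {
      done(err);          // the handler failed outright
    }
  };
}

module.exports = { buildContext };
```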

We can see above that if our handler succeeds, the handler’s response is verified against our given test function, and we then notify mocha that our test has completed. If our handler fails outright, we call mocha’s done callback immediately, passing along the error.

With our helper in place, our test cases need only call our skill handler with a fixture and a test context constructed using the above helper:
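Here is a sketch of such a case, reusing the illustrative fixture module and helper above (the import paths are assumptions about project layout):

```javascript
// test/phrase.test.js
const { expect } = require('chai');
const { handler } = require('../index'); // the skill's Lambda entry point
const fixtures = require('./fixtures/repeatPhrase');
const { buildContext } = require('./helpers');

describe('RepeatIntent', function () {
  it('repeats the first phrase with its audio', function (done) {
    const event = fixtures.REPEAT_PHRASE_ZERO_AFTER_FIRST_PHRASE;
    handler(event, buildContext(function (response) {
      // two session attributes should hold their expected values...
      expect(response.sessionAttributes.STATE).to.equal('_PHRASE');
      expect(response.sessionAttributes.phraseIndex).to.equal(0);
      // ...and the speech should contain the expected SSML markup
      expect(response.response.outputSpeech.ssml).to.contain('<audio');
    }, done));
  });
});
```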

Here we test that two session attributes are set to expected values, and that the response speech contains expected SSML markup. Already our tests are looking fairly concise and readable.

Dynamic fixtures

One shortcoming of the above test case is that it is not fully descriptive. The test specifies a fixture named REPEAT_PHRASE_ZERO_AFTER_FIRST_PHRASE, but the reader must either take it on faith that the fixture is set up as expected or head off to another file to inspect it. It would be much more descriptive if we could specify the salient aspects of the test fixture directly in the test case.

Luckily, since most of our Service Requests are very similar and consist largely of boilerplate, we can easily create a generalized Service Request fixture factory:
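(A sketch; the buildRequest name and its option names are illustrative.)

```javascript
// test/requestFactory.js -- fills in Service Request boilerplate;
// callers supply only the attributes and intent that matter.
const APP_ID = 'amzn1.ask.skill.0000...';    // placeholder
const USER_ID = 'amzn1.ask.account.AAAA...'; // placeholder

function buildRequest(options) {
  options = options || {};
  return {
    version: '1.0',
    session: {
      new: options.new || false,
      sessionId: 'amzn1.echo-api.session.test',
      application: { applicationId: APP_ID },
      attributes: options.attributes || {},
      user: { userId: USER_ID }
    },
    request: {
      type: options.type || 'IntentRequest',
      requestId: 'amzn1.echo-api.request.test',
      timestamp: new Date().toISOString(),
      locale: options.locale || 'en-US',
      intent: { name: options.intentName, slots: options.slots || {} }
    }
  };
}

module.exports = { buildRequest };
```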

Then, using this factory, we can rewrite our test above, specifying only our current state attributes and new intent, as:
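(Again a sketch, built on the illustrative helpers above.)

```javascript
const { buildRequest } = require('./requestFactory');

it('repeats the first phrase with its audio', function (done) {
  // the salient fixture details now live in the test case itself
  const event = buildRequest({
    attributes: { STATE: '_PHRASE', phraseIndex: 0 },
    intentName: 'AMAZON.RepeatIntent'
  });
  handler(event, buildContext(function (response) {
    expect(response.sessionAttributes.STATE).to.equal('_PHRASE');
    expect(response.sessionAttributes.phraseIndex).to.equal(0);
    expect(response.response.outputSpeech.ssml).to.contain('<audio');
  }, done));
});
```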

Now the cases are much more readable. Additionally, it becomes very easy to derive additional test cases without the need to simulate interactions.

Conclusion

The strategy described above allows Amazon Alexa skills to be tested locally with easily generated and adapted test fixtures. More improvements could certainly be made (e.g. testing other locales), but, as is, you’ll now have to try much harder to find excuses for not writing test cases for your Alexa skill!

Source

This article is based on work I did creating Estonia One Hundred, a 100th birthday gift to a country I love, Estonia. For a full working example of the techniques above, see the full source code of Estonia One Hundred.
