Storm in a Teacup


Live wave data is being captured from Hastings Pier and streamed over the web as part of the project www.storminateacup.info. We (Brendan Walker – the artist behind this project – and the collaborating team of Performing Data technologists) are making this data available for other artists and creative technologists to use in their own projects. Access will be for a trial period between April and September 2015. During this period we will continue to develop increasing levels of online support, accessible via this webpage. Please email paul.tennent@nottingham.ac.uk to be kept informed of updates. We also plan to offer closer support for a select few projects of creative and technical merit. We invite proposals and enquiries from individuals and groups. Please email your ideas to info@aerial.fm.

How Storm In A Teacup Works


Storm in a Teacup is part of the internet of things, and uses a cloud computing model to deliver data to its various receivers.


Range Sensor

Developed at Middlesex University, the range sensor uses a laser pointed at the water to determine the distance between the sensor (mounted on the pier) and the water. It takes this reading ten times a second and sends the data wirelessly back to the end of the pier using the ZigBee protocol.
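The sensor-to-height step can be sketched in a few lines. This is an illustration only: the mounting height constant and function names below are made up, not values from the actual installation.

```python
# Sketch: converting a laser range reading to a water-height sample.
# MOUNT_HEIGHT_M is a hypothetical sensor height above a reference datum;
# the real installation's geometry is not published here.
MOUNT_HEIGHT_M = 8.0   # hypothetical: sensor height above datum, in metres
SAMPLE_RATE_HZ = 10    # readings per second, as described above

def water_height(range_m):
    """Invert the laser range: a shorter range means higher water."""
    return MOUNT_HEIGHT_M - range_m
```

At 10 Hz, one such value is produced every 0.1 seconds and forwarded to the data processor.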

Data Processor


Next, that range value is used to generate various streams of data:

  • Wave: An inversion of the range value; this is the true representation of the wave.
  • Tide: Calculated from a running average of the last several values for water height.
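The tide calculation described above can be sketched as a running mean over recent water-height samples. The window length here (600 samples, i.e. one minute at 10 Hz) is an assumption for illustration, not the project's actual value.

```python
from collections import deque

# Sketch of the tide stream: a running average of the most recent
# water-height readings. Window length is illustrative only.
class TideEstimator:
    def __init__(self, window=600):       # 600 samples = 1 minute at 10 Hz
        self.samples = deque(maxlen=window)

    def update(self, height):
        """Add one water-height sample and return the current tide estimate."""
        self.samples.append(height)
        return sum(self.samples) / len(self.samples)
```

Because the window only holds the most recent samples, short-period wave motion averages out while the slow tidal trend remains.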

The wave is then separated out into its constituent frequency bands using bandpass filters, giving the wave, swell, and ripple. Each band reports a waveform, a frequency, and an amplitude.

  • Broad (1-30 seconds)
  • Lower (1-5 seconds)
  • Middle (5-15 seconds)
  • Upper (15-30 seconds)
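The band separation above can be approximated without a signal-processing library by subtracting two moving averages: the difference passes oscillations with periods between the two window lengths. This is a crude stand-in for the real system's bandpass filters, and the window choices are illustrative.

```python
# Rough bandpass by subtracting two moving averages. Not the project's
# actual filter design -- a sketch of the band-splitting idea only.
def moving_average(signal, window):
    out, acc = [], 0.0
    for i, x in enumerate(signal):
        acc += x
        if i >= window:
            acc -= signal[i - window]   # drop the sample leaving the window
        out.append(acc / min(i + 1, window))
    return out

def band(signal, short_window, long_window):
    """Keep roughly the periods between the two window lengths."""
    short = moving_average(signal, short_window)
    long_ = moving_average(signal, long_window)
    return [s - l for s, l in zip(short, long_)]

# At the 10 Hz sample rate, the "middle" band (5-15 second periods) is
# roughly: middle = band(wave_samples, 50, 150)
```

A steady (DC) input cancels out entirely, while oscillations inside the band survive with most of their amplitude.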

These streams are then published to the internet using a system developed by Paul Tennent and Michael Margolis, based on an Arduino Yún, via a cloud messaging service called PubNub.



A receiver can be anything that subscribes to the data stream. This might be a physical object like the ones you see in the hub (shown in the design sketches above), built by Mike Golembewski, or perhaps a website, such as this one, developed by Tony Glover.

How You Can Start To Access The Data Stream For Your Own Projects

To access the data, you’ll need to make use of PubNub. There are SDKs for lots of popular programming languages – see here.
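As a starting point, here is a minimal subscription sketch using the official PubNub Python SDK (installed with `pip install pubnub`). The key, channel name, and client id below are placeholders – substitute the real subscribe key and channel name given on this page.

```python
import json

# Placeholders -- replace with the subscribe key and channel name
# published on this page.
SUBSCRIBE_KEY = "your-subscribe-key"
CHANNEL = "your-channel-name"

def on_packet(packet):
    """Handle one ~five-second JSON packet of wave data."""
    print(json.dumps(packet)[:80])

def run():
    # Requires the pubnub package; call run() to start receiving packets.
    from pubnub.pnconfiguration import PNConfiguration
    from pubnub.pubnub import PubNub, SubscribeListener

    config = PNConfiguration()
    config.subscribe_key = SUBSCRIBE_KEY
    config.user_id = "storm-demo-client"  # recent SDK versions require an id

    pubnub = PubNub(config)
    listener = SubscribeListener()
    pubnub.add_listener(listener)
    pubnub.subscribe().channels(CHANNEL).execute()

    while True:  # one packet arrives roughly every five seconds
        result = listener.wait_for_message_on(CHANNEL)
        on_packet(result.message)
```

The blocking `SubscribeListener` keeps the example short; in a real receiver you would more likely react to messages in a callback listener.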

All you need is the subscribe key to get access to our channel:


And the name of the channel:


Next you’ll need to understand the format of the data. It arrives in JSON-formatted packets every five seconds, each containing five seconds of data sampled at 10Hz – so each packet contains ~50 data points per stream, or ~700 values in total across the 14 channels.

The data is formatted as follows:

There are three collections:

  • channeltypes – contains an ordered array with a list of the data types for each channel (all floats in this case)
  • channelnames – contains an ordered array with the list of channel names
  • data – contains an ordered array of arrays, where the first value of the inner array is the timestamp in seconds of when the data was written, and the second is another ordered array containing the values for the 14 channels.
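The three collections above can be unpacked into named per-channel series in a few lines of Python. The channel names and values in the sample packet below are made up for illustration – read the real names from the `channelnames` array of an actual message.

```python
# Sketch: unpacking one packet into {channel_name: [values...]} plus a
# parallel list of timestamps. Sample contents are illustrative only.
def unpack_packet(packet):
    names = packet["channelnames"]
    timestamps = [row[0] for row in packet["data"]]
    series = {name: [] for name in names}
    for _, values in packet["data"]:          # row = [timestamp, [values]]
        for name, value in zip(names, values):
            series[name].append(value)
    return timestamps, series

sample = {
    "channeltypes": ["float", "float"],
    "channelnames": ["wave", "tide"],         # hypothetical, not the real 14
    "data": [
        [1428000000.0, [0.12, 1.05]],
        [1428000000.1, [0.15, 1.05]],
    ],
}
timestamps, series = unpack_packet(sample)
```

A full packet would carry all 14 channels and ~50 rows; the structure is the same.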


A typical message might look like this.


Brendan Walker, Mike Golembewski, Tony Glover, Paul Tennent, Paul Harter, Nick Weldin, Michael Margolis

Source code for this HTML wave player is available here.