Some movement data

Since fixing the Fjalls system we have had a steady stream of data showing the glacier moving:

This shows a movement of around 1.5m in just ten days.

View of the icebergs when the tracker was deployed

This shows the glacier-sea front where icebergs calve (Photo: Formula E)

The sea near the glacier – with a variety of icebergs (Photo: Formula E)

Formula E car driving on a glacier in Greenland. This photo shows the icebergs in the sea – like the one we are tracking from here. Photo courtesy Formula E team.

Glacsweb meets openIMAJ

We have deployed a Brinno TLC100 camera to monitor the flow in the outlet river from the glacier we work on; footage from this camera can be seen in a previous blog post. However, while the camera is simple to set up, its output is not particularly useful for analysis: it saves the images as an AVI file, which is great for the amateur timelapse market but not so good for our purposes.

To fix this, the file was first run through ffmpeg to extract a separate JPEG for each frame. That, however, led to the problem of how to extract the timestamp from each image. Image processing is not our area, but fortunately the openIMAJ team is based in the same building as us. I went and had a chat with Jon Hare, asking if there was anything suitable available off the shelf; unfortunately the existing software did not produce good results. So Jon went away and within a few hours had written a custom piece of software to perform the OCR for us. I then wrapped this in a Python script to process a folder, automatically rename the files with the timestamp, and add the relevant data to the database.
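The wrapper script itself is not shown here, but its core step can be sketched as follows. This is a minimal illustration, not the actual Glacsweb code: it assumes the OCR stage has already produced a timestamp for each frame (here passed in directly), and the table name `images` and its columns are hypothetical. The frames themselves would come from something like `ffmpeg -i input.avi frame_%04d.jpg`.

```python
import sqlite3
import tempfile
from datetime import datetime
from pathlib import Path

def rename_and_record(image_path, timestamp, db):
    """Rename a frame to its OCR'd timestamp and log it in the database."""
    new_path = image_path.with_name(
        timestamp.strftime("%Y-%m-%d_%H%M%S") + image_path.suffix
    )
    image_path.rename(new_path)
    db.execute(
        "INSERT INTO images (taken_at, filename) VALUES (?, ?)",
        (timestamp.isoformat(), new_path.name),
    )
    return new_path

# Demo on a throwaway frame and an in-memory database
workdir = Path(tempfile.mkdtemp())
frame = workdir / "frame_0001.jpg"
frame.write_bytes(b"")  # stand-in for a real JPEG extracted by ffmpeg

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE images (taken_at TEXT, filename TEXT)")

renamed = rename_and_record(frame, datetime(2011, 8, 1, 12, 30, 0), db)
print(renamed.name)  # 2011-08-01_123000.jpg
```

In the real pipeline this function would be called once per file in the folder, with the timestamp supplied by the OCR tool.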

Once the script had run we had a collection of about 900 images, each with the correct timestamp in its file name and recorded in the database, letting us keep track of which times we had images for.

That was the simple part – the hardest part is yet to come: working out river depth from the images we now have.