Raspberry Pi Timelapse 5: Timelapse Movie Workflow

Now that there is a set of usable, illuminated images, they can be stitched together to make a timelapse movie. There are numerous ways to accomplish this, but the methods chosen here use only free software, and work on Windows. The basic workflow is as follows:

A) Rename files linearly (optionally, filter by time of day) – Python code here
B) Add a timestamp to each image – FastStone Image Viewer
C) Process the images as a video – FFMPEG

A) Rename files

This was not originally a part of the workflow, but two issues arose: first, the timestamp software did not accept the colon characters in the time stamp; second, the timelapse video software reads files with a simple wildcard, causing the images to be processed out of order. Scripts were written to simply sort and rename the files.

The renaming code (batch_rename.py) is designed primarily to remove special characters (e.g., transforming ‘2018-02-05_20:40:01.jpg’ into ‘20180205204001.jpg’), but has the option to sequentially renumber the files based upon a defined range (e.g., 600 images would start at ‘001’ and end at ‘600’). The files are sorted using Python’s built-in sorted() function.
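The scrubbing and renumbering logic can be sketched as follows; the function names here are illustrative, and the real batch_rename.py in the repo handles argument parsing and the actual file I/O:

```python
import os
import re

def scrub_name(filename):
    """Strip non-numeric characters from a timestamped filename,
    e.g. '2018-02-05_20:40:01.jpg' -> '20180205204001.jpg'."""
    stem, ext = os.path.splitext(filename)
    return re.sub(r"[^0-9]", "", stem) + ext

def renumber(filenames):
    """Sort filenames and renumber them sequentially; the counter width
    is derived from the total count (600 files -> '001' through '600')."""
    width = len(str(len(filenames)))
    return [f"{i:0{width}d}{os.path.splitext(name)[1]}"
            for i, name in enumerate(sorted(filenames), start=1)]

print(scrub_name("2018-02-05_20:40:01.jpg"))  # -> 20180205204001.jpg
```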

Additionally, images can be picked from a range of hours in a day (daily_subset_and_rename.py), which also calls batch_rename.py to scrub or renumber the final output. This is useful when light varies throughout the day; restricting captures to the same hours each day yields more consistent lighting across frames.
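The hour filter could be sketched like this (a simplified stand-in for daily_subset_and_rename.py, assuming the filename format produced by the capture script later in this series):

```python
from datetime import datetime

def in_hour_range(filename, start_hour=10, end_hour=14):
    """Return True if the capture time embedded in a
    '%Y-%m-%d_%H:%M:%S.jpg' filename falls in [start_hour, end_hour)."""
    stem = filename.rsplit(".", 1)[0]
    stamp = datetime.strptime(stem, "%Y-%m-%d_%H:%M:%S")
    return start_hour <= stamp.hour < end_hour

images = ["2018-02-05_09:40:01.jpg", "2018-02-05_11:20:01.jpg",
          "2018-02-05_20:40:01.jpg"]
kept = [f for f in images if in_hour_range(f)]
print(kept)  # only the 11:20 capture falls within 10:00-14:00
```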

B) Timestamp

FastStone Image Viewer is free software that can be used to place a timestamp in each image. The software used to capture the images, fswebcam, has a “banner” option where a time stamp could be added; however, its override settings did not work in this instance, so the banner was disabled during image capture. Once the software is installed, here is how to add a timestamp:

  1. Navigate to directory containing renamed images
  2. Select all images
  3. Tools > Batch Convert Selected Images (F3)
  4. Check “Use Advanced Options”
  5. Advanced Options > Text
    1. For “YYYY-MM-DD”, use “($H2)-($H3)-($H4)”
    2. Set the font size as desired (36-point used here)
    3. Set position (Bottom-Right used here)
    4. Explore other tabs in Advanced Options. A systematic crop (Crop tab) or watermark (Watermark tab) can be added, for example
  6. Output Format > Settings
    1. Make sure full image quality is preserved, unless compression is desired
  7. Set Output Folder to a new folder (this avoids accidentally overwriting the originals)
  8. Once all options are selected, click Convert to start processing

C) Create timelapse video

The cross-platform command line software FFMPEG is used to create the timelapse video. The software offers many options; a good walkthrough of FFMPEG for timelapse videos is here. The following command is used to create the video:

C:\Users\USERNAME\Downloads\ffmpeg.exe -r 90 -start_number 0001 -i %04d.jpg -vcodec libx264 -threads 4 90fps.mp4

A breakdown of the command:

  • C:\Users\USERNAME\Downloads\ffmpeg.exe specifies the path to the executable. This build of FFMPEG requires no installation, so it is called directly from the Downloads directory.
  • -r 90 specifies the frames per second (fps), set to 90 here.
  • -start_number 0001 tells FFMPEG where to start in the file sequence.
  • -i %04d.jpg specifies the input filename pattern; %04d matches a zero-padded four-digit number, with the .jpg extension following immediately after.
  • -vcodec libx264 specifies the video codec. The Windows builds of FFMPEG should have this codec enabled by default.
  • -threads 4 specifies the number of processing threads to be used.
  • 90fps.mp4 specifies the output file, with extension. The file is saved in the current directory, which can be checked using cd in the Windows command line.
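The frame rate fixes the playback length, so the relationship between frame count, fps, and real-world time can be checked with a little arithmetic. A sketch, assuming a uniform 20-minute capture cadence (the actual image set was also filtered to 10 am–2 pm, so the real calendar span was longer):

```python
def timelapse_stats(n_frames, fps, interval_min):
    """Relate frame count and playback rate to real-world capture time."""
    duration_s = n_frames / fps                # playback length in seconds
    real_hours = n_frames * interval_min / 60  # wall-clock time captured
    speedup = fps * interval_min * 60          # apparent speed-up factor
    return duration_s, real_hours, speedup

d, h, s = timelapse_stats(1139, 90, 20)
print(f"{d:.1f} s of video from {h:.0f} h of capture ({s}x speed-up)")
```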

D) Video

A video composed of 1,139 images took about 3 minutes to render using 4 threads in FFMPEG. Only images captured between 10 am and 2 pm each day were used. There were intermittent camera downtimes, causing jumps towards the end of the video. An auxiliary water reservoir was added later, but malfunctioned, causing the plant to wilt; however, it was nursed back to health with some string and a constant water supply. Here is the final product:

Also note the progress of the avocado plant, which has grown rapidly after it initially sprouted!


Raspberry Pi Timelapse 4: Sort Images Using Machine Learning

Note: if you would like to jump to the source code and learn from there, it is at: https://github.com/stevefoga/image-classifier. 

Now that the Raspberry Pi is configured and capturing images 24/7, there will inevitably be some undesired images. The plant light only runs 18 hours a day, so some images will be dark. The obvious fix would be to remove images below a file-size threshold, either static or based on a rolling standard deviation. However, due to the ambient lighting conditions here, the “dark” images are often the same size as the “light” images. The files in question are visualized below using smoothed histograms of both “dark” and “light” images:


Note the overlap of the distributions, indicating a single threshold would not work well in this case. The plot was generated in Python, using the os module to get the file sizes and matplotlib to plot the results; the histograms were smoothed using advice found here.
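A sketch of that plot, assuming the “dark” and “light” files are already split into two directories; scipy’s gaussian_kde stands in here for whatever smoothing method the linked advice used:

```python
import os

import matplotlib
matplotlib.use("Agg")  # render without a display
import matplotlib.pyplot as plt
import numpy as np
from scipy.stats import gaussian_kde

def plot_size_distributions(dark_dir, light_dir, out_png="sizes.png"):
    """Smoothed (KDE) histograms of JPEG file sizes for two image sets."""
    for label, path in (("dark", dark_dir), ("light", light_dir)):
        sizes = np.array([os.path.getsize(os.path.join(path, f))
                          for f in os.listdir(path)
                          if f.lower().endswith(".jpg")])
        grid = np.linspace(sizes.min(), sizes.max(), 200)
        plt.plot(grid, gaussian_kde(sizes)(grid), label=label)
    plt.xlabel("file size (bytes)")
    plt.ylabel("density")
    plt.legend()
    plt.savefig(out_png)
```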

Another option would be to adjust the cron schedule to capture only when the light is on; this doesn’t work in practice, because a slight flicker in power, a bump of the power plug, or other unplanned maintenance resets the light’s timer altogether, meaning precious acquisitions will be missed (or unlit captures will persist) under a static schedule.

Therefore, the best solution is to automatically remove dark images. The images are dark, so perhaps the quickest way would be to calculate each image’s mean brightness and qualitatively determine an appropriate cutoff value. However, this neglects outliers that may be bright, yet not consistent with the desired images for a timelapse video; otherwise, flickering becomes more apparent.
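For comparison, the rejected mean-cutoff approach is only a few lines (Pillow and numpy assumed; the cutoff of 60 is a made-up illustrative value):

```python
import numpy as np
from PIL import Image

def mean_brightness(path):
    """Mean pixel value (0-255) of the image converted to grayscale."""
    return float(np.asarray(Image.open(path).convert("L")).mean())

def is_dark(path, cutoff=60):
    """Naive classification by a hand-picked brightness cutoff; as noted
    above, this misfires on bright-but-unwanted outlier frames."""
    return mean_brightness(path) < cutoff
```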

To perform the classification, a “good” (“light”) set and a “bad” (“dark”) set of training images are selected; these are given to a machine learning algorithm to build a statistical model that will be applied to the remaining images in the dataset. Twenty “good” and twenty “bad” images were picked from the same day. Twenty was picked rather arbitrarily; it took little time to pick twenty of each image type, and all the classifier has to do is discern light from dark. An example of the “good” images selected:


An example of the “bad” images:


The classifier workflow was written following this blog post, which also thoroughly explains the science behind image classification. The code written for this post, including step-by-step instructions, is found here. The program used here is generate_classifier.py. The output of the classifier is serialized to a file (using pickle), as the result is a Python class instance and not raw data. The only non-standard Python library required to run the code is sklearn, which can be installed several different ways. From there, the program can be called by specifying both input directories, the output filename for the classification file, and the file extension of the input images (default is ‘.jpg’). An example:

python generate_classifier.py --pos /path/to/good_images --neg /path/to/bad_images -o /path/to/classification_file
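Conceptually, generate_classifier.py boils down to something like the sketch below. The feature extraction (downscaled, flattened grayscale pixels) and model choice (a linear SVM) are assumptions for illustration, not necessarily what the linked repo uses:

```python
import glob
import os
import pickle

import numpy as np
from PIL import Image
from sklearn.svm import LinearSVC

def load_features(directory, ext=".jpg", size=(32, 32)):
    """Downscale each image and flatten its grayscale pixels into a vector."""
    feats = []
    for path in sorted(glob.glob(os.path.join(directory, "*" + ext))):
        img = Image.open(path).convert("L").resize(size)
        feats.append(np.asarray(img, dtype=np.float32).ravel() / 255.0)
    return np.array(feats)

def train_classifier(pos_dir, neg_dir, out_file):
    """Fit 'good' (1) vs 'bad' (0) images and pickle the fitted model."""
    x_pos, x_neg = load_features(pos_dir), load_features(neg_dir)
    x = np.vstack([x_pos, x_neg])
    y = np.concatenate([np.ones(len(x_pos)), np.zeros(len(x_neg))])
    clf = LinearSVC().fit(x, y)
    with open(out_file, "wb") as f:
        pickle.dump(clf, f)  # a class instance, not raw data
    return clf
```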

After the classifier is generated, the images can be sorted using sort_images.py. This applies the classifier to a directory with all captured images, and moves (not copies) them into output directories. An example:

python sort_images.py -i /path/to/all_images -e .jpg -m /path/to/classification_file --pos /path/to/good_classified --neg /path/to/bad_classified

Note that all of these programs have a --dryrun flag, which runs the code, but does not make any file modifications.
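Conceptually, the sorting step (with its dry-run safeguard) looks like the sketch below. Here classify is a stand-in callable, whereas the real sort_images.py unpickles the sklearn model and extracts image features first:

```python
import glob
import os
import shutil

def sort_images(classify, in_dir, pos_dir, neg_dir, ext=".jpg",
                dry_run=False):
    """Move each image into the 'good' or 'bad' directory based on its
    predicted label; with dry_run=True, only report the planned moves."""
    moves = []
    for path in sorted(glob.glob(os.path.join(in_dir, "*" + ext))):
        dest = pos_dir if classify(path) == 1 else neg_dir
        moves.append((path, os.path.join(dest, os.path.basename(path))))
    if not dry_run:
        for src, dst in moves:
            shutil.move(src, dst)
    return moves
```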

How well did the classifier work? All of the dark images were moved to the “bad” bin successfully, which does not contain any images where the lamp is lit. However, the “good” bin contains some images with the light off; while the ambient lighting is sufficient, these would still throw off a timelapse video. An example:


Designing a classifier is an iterative process, so adding this image to the “bad” training set would likely help eliminate this situation.

In future posts, I’ll discuss how to create a timelapse video. There are numerous workflows using free, trial, and/or open source software to accomplish this task. I ended up making some formatting alterations using a few Python scripts, and using Windows-based software to complete the videos.

Raspberry Pi Timelapse 3: Raspbian Configuration & Time Lapse Setup

Now that the hardware is assembled, it is time to configure the Raspberry Pi. This post covers four topics: how to get the Raspberry Pi working (operating system installation, network configuration), how to initially set up and test the capture software, how to optimize image capture, and how to automate time lapse capture.

A) Raspberry Pi Software Setup

I opted to use the included SD card, which came with NOOBS. Insert the SD card, then plug in the power, a keyboard, and the display; the system simply boots to a screen where the Wi-Fi can be configured and the operating system installed.

The setup is fairly straightforward. I chose to install Raspbian, with no graphical user interface (GUI). Here are the notable things I found when setting up my Raspberry Pi:

  • Before starting, note that the Zero W’s Wi-Fi chip only supports the 2.4 GHz band; it cannot join 5 GHz networks.
  • If the local home router or network can be accessed as administrator, now is a good time to assign a static IP to the Raspberry Pi (example for TP-Link routers); this makes for less guesswork when first attempting to remotely connect to the Raspberry Pi.
  • Set the correct region. My board was set for Great Britain (GB) keyboard, and I am in the United States (US); this made it difficult to set a complex password, as some characters are not mapped the same (Dollar [$] and Pound [£], for example.) Make sure to de-select the GB keyboard, then select the local region’s keyboard — US generally uses “en_US.UTF-8”, per a discussion found here.

Once Raspbian is installed, remote access can be configured. Conveniently, many of these settings can be configured in Raspbian using raspi-config. This can be accessed by running:

sudo raspi-config

From here, activate Secure Shell (SSH) by going to “Interfacing Options” –> “SSH” –> “Yes” –> “Finish”.

At this point, the setup can be continued remotely or on the monitor. If the former, the connection can be established by opening a terminal in Linux or Mac OS and running:

ssh username@ip-address

In my case, my command looks like this:

ssh pi@

If using Windows, third-party software such as PuTTY must be obtained to use SSH.

A prompt will appear asking to accept the SSH key. Say yes, and then the connection will be established.

If SSH is slow or unresponsive, try adding:

UseDNS no

to the /etc/ssh/sshd_config file, as recommended here.

B) Time Lapse Software Setup & Test

First, make sure the system has all the latest updates by running:

sudo apt-get update
sudo apt-get upgrade

Next, install fswebcam, which is used to control image capture from the webcam by running:

sudo apt-get install fswebcam

Now, images can be captured through the webcam! Make sure it is plugged into an available USB port. There are many settings to be explored, which vary greatly by the surrounding environment and the camera used. Keep in mind that webcams typically do not have the same automatic exposure and aperture control that one is accustomed to with a point-and-shoot camera.

At this point, single images can be captured using the command line. A capture can be initiated by running:

fswebcam -d /dev/video0 /home/pi/test1.jpg

This does the following:

  1. Calls fswebcam,
  2. Specifies the device (-d) to use, and
  3. Saves the capture (/home/pi/test1.jpg).

The camera is very likely assigned to /dev/video0 automatically. The active device(s) can be determined by running:

ls /dev/video*

The output image can be viewed by using scp or rsync to pull the image to the local machine for inspection. The image may look a little less than ideal; see Section C for details on how to optimize image capture.

C) Optimization

In my previous post, I mentioned that the resolution of the camera is not as advertised, but it can be determined through the video4linux (v4l2) command line tools. Through helpful advice found here, this can be accomplished by running:

v4l2-ctl --list-formats-ext

I was able to apply the maximum resolution to my fswebcam command.

For other settings, such as exposure, fswebcam has no knowledge of what the camera is capable of capturing; however, all possible options can be explored by running:

v4l2-ctl --list-ctrls

I found this process was very iterative, and many options were often unresponsive, even when set at their extremes. Many users report that some of these response issues are alleviated by implementing the frame skip (-S) command, and setting it to a large value. An example is discussed here. Here is an example of an image I captured while experimenting with the settings:


In my case, the plant light is rather bright, leading to saturated, blurry images. Through many tests, I came up with the following capture parameters:

sudo fswebcam --no-banner -d /dev/video0 /home/pi/capture/raw/%Y-%m-%d_%H:%M:%S.jpg -r 2304x1536 -s brightness=50% -p YUYV -S 60 -s backlight_compensation=1 -s sharpness=150 -s focus_auto=0

Here’s a breakdown of the command:

  1. sudo – fswebcam seems to ignore some of my settings (e.g., --no-banner) if I do not run as sudo.
  2. --no-banner – disables the banner around the images; I’ll apply my own later.
  3. -d /dev/video0 – specifies device (described in Section B.)
  4. /home/pi/capture/raw/%Y-%m-%d_%H:%M:%S.jpg – path to my saved image, with the system date (e.g., 2017-12-04) and time (e.g., 12:20:00). Consider removing the dashes and colons, as I had issues with Samba mangling the file names when transferring to Windows. Even with mangling disabled, the colons were changed to percent signs.
  5. -r 2304x1536 – resolution set to my webcam’s maximum resolution.
  6. -s brightness=50% – the -s flag allows numerous options to be passed along to the camera; the --list-ctrls command (see above) lists all the options. Here, the brightness is set to half (50%), which cuts down on the plant light saturation.
  7. -p YUYV – sets the color palette, sometimes called the color model.
  8. -S 60 – number of capture frames to skip. This at least allows the auto focus to complete (I was unable to get manual focus to work with this camera.)
  9. -s backlight_compensation=1 – supposed to help with high contrast between objects in camera’s field of view.
  10. -s sharpness=150 – value I derived iteratively, which seemed to make the images look best in my environment.
  11. -s focus_auto=0 – disabled the auto focus, as manual focus did not seem to work with my camera.

Here’s an example image captured with the above command:


While this image looks better than many of the early calibration images, a more homogeneous background with better light control will always yield cleaner images, and will ultimately look better in a time lapse video. Here is an example image from a better location (and a new plant):


D) Automation

The automation of the image capture consists of two parts: putting the capture command string into an executable file, and creating a cron job to execute the script at a given interval.

First, create a script that can be called to capture the image. Create an empty file with a .sh extension, and place the command within it. This can be done with vim by running:

vim /home/pi/my_capture.sh

then paste the capture command into the editor. After that, hit Esc to stop text entry, and type :wq to write and quit the edit session.

Next, make the script executable by running:

chmod +x /home/pi/my_capture.sh

Now, the cron job can be set up to run the script at a specific interval. The cron can be set up by running:

crontab -e

Below the comments, enter a line specifying the interval and script to be executed. In this case, I have my camera capture every 20 minutes:

*/20 * * * * /home/pi/my_capture.sh

This can be adjusted to capture at specific time(s) of the day, week, month, etc. A full description of setting the crontab is here.

Note: the cron job will run as long as the system is powered on; it is only disabled if the system is powered off, or if the entry is commented out in the crontab.

In the next post, I’ll discuss moving images for processing, and building a model to automatically sort and filter them. In future posts, I’ll discuss my time lapse movie creation workflow.

Raspberry Pi Timelapse 2: Hardware Selection & Setup

To upgrade from my tablet-based timelapse camera setup, I had the following criteria in mind:

  1. Flexible capture software
  2. Low power consumption
  3. Remote access
  4. Versatility to use different cameras

The Raspberry Pi family of computers meets all of these criteria. The Linux operating system (in this case, Raspbian) has the ability to run many open source software packages and utilities. While some of the other Raspberry Pi computers have multiple cores and support numerous peripherals, I needed something that could simply save images and move them over a network. The Raspberry Pi Zero W fulfilled this task, and uses very little power.

Since I started from scratch, I did not acquire the Zero W at the advertised $10 price point. Instead, I obtained a kit that included an 8 GB micro SD card, a hard case, mini HDMI (male) to HDMI (female) and micro USB (male) to USB Type-A (female) adapters, and a 1 amp micro USB AC power adapter.

I already have a nice USB webcam; however, it is really optimized for video chatting as opposed to still image photography. The advertised still resolution is 15 megapixels, but this seems achievable only through the vendor’s image capture software, which performs some sort of oversampling. The native resolution of the sensor is apparently only 3 megapixels; this is verifiable through the video4linux command line tools.

Finally, I acquired a small tripod, and used Velcro to secure the Raspberry Pi and its cables to one leg of the tripod.

The final setup looks like this:


In the next post, I’ll cover the Raspberry Pi configuration, and time lapse camera software setup process. In future posts, I’ll discuss my time lapse movie creation workflow.

Raspberry Pi Timelapse 1: Introduction

We have been growing plants indoors using a small hydroponic system with an array of LED lights. To monitor their progress, along with our other plants sharing the benefit of the lights, I used an Android-powered tablet propped up with its own keyboard case to capture images every 30 minutes:


While it worked well, it was vulnerable to being knocked over and moved around. Retrieving images and checking its progress was tedious, and also jarred the position of the tablet. After ~3 months, this setup produced usable results:

However, there are more elegant solutions that require less power, and are far more customizable. In the next few posts, I’ll explain how I configured a Raspberry Pi Zero W to capture images using a webcam on a tripod.

As of this writing, I have a basic configuration running. In the future, I intend to work with some machine learning to perform quality assurance on the images.
