
Time-lapse photography is cool, and I like both photography and cool things, so I made my own time-lapse camera based on a Raspberry Pi Zero W. That's a little computer that's half the size of a credit card with built-in WiFi -- it's very similar to what's in your phone without the telephony or the screen, but with a good camera available as an add-on.

Other people have done similar things, but when I started this project I didn't see any other projects implementing exactly what I wanted, and the obvious solution to that was to build my own.

There are hardware and software components to this project, obviously. In this article I'll describe the hardware design, but I'm not going to provide a pin-by-pin guide for soldering this together. If you have experience with adding buttons and LEDs to a microcontroller (it's not hard to learn), you could design your own. Basically it's a Raspberry Pi computer with built-in WiFi and an add-on camera, and to which are connected the equivalent of seven buttons, one LED, and a two-line monochrome LCD character display.

The software and system configuration will be described in a follow-up article. That's where all the functionality is implemented, where I provided for some cool features, and where further improvement could be made. The hardware just gives the software the tools needed to interact with the user and with the photographic subject.

The back

As you can see, there are three buttons, a number wheel, and an LCD display on the back for the user interface. Using a transparent back meant that I could solder the LCD display to the circuit board within the case rather than attach it to the project case back, so no extension of wires to the display was needed. That made fabrication much easier since the LCD alone has twelve connections, and it's easier to make a point-to-point connection on the circuit board than it is to solder free wires directly to the LCD board.

The LCD display has two lines of 32 characters. It's a very common and inexpensive component, and the Adafruit software libraries for the Pi make interfacing with it easy. I use it to display how much free space is left on the SD card, how many frames it's taken in the current sequence, for setup menus, etc.

The red button stops and starts the taking of a timed sequence of images. When in setup mode it's used for selecting a setting. The leftmost button enters setup mode and cycles through the menus for resolution, shot interval, video frame rate, etc. The middle button cycles through the list of options for one menu.

A cool feature is the number wheel on the upper-right. This wasn't in my original plans, but it leaped out at me at the electronics store. It scrolls through the digits 0 to 9, and I use it in conjunction with the setup menu to set the number of seconds between frames. Select Seconds or 10 Seconds or Minutes in the setup, then set the number of those units with this wheel. It has four contacts with a common ground that are read by the Pi just as if they were buttons, and the value is determined by treating them as four binary digits. Unlike the menu settings, this can be changed while a time-lapse sequence is in progress if for any reason I want to adjust the interval.
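The four-contacts-as-binary-digits trick can be sketched in a few lines. This is a minimal illustration in JavaScript, not the camera's actual code; the function name and the assumption that the contacts are ordered from the 8-weighted contact down to the 1-weighted one are mine:

```javascript
// Decode a BCD thumbwheel switch whose four contacts are read like buttons.
// 'contacts' holds the state of the 8-, 4-, 2-, and 1-weighted contacts, in
// that order (true = contact closed). Ordering here is an assumption.
function decodeThumbwheel(contacts) {
    var weights = [8, 4, 2, 1];
    var value = 0;
    for (var i = 0; i < weights.length; i++) {
        if (contacts[i]) {
            value += weights[i];
        }
    }
    return value;  // 0..9 on a decimal BCD wheel
}
```

With the wheel set to 7, for example, the 4-, 2-, and 1-weighted contacts are closed, the pins read as [false, true, true, true], and the function returns 7.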

The front

There's nothing on the front but the camera lens poking through a hole in the case where it's glued in place from the inside. The camera is the standard original camera for the Raspberry Pi which connects to the Pi's board with a flat ribbon cable. There is no viewfinder or video preview screen for the camera, but I can use my phone for that with WiFi.

The inside

The Pi is attached with posts and screws to the front of the case. A circuit card "hat" is attached to the GPIO pins, providing a place to mount the other electronics and connect them to those pins. The LCD display is soldered to the hat with a set of pins. The buttons and wheel are attached to the case and connect to the hat with individual wires. There is a red LED on the board, which the program blinks whenever a frame is taken. Peeking over the LCD display you can see a little photoresistor connected to a pin on the display that controls its brightness.

The "hard drive" for both the operating system and the captured images and videos is a 64 GB micro SD card. That's inserted into a slot on the Pi, which is at the bottom of this stack of components. It is not reachable without removing everything from the case, and that could be difficult to do since the camera is glued onto the front of the case. There is no access to the Pi's USB port, so all access to the device and the images and videos it creates is via WiFi. I must be careful when making any modifications to the system to not break its ability to at least boot to the point that WiFi is up and properly configured. Otherwise I'd need to disassemble it to get to the system on the SD card or to connect a keyboard and monitor.

Power and mount

Power is through a USB cable coming through a hole in the side of the case, better seen in the image of the front above. It has a USB Type-A plug so that it can be plugged into a power bank for portable power. My 10,000 mAh power bank easily keeps it running for several hours. Captures that take longer than that require wall power, for which a USB Type-A extension cable is needed.

It sits on a base made from FORMcard thermoplastic which was shaped around a bolt to make threads for attachment to a standard tripod. A buckled strap was set into the plastic when it was soft, and that holds the camera in place. I added a little epoxy to help secure the strap to the base, and some of that seems to have seeped through and permanently attached the camera case to the base, too. That wasn't my plan, but since a time-lapse pretty much requires a stable base, it works fine to just keep it on a lightweight tripod. The legs on this one can extend to a normal height. A plastic baggie hanging on the tripod handle looks a little makeshift but serves as a holder for the power bank.

Software and system

What all this can do when put together is determined by the software. It looks like a camera, but it's really a computer with a camera module attached. The software is on GitHub. An article describing the software's design and use will be in a follow-up article here soon.

Originally published 2012-03-23.

Just starting out on your own? Brown-bag it for lunch and save $5 per day 5 times a week for 40 years. Invest that $5 each day, and how much will you have when you retire?

[Interactive calculator inputs: savings per lunch ($), years of lunching, investment interest (%)]

It's hard to find a reasonable lunch anywhere for much under $6 or $7, and it's easy to spend much more. But a good home-made lunch can be made for only a dollar or two, so buying your lunch instead of bringing it from home can easily cost you $5 or more every day.

That's only $25 per week, though, and what's that in the grand scheme of things?  Over the course of a 40-year work career it turns out to be a lot.

It adds up even more -- a lot more -- if you consider the amount you could accumulate by investing the savings and compounding the earned interest. A bank savings account earns a percentage point or so these days. The total return on the S&P 500 stock index from 1950 to 2009 came out to 11% annually. A few savvy investors do better, and many do worse. Pick a rate of return, a term, and how much you can save each lunch, and let the calculator tell you what that burger and fries is really costing you.

This uses the formula for periodic compounding found on Wikipedia and makes the assumption that you do not save extra lunch money on weekends. It is rather simple-minded in that it does not figure out the number of days until the next compounding period. That is to say, if you choose to compound quarterly (4 times a year), it just plugs that interval into the formula without trying to figure out on which calendar days the interest will be calculated.  Your actual return may vary slightly, but  you can still see that it pays well to save your lunch money.

What this doesn't tell you is what years of inflation will do to your savings, nor does it consider taxes. Those stories are not nearly so nice.

Feel free to inspect the JavaScript code doing the calculation here:

/*
 * $Revision: 1.1 $
 * $State: Exp $
 * ----------------------------------------------------------------------------
 * From
 * Copyright (C) 2011 by
 * Permission is hereby granted, free of charge, to any person obtaining a copy
 * of this software and associated documentation files (the "Software"), to deal
 * in the Software without restriction, including without limitation the rights
 * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
 * copies of the Software, and to permit persons to whom the Software is
 * furnished to do so, subject to the following conditions:
 * The above copyright notice and this permission notice shall be included in
 * all copies or substantial portions of the Software.
 * ----------------------------------------------------------------------------
 * Just starting out on your own?
 * Brownbag it for lunch and save $5 per day 5 times a week for 40 years.
 * Invest that $5 each day, and how much will you have when you retire?
 * C.f.:
 * ----------------------------------------------------------------------------
 */

function CompoundReturn (years, interest, periodicity, saved) {
    var sum = 0 ;

    var wday = 0 ;
    var today ;
    var totdays = Math.floor(years * 365.25) - 1 ;  // a 1/4 day accounts for leap years
    var r = interest / 100.0 ;

    for (today = 0 ; today <= totdays ; today++) {
        if (wday != 5 && wday != 6) {   // skip weekends!
            sum += saved * Math.pow( (1 + (r / periodicity) ), (periodicity * ( (totdays-today)/365.25) ) ) ;
        }
        wday++ ;  if (wday >= 7)  wday = 0 ;
    }

    sum = Math.round (sum * 100) / 100 ;
    // sum = Math.round (sum) ;
    var sumstr = sum.toLocaleString('en-US') ;
    var resultstring = "$" +saved+ " saved per day, 5 days a week, and invested at " +interest+ "% compounded " +periodicity+ " times per year.<br />After " +years+ " years, retire with about <b class='holycow'>$" +sumstr+ "</b> in the bank!" ;

    document.getElementById('results').innerHTML = resultstring ;
}

Originally published 2009-03-31.

This is a simple JavaScript calculator for figuring out world files. Enter the coordinates of opposing corners of the image. Geographic coordinates should be positive with N/S and E/W chosen for direction from the equator and the prime meridian. For UTM enter either positive or negative numbers for northing and easting, and leave the selectors on N and W. The results can be copied and pasted into your world file. Rotations are not handled in this implementation, so the second and third values will always be zero.

[Interactive calculator inputs: first corner coordinates, second corner coordinates, and image size (width and height in px)]
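For the non-rotated case this calculator handles, the underlying arithmetic is simple. Here is a minimal sketch in JavaScript, assuming the first corner is the upper-left, the second is the lower-right, and the coordinates refer to the centers of the corner pixels; the function and parameter names are mine, not necessarily what the calculator's own code uses:

```javascript
// Compute the six world file values for a non-rotated image.
// (x1, y1) is the upper-left corner, (x2, y2) the lower-right, both taken as
// the centers of the corner pixels; widthPx and heightPx are the image size.
function worldFileValues(x1, y1, x2, y2, widthPx, heightPx) {
    var xPixelSize = (x2 - x1) / (widthPx - 1);   // A: x change per pixel
    var yPixelSize = (y2 - y1) / (heightPx - 1);  // E: y change per pixel (negative for north-up)
    // World file line order: A, D, B, E, C, F -- the rotations D and B are zero here.
    return [xPixelSize, 0, 0, yPixelSize, x1, y1];
}
```

For a 601 by 501 pixel image spanning 125W 50N to 65W 25N, this gives 0.1 degrees of longitude per x pixel and -0.05 degrees of latitude per y pixel.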



Originally published 2009-03-26.

Long story - short

To project the individual frames for this video I used:

gdalwarp -tr 10000 10000 -s_srs EPSG:4326 -t_srs EPSG:2163 nation.gif nationproj.tif

Long story - long

I finally figured out some of the mojo necessary to get some good out of gdalwarp. For me the key was to understand the "spatial reference system" (SRS) parameters. Key to understanding that was grokking the "+proj" notation used in the proj program.

It helped that I already had a handle on some of the basics of map projections. Geek that I am, one of the titles on my light reading list is Flattening the Earth: Two Thousand Years of Map Projections by John P. Snyder, who appears to be the go-to man for map projections. Along with interesting discussion of the pros and cons of each projection, he also gives the formulas for the forward projection of each. Using those formulas (and PerlMagick) I wrote my own working but painfully slow perl programs to implement a couple of conic projections. But with almost fifty thousand images to project those programs proved entirely impractical, if educational. I had plenty of time to dive into gdalwarp while my programs churned away for weeks on end.

That trivia aside, here is what you need to know, or what I needed to know, to get started with gdalwarp. Warning: I am not a professional geographer or cartographer or anything of the sort. My use and understanding of some of the technical terms may make such a professional wince. But we don't care because we just want to make the software work for us, regardless of how fractured the information bouncing around inside our heads is.

Spatial Reference systems & projection parameters

Gdalwarp uses a source SRS to interpret the geometry of the source image, and it uses a target SRS to know how you want to reproject it. These can be specified in "proj" format as described in the documentation for the proj program. At first glance that stuff looks a little like hieroglyphics, but a little knowledge of map projections helps to bring it out of the fog. For example, the formulas for the Albers Equal Area projection require as constants a central meridian and two standard parallels of latitude. Both the proj program and gdalwarp have the Albers Equal Area projection, among many others, built in. You can specify it with "+proj=aea" followed by the parameters it needs. E.g., "+proj=aea +lon_0=90w +lat_1=20n +lat_2=60n" for a projection centered on 90 degrees west longitude with standard parallels 20 and 60 north.

Fortunately, a shorthand notation is available for common projections. You do not have to remember and parrot the parameters for the Lambert Azimuthal Equal Area projection used by the National Atlas of the U.S. ("+proj=laea +lat_0=45 +lon_0=-100 +x_0=0 +y_0=0 +a=6370997 +b=6370997 +units=m +no_defs"), or know that the Cassini projection used for the Kertau 1968 Singapore Grid uses "+proj=cass +lat_0=1.287646666666667 +lon_0=103.8530022222222 +x_0=30000 +y_0=30000 +a=6377304.063 +b=6356103.038993155 +towgs84=-11,851,5,0,0,0,0 +units=m +no_defs". Those parameters are built into gdalwarp under the handy EPSG ids of 2163 and 24500, respectively. (I assume the second one is, though I haven't tried it; I'd certainly expect it to be available simply as EPSG:24500 if I needed it.)

You do need a source for those EPSG ids. I find the projection setup in Quantum GIS to be one handy source since it's already on my desktop. That's where I found the Singapore example. Online sources include the EPSG Geodetic Parameter Registry, among others.

World files

There is one more bit of information about the source image that gdalwarp needs to do its thing. There must be some way for the program to determine, for each source image pixel location, the corresponding x and y value in the source SRS -- e.g., that the pixel at [112,14] corresponds to 32.25N 105.10W. If you have a GeoTIFF file as your source, you're in luck, because all the information needed is there. If it's a GIF or PNG or JPG file, or a non-Geo TIFF file, there is fortunately a way to provide that information, if you have it, in the form of a world file. A world file has the same base name as the image file and a suffix of .gfw, .jgw, .pgw, or .tfw for a GIF, JPG, PNG, or TIFF file, respectively.

World files won't work for input images in which the per-pixel change in x or y varies across the image. They fit the bill for rectangular latitude/longitude non-projected images, and they're perfect for UTM. The format of a world file is simple and is described in the Wikipedia world file article. The situation is rather trivial if your input file is a simple non-rotated grid of equally spaced latitude and longitude coordinates, like this image:

That is an unprojected image with a constant change of longitude for each x pixel and a constant change of latitude for each y pixel. If you know the coordinates of the upper-left corner of the image, if it's not rotated, and if you know its pixel dimensions, and if you remember the arithmetic you learned in grade school, you can easily figure out the values for the world file. (If it's rotated, study that Wikipedia world file article for me.)

When I was using my homemade projection program I guesstimated the latitude and longitude of opposing corners of that radar image. I didn't realize then that the NWS has given us exact values in the form of...a world file! In this directory listing on the NWS website there are world files for many of the images they supply there. They do not supply one for the small image that I'm using, but the lat-lon of the corners is the same as for the large image, so simple arithmetic gives me the degrees-per-pixel values I need to change in the world file. It comes out looking like this:
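A world file of this sort is just six bare numbers, one per line. The values below are only illustrative of the shape, not the actual NWS figures:

```
0.017
0.0
0.0
-0.017
-127.62
50.43
```

In order: the change in longitude per x pixel, the two rotation terms (zero for a non-rotated image), the change in latitude per y pixel (negative, because row numbers increase southward), and the longitude and latitude of the upper-left pixel.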


Update: I have created an online calculator for world files.

Putting it all together

In order to use the simple non-projected lat-lon format of these images you need to specify a corresponding source SRS for gdalwarp, and fortunately one is defined as EPSG id 4326. [Update: see this link, which I'm still trying to digest.] So we're getting close to finishing that gdalwarp command line at the top of this page. Here is the explanation of each part of the command:

-tr 10000 10000
    Hmm. I knew you were going to ask about this one. The gdalwarp manpage
    describes it as "output file resolution (in target georeferenced units)".
    It determines the size of the output, and I found a reasonable set of
    values to use purely by trial and error. Further experimentation shows
    that you can leave it off and gdalwarp will use reasonable defaults.

-s_srs EPSG:4326
    The source is a lat-lon grid of pixels.

-t_srs EPSG:2163
    Give the resulting image the same projection the National Atlas uses. If
    it's good enough for them, it's good enough for me.

nation.gif
    My input image, from the NWS website, is a copy of latest_Small.gif.

nationproj.tif
    The target file, which will be created as one of those handy GeoTIFF
    files that other GIS programs can use.

That's all it took for me to warp that national radar image into a reasonable projection. In hindsight I should have used an equidistant projection instead of the equal area, so that the speed of the storms might be more constant across the whole continent. But at this scale it probably doesn't matter too much:

I converted the TIFF to PNG for the web, but gdalwarp can write a GIF file directly if you want. By the way, you can tell that this image is wider than what you see in the video. This is the correct projection. When I created the original MPEG file I preserved this shape in a 16:9 aspect ratio frame, same as HDTV, but it looks like after all this YouTube mashed it into a 4:3 area. That's a topic of investigation for another day.

I have used gdalwarp on other images. I've edited a GeoTIFF image in the GIMP, which does not preserve the georeferencing tags. I've added them back with gdalwarp: The listgeo program can create a world file from the original GeoTIFF, and that world file can then be used to reproject the edited file back into a GeoTIFF (using the same SRS for source and destination). Both listgeo and gdalinfo, by the way, can give you information about a GeoTIFF file if you need it, including the EPSG ids of the projection in use.

I'll finish this overlong treatise with a warning: gdalwarp is picky about the format of the TIFF files it will read. Any GeoTIFFs you find ought to already be of the proper form. Otherwise it sometimes likes to complain in cryptic ways about bands and such. In such a case, Google is your friend, even if gdalwarp refuses to be.

Final note

I'm not an expert! If you made it this far you now know practically as much as I do on the subject of gdalwarp. If this leaves you with further questions, remember that Google really is your friend.

This site was lost when the version of WordPress and/or PHP being used got to be too old. I still have access to the database containing the old articles and will be using that to restore some that I found most useful. There may be a new article posted from time to time, also. Just be aware that months or even years between posts doesn't necessarily mean that this site is moribund.