Roll 13, 14

Still shooting film. Still enjoying it. If I get one good shot out of a roll I’m happy.

Roll 13/02

Roll 13/31

Roll 14/23

I’ve been using Eye Culture in Bethnal Green for processing and scans. High res JPEGs are about £6/roll. They do a good job.

Facts Not Opinions

Facts not opinions

Facts not opinions, inscribed on the Kirkaldy Testing Museum.

1. I love Matt Edgar’s posts about historical engineers. It’s like a mini In Our Time, without the posh people.
2. I’d like to see more mottos inscribed on buildings. Mottos, not slogans.

Recent quotes

If a cook touches a sauce, it gets passed through a sieve.

Love that.

Sometimes all you need is for someone to see what you are planning and not look bemused.

On Bill Drummond and Jimmy Cauty, from the book about the KLF with the long subtitle.

Perhaps more than anything they did, The Manual led to the pair being perceived as cynical media manipulators rather than random followers of chaos. In a sense, this was always inevitable when they became successful because the public narrative believes that success comes from knowing what you are doing. The equally common phenomenon of stumbling upwards is rarely recognised.

From the same book.

Almost as much a condo as a car.

From this video about the Pontiac Stinger, found via Fosta. If anyone wants me to do a presentation about feature creep, I am ready now.

Measures / Countermeasures

Presence Orb

A couple of days ago it appeared that a company called Renew are trialling tracking smartphones using a device called the Presence Orb (“A cookie for the real world”) placed in recycling bins across the City of London.

The mechanism by which they do this hasn’t been discussed in much detail in the media, so I thought I’d have a go at unpacking it.

Every device that connects to a wired or wireless network has a Media Access Control (MAC) address that uniquely identifies it. It’s unique worldwide, and written into read-only memory on a chip.

The MAC address is a 48-bit number, usually written as six pairs of hexadecimal digits, such as 01:23:45:67:89:ab. The first three pairs are a manufacturer-specific prefix: Apple has 00:03:93, Google has 00:1A:11, and so on. The list is freely available, and many manufacturers have multiple prefixes.
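
If you want to see this for yourself, here’s a quick sketch: pull the prefix out of an address and look it up in the IEEE’s registry file (this assumes you’ve downloaded oui.txt, which lists prefixes as hyphen-separated uppercase pairs):

mac="00:03:93:67:89:ab"
# The first three octets are the manufacturer (OUI) prefix.
oui=$(echo "$mac" | cut -d: -f1-3 | tr 'a-f' 'A-F' | tr ':' '-')
# Look it up in the IEEE registry file.
grep "^$oui" oui.txt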

MAC addresses are only used for communication between devices on the same network, and they’re not visible outside of that (on the wider internet, for example).

Your smartphone, knowing that you prefer to be on a fast wireless network rather than 3G, will regularly ask nearby wireless networks to announce themselves, and if it finds one you’ve saved previously, it’ll connect to it automatically.

This is called a probe request, and like all packets your device sends, it contains your MAC address, even if you don’t connect to a network. They’re sent every few seconds if you’re actively using the phone, or every minute or so if it’s on standby with the display off. The frequency varies with hardware and operating system.

By storing the MAC address and tracking the signal strength across a number of different receivers, you can estimate a device’s position, which will be reasonably accurate along a 1D line like a street, and less so in a more complex 2D space.
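
For the curious, the usual starting point for turning signal strength into distance is the log-distance path loss model (a rough sketch; n is an environment-dependent exponent, typically somewhere between 2 and 4 in a built-up area, and d_0 is a reference distance):

RSSI(d) = RSSI(d_0) - 10 \, n \, \log_{10}(d / d_0)

which rearranges to

d = d_0 \cdot 10^{(RSSI(d_0) - RSSI(d)) / (10n)}

With distance estimates from three or more receivers you can trilaterate a 2D position; along a street, one or two receivers are enough for a rough 1D fix.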

Renew claim that during a single day in the trial period they detected 106,629 unique devices across 946,016 data points from 12 receivers. Given the entire working population of the City of London is ~350,000, that seems high, but I guess they’re not mentioning that they pick up everyone in a passing bus, taxi or lorry too.

They make sure to mention that the MAC address doesn’t reveal a name and address or other data about an individual, but because your MAC address never changes, as coverage grows you could be tracked from your bus stop, to your tube station, and out the other side, to your office. And then that could be correlated against store cards, using timestamps to tie it to a personal identity. Or perhaps you’d like to sign into a hotspot using Facebook? And so on.

Of course, you can opt out.

But here’s the thing. Even though the MAC address is baked into the hardware, you can change it.

On OS X, you’d do something like this to generate a random MAC address:

openssl rand -hex 6 | sed 's/\(..\)/\1:/g; s/.$//'

And then to change it:

sudo ifconfig en0 ether 00:11:22:33:44:55

Replacing 00:11:22:33:44:55 with the address you just generated. On most devices it’ll reset back to the default when the machine is restarted.

(This technique will also let you keep using hotspots with a 15 minute free period. Just rotate your MAC and connect again.)

To all intents and purposes, you are now a different device on the network. And if you didn’t care about using your wireless connection for anything useful, you could run a script to rotate that every few seconds, broadcasting probe requests to spoof any number of devices, from any manufacturer(s) you wish to be.
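
A minimal sketch of that rotation, assuming OS X’s ifconfig syntax and the en0 interface (pick your own interval):

#!/bin/bash
# Rotate en0's MAC address every few seconds (OS X syntax).
while true; do
  # Generate a random 48-bit address as colon-separated hex pairs.
  mac=$(openssl rand -hex 6 | sed 's/\(..\)/\1:/g; s/.$//')
  sudo ifconfig en0 ether "$mac"
  sleep 5
done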

The same packet that contains your MAC address also contains a sequence number, an incrementing and rotating 12-bit integer used to check ordering of messages. The Presence Orb could use that to discard a series of messages that follow a sequence. In turn, you could randomise that.

They might check signal strength, discarding multiple messages with similar volume. In turn, you’d randomise power output, appearing to be a multitude of distances from the receiver.

Then there are beam-forming antennas, scattering off buildings and so on, to ruin attempts to trilaterate your signal. Stick all this on a Raspberry Pi, put it in a box, plug it into a car battery, tuck it under an alcove, and walk away.

If you ensure your signal strength is kept within bounds, and traffic low enough not to disrupt other genuine users of nearby wireless networks, I believe this is legal, and it’d effectively ruin Renew’s aggregate data, making traffic analysis impossible.

It’s still unclear whether what Renew is doing is legal. I am not a lawyer, but I suspect we’d need a clarification from the ICO as to whether the combination of MAC address and location is personal information and regulated by the Data Protection Act, as is suggested by the EU’s Article 29 Working Party.

It seems likely that the law will be a step behind location tracking technology, for a while at least. And while that’s the case, chaff is going to be an important part of maintaining privacy. The tools are there to provide it, if we want to.

Project Looking Glass


Newspaper Club has two offices: one in Glasgow and one in London. Glasgow is the HQ, where all the customer service, logistics and operational stuff happens. And in London, we develop and manage all the products and services, designing the site, writing code and so on.

We chat all day long between us in a couple of Campfire rooms, and we’re not at the size where that’s a bottleneck or difficult to manage. But it’s nice to have a more ambient awareness of each other’s comings and goings, especially on the days when we’re all heads down in our own work, and there isn’t as much opportunity to paste funny videos in Campfire.

I wanted to make something to aid that. A two-way office-to-office video screen. I wanted it to be always on, with no dialling up required, and for it to automatically recover from network outages. I wanted the display to be big, but not intrusive. I didn’t want a video conference. I didn’t want people to be able to log in from home, or look back through recorded footage. I wanted to be able to wave at the folks in the other office every morning and evening, and for that to feel normal.

Here’s what we came up with:

Looking Glass #3

Looking Glass #2

There’s a Raspberry Pi at each end, each connected to a webcam and a monitor. You should be able to put a pair together for under £150, if you can find a spare monitor or two. There’s no sound, and the video is designed to look reasonable, while being tolerant of a typical office’s bandwidth constraints.

Below, I’ll explain how you can make one yourself.

There’s obvious precedent here, the most recent of which is BERG’s Connbox project (the writeup is fantastic — read it!), but despite sharing studio space with them, we’d never actually talked about the project explicitly, so it’s likely I just absorbed the powerful psychic emanations from Andy and Nick. Or the casual references to GStreamer in the kitchen.

Building this has been a slow project for me, tucked into odd evenings and weekends over the last year. It’s been through a few different iterations of hardware and software, trying to balance the price and availability of the parts, complexity of the setup, and robustness in operation.

I really wanted it to be cheap, because it felt like it should be. I knew I could make it work with a high spec ARM board or an x86 desktop machine (that turned out to be easy), but I also knew all the hardware and software inside a £50 Android phone should be able to manage it, and that felt more like the scale of the thing I wanted to build. Just to make a point, I guess.

I got stuck on this for a while, until H264 encoding became available in the Raspberry Pi’s GPU. Now we have a £25 board that can do hardware-accelerated simultaneous H264 encoding/decoding, with Ethernet, HDMI and audio out, on a modern Linux distribution. Add a display and a webcam, and you’re set.

The strength of the Raspberry Pi’s community is not to be understated. When I first used it, it ran an old Linux kernel (3.1.x), missing newer features and security fixes. The USB driver was awful, and it would regularly drop 30% of the packets under load, or just lock up when you plugged the wrong keyboard in.

Now, there’s a modern Linux 3.6 kernel, the USB driver seems to be more robust, and most binaries in Raspbian are optimised for the CPU architecture. So, thank you to everyone who helped make that happen.

Building One Yourself

The high level view is this: we’re using GStreamer to take a raw video stream from a camera, encode it into H264 format, bung that into RTP packets over UDP, and send those to another machine, where another instance of GStreamer receives them, unpacks the RTP packets, reconstructs the H264 stream, decodes it and displays it on the screen.
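
In gst-launch terms, the two halves look something like this. This is a sketch rather than our exact scripts: the element properties, resolution and receiver address (192.168.0.10) are illustrative.

# Sender: webcam -> hardware H264 encode -> RTP -> UDP
gst-launch-1.0 v4l2src device=/dev/video0 \
  ! video/x-raw,width=1280,height=720,framerate=30/1 \
  ! omxh264enc control-rate=variable target-bitrate=1000000 \
  ! h264parse \
  ! rtph264pay config-interval=1 pt=96 \
  ! udpsink host=192.168.0.10 port=5000

# Receiver: UDP -> RTP depacketise -> hardware H264 decode -> screen
gst-launch-1.0 udpsrc port=5000 \
  caps="application/x-rtp,media=video,clock-rate=90000,encoding-name=H264,payload=96" \
  ! rtph264depay \
  ! h264parse \
  ! omxh264dec \
  ! eglglessink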

Install Raspbian, and using raspi-config, set the GPU memory split to 128MB (I haven’t actually tried it with 64MB, so YMMV). Do a system upgrade with sudo apt-get update && sudo apt-get dist-upgrade, and then upgrade your board to the latest firmware using rpi-update, like so:

sudo apt-get install git-core
sudo wget https://raw.github.com/Hexxeh/rpi-update/master/rpi-update -O /usr/bin/rpi-update && sudo chmod +x /usr/bin/rpi-update
sudo rpi-update

Reboot, and you’re now running the latest Linux kernel, and all your packages are up to date.

GStreamer in Raspbian wheezy is at version 0.10, and doesn’t support the OMX H264 encoder/decoder pipeline elements. Thankfully, Defiant on the Raspberry Pi forums has built and packaged up GStreamer 1.0, including all the OMX libraries, so you can just apt-get the lot and have it up and running in a few seconds.

Add the following line to /etc/apt/sources.list:

deb http://vontaene.de/raspbian-updates/ . main

And then install the packages:

sudo apt-get update
sudo apt-get install gstreamer1.0-omx gstreamer1.0-plugins-bad gstreamer1.0-plugins-base gstreamer1.0-plugins-good gstreamer1.0-plugins-ugly gstreamer1.0-tools gstreamer1.0-x

We also need the video4linux tools to interface with the camera:

sudo apt-get install v4l-utils v4l-conf 

To do the actual streaming I’ve written a couple of small scripts to encapsulate the two GStreamer pipelines, available in tomtaylor/looking-glass on GitHub.

By default they’re set up to stream to localhost on port 5000, so if you run them both at the same time, from different consoles, you should see your video pop up on the screen. Even though this doesn’t look very impressive, you’re actually running through the same pipeline that works across the local network or internet, so you’re most of the way there.

The scripts can be configured with environment variables, which should be evident from the source code. For example, setting HOST to the other Pi’s address will stream your webcam to it on the default port of 5000.
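
Something like this, though the script names here are placeholders (check the repository for the actual filenames):

# On the sending Pi (203.0.113.10 stands in for the other office):
HOST=203.0.113.10 PORT=5000 ./transmit.sh

# On the receiving Pi:
PORT=5000 ./receive.sh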

To launch the scripts at boot time we’ve been using daemontools. This makes it easy to just reboot the Pi if something goes awry. I’ll leave the set up of that as an exercise for the reader.
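
If you go the daemontools route, a service is just a directory containing a run script. A minimal sketch, assuming a checkout in /home/pi/looking-glass and the placeholder script name from above:

#!/bin/sh
# /etc/service/looking-glass/run
# supervise restarts this automatically if the stream dies.
cd /home/pi/looking-glass || exit 1
exec ./transmit.sh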

You don’t have to use a Raspberry Pi at both ends. You could use an x86 Linux machine, with GStreamer and the various codecs installed. The scripts support overriding the encoder, decoder and video sink pipeline elements to use other elements supported on your system.

Most x86 machines have no hardware H264 encoder support, but we can use x264enc to do software encoding, as long as you don’t mind giving over a decent portion of a CPU core. We found this works well, but needed some tuning to reduce the latency. Something like x264enc bitrate=192 sync-lookahead=0 rc-lookahead=10 threads=4 option-string="force-cfr=true" seemed to perform well without too much lag. For decoding we’re using avdec_h264. In Ubuntu 12.04 I had trouble getting eglglessink to work, so I’ve swapped it for xvimagesink. You shouldn’t need to change any of this if you’re using a Pi at both ends though – the scripts default to the correct elements.

Camera-wise, in Glasgow we’re using a Logitech C920, which is a good little camera, with a great image, if a touch expensive. In London it’s a slightly tighter space, so we’re using a Genius WideCam 1050, which is almost fisheye in angle. We all look a bit skate video, but it seemed more important to get everyone in the shot.

You’ll probably also need to put these behind a powered USB hub, otherwise you’ll find intermittent lockups as the Raspberry Pi can’t provide enough power to the camera. It’s not the cheapest, but the Plugable 7-port USB hub worked well for us.

The End

It works! The image is clear and relatively smooth – comparable to Skype, I’d say. It can be overly affected by internet weather, occasionally dropping to a smeary grey mess for a few seconds, so it definitely needs a bit of tuning to dial in the correct bitrate and keyframe interval for lossy network conditions. It always recovers, but can be a bit annoying while it works itself out.

And it’s fun! We hung up a big “HELLO GLASGOW” scrawled on A3 paper from the ceiling of our office, and had a good wave at each other. That might get boring, and it might end up being a bit weird, and if so, we’ll turn it off. But it might be a nice way to connect the two offices without any of the pressures of other types of synchronous communication. We’ll see how it goes.

If you make one, I’d love to hear about it, especially if you improve on any of the scripts or configuration.


Roll Five

This is the first roll of film that I’ve processed and scanned by my own fair hand. I think film processing falls in the same bucket as audiophilia — lots of kit, thousands of variables to fiddle with, and just enough science to justify almost any decision you want.

Roll 5/16

Roll 5/23

Roll 5/25

Roll 5/33

Roll 5/34

Roll 5/36

Print Production with Quartz and Cocoa

I wrote a post on the Newspaper Club blog the other day about ARTHR & ERNIE, our systems for making a newspaper in your browser.

One of the things I touched on was Quartz, part of Cocoa’s Core Graphics stack, and how we use it to generate fast previews and high-quality PDFs on the fly, as a user designs their paper.

If you need to do something similar, even if it’s not in real time, Quartz is a great option. Unlike PDF-specific generation libraries, such as Prawn, it’s fast and flexible, with a great quality typography engine (Core Text). And unlike the lower-level rasterisation libraries, like Cairo and Skia, it supports complex colour management, with CMYK support. The major downside is that you need to run it on Mac OS X, for which hosting is less available and slightly arcane.

It took a lot of fiddling to understand exactly how to best use all the various APIs, so I thought it might be useful for someone if I just wrote down a bit of what I learnt along the way.

I’m going to assume you know something about Cocoa and Objective-C. All these examples run on Mac, but apart from the higher level Core Text Layout System, the same APIs should be available on iOS too. They assume ARC support.

Generating Preview Images

Let’s say we have an NSView hierarchy, containing things like an NSImageView or an NSTextView.

Generating an NSImage is pretty easy – you render the NSView into an NSGraphicsContext backed by an NSBitmapImageRep, like so:

- (NSImage *)imageForView:(NSView *)view width:(float)width {
    float scale = width / view.bounds.size.width;
    float height = round(scale * view.bounds.size.height);

    NSString *colorSpace = NSCalibratedRGBColorSpace;
    NSBitmapImageRep *bitmapRep;
    bitmapRep = [[NSBitmapImageRep alloc] initWithBitmapDataPlanes:nil
                                                        pixelsWide:width
                                                        pixelsHigh:height
                                                     bitsPerSample:8
                                                   samplesPerPixel:4
                                                          hasAlpha:YES
                                                          isPlanar:NO
                                                    colorSpaceName:colorSpace
                                                      bitmapFormat:0
                                                       bytesPerRow:(4 * width)
                                                      bitsPerPixel:32];

    NSGraphicsContext *graphicsContext;
    graphicsContext = [NSGraphicsContext graphicsContextWithBitmapImageRep:bitmapRep];
    [graphicsContext setImageInterpolation:NSImageInterpolationHigh];

    CGContextScaleCTM(graphicsContext.graphicsPort, scale, scale);
    [view displayRectIgnoringOpacity:view.bounds inContext:graphicsContext];

    NSImage *image = [[NSImage alloc] initWithSize:bitmapRep.size];
    [image addRepresentation:bitmapRep];
    return image;
}

You can then convert this to a JPEG or similar for previewing.

NSBitmapImageRep *imageRep = [[image representations] objectAtIndex:0];
NSData *bitmapData = [imageRep representationUsingType:NSJPEGFileType properties:nil];

Generating PDFs

Generating a PDF is easy too, given an NSArray of views in page order.

- (NSData *)pdfDataForViews:(NSArray *)viewsArray {
    NSMutableData *data = [NSMutableData data];
    CGDataConsumerRef consumer;
    consumer = CGDataConsumerCreateWithCFData((__bridge CFMutableDataRef)data);

    // Assume the first view is the same size as the rest of them
    CGRect mediaBox = [[viewsArray objectAtIndex:0] bounds];
    CGContextRef ctx = CGPDFContextCreate(consumer, &mediaBox, nil);
    CFRelease(consumer);

    NSGraphicsContext *gc = [NSGraphicsContext graphicsContextWithGraphicsPort:ctx flipped:NO];

    [viewsArray enumerateObjectsUsingBlock:^(NSView *pageView, NSUInteger idx, BOOL *stop) {
        CGContextBeginPage(ctx, &mediaBox);
        CGContextSaveGState(ctx);
        [pageView displayRectIgnoringOpacity:mediaBox inContext:gc];
        CGContextRestoreGState(ctx);
        CGContextEndPage(ctx);
    }];

    CGPDFContextClose(ctx);
    CGContextRelease(ctx);

    return data;
}

Quartz maps very closely to the PDF format, making the PDF rendering effectively a linear transformation from Quartz’s underpinnings. But Apple’s interpretation of the PDF spec is odd in ways I don’t quite understand, and can cause some problems with less flexible PDF parsers, such as old printing industry hardware.

To fix this, we post process the PDF in Ghostscript, taking the opportunity to reprocess the images into a sensible maximum resolution for printing (150 DPI in our case). We end up with a file in PDF/X-3 format, a subset of the PDF spec recommended for printing.

- (NSData *)postProcessPdfData:(NSData *)data {
    NSTask *ghostscriptTask = [[NSTask alloc] init];
    NSPipe *inputPipe = [[NSPipe alloc] init];
    NSPipe *outputPipe = [[NSPipe alloc] init];

    [ghostscriptTask setLaunchPath:@"/usr/local/bin/gs"];
    [ghostscriptTask setCurrentDirectoryPath:[[NSBundle mainBundle] resourcePath]];

    NSArray *arguments = @[
        @"-sDEVICE=pdfwrite",
        @"-dPDFX",
        @"-dSAFER",
        @"-sProcessColorModel=DeviceCMYK",
        @"-dColorConversionStrategy=/LeaveColorUnchanged",
        @"-dPDFSETTINGS=/prepress",
        @"-dDownsampleColorImages=true",
        @"-dDownsampleGrayImages=true",
        @"-dDownsampleMonoImages=true",
        @"-dColorImageResolution=150",
        @"-dGrayImageResolution=150",
        @"-dMonoImageResolution=150",
        @"-dNOPAUSE",
        @"-dQUIET",
        @"-dBATCH",
        @"-P", // look in the current dir for the ICC profile referenced in PDFX_def.ps
        @"-sOutputFile=-",
        @"PDFX_def.ps",
        @"-"
    ];
    [ghostscriptTask setArguments:arguments];
    [ghostscriptTask setStandardInput:inputPipe];
    [ghostscriptTask setStandardOutput:outputPipe];
    [ghostscriptTask launch];

    NSFileHandle *writingHandle = [inputPipe fileHandleForWriting];
    [writingHandle writeData:data];
    [writingHandle closeFile];

    NSFileHandle *readingHandle = [outputPipe fileHandleForReading];
    NSData *outputData = [readingHandle readDataToEndOfFile];
    [readingHandle closeFile];

    return outputData;
}
PDFX_def.ps is a PostScript file, used by Ghostscript to ensure the output is PDF/X-3 compatible. It looks a bit like this:
%!
systemdict /ProcessColorModel known {
  systemdict /ProcessColorModel get dup /DeviceGray ne exch /DeviceCMYK ne and
} {
  true
} ifelse
{
  (ERROR: ProcessColorModel must be /DeviceGray or DeviceCMYK.)=
  /ProcessColorModel cvx /rangecheck signalerror
} if

% Define entries to the document Info dictionary:
/ICCProfile (JapanColor2002Newspaper.icc) def % Customize or remove.

[ /GTS_PDFXVersion (PDF/X-3:2002) % Must be so (the standard requires).
  /Creator (ARTHR)
  /Producer (ERNIE: Expertly Rendered Newspaper Internet Engine)
  /Trapped /False % Must be so (Ghostscript doesn't provide other).
  /DOCINFO pdfmark

% Define an ICC profile:
currentdict /ICCProfile known {
  [/_objdef {icc_PDFX} /type /stream /OBJ pdfmark
  [{icc_PDFX} <</N systemdict /ProcessColorModel get /DeviceGray eq {1} {4} ifelse >> /PUT pdfmark
  [{icc_PDFX} ICCProfile (r) file /PUT pdfmark
} if

% Define the output intent dictionary:
[/_objdef {OutputIntent_PDFX} /type /dict /OBJ pdfmark
[{OutputIntent_PDFX} <<
  /Type /OutputIntent % Must be so (the standard requires).
  /S /GTS_PDFX % Must be so (the standard requires).
  /OutputCondition (Japan Color 2002 for Newspaper Printing) % Customize
  /Info (none) % Customize
  /OutputConditionIdentifier (JCN2002) % Customize
  /RegistryName (http://www.color.org) % Must be so (the standard requires).
  currentdict /ICCProfile known {
    /DestOutputProfile {icc_PDFX} % Must be so (see above).
  } if
>> /PUT pdfmark
[{Catalog} <</OutputIntents [ {OutputIntent_PDFX} ]>> /PUT pdfmark

CMYK Images

Because Quartz maps so closely to the PDF format, it won’t do any conversion of your images at render time. If you have RGB images and CMYK text you’ll end up with a mixed PDF.

Converting an NSImage from RGB to CMYK is easy though:

NSColorSpace *targetColorSpace = [NSColorSpace genericCMYKColorSpace];
NSBitmapImageRep *targetImageRep;

if ([sourceImageRep colorSpace] == targetColorSpace) {
    targetImageRep = sourceImageRep;
} else {
    targetImageRep = [sourceImageRep bitmapImageRepByConvertingToColorSpace:targetColorSpace
                                                            renderingIntent:NSColorRenderingIntentPerceptual];
}

NSData *targetImageData = [targetImageRep representationUsingType:NSJPEGFileType properties:nil];
NSImage *targetImage = [[NSImage alloc] initWithData:targetImageData];

Multi-Core Performance

Normally in Cocoa, all operations that affect UI should happen on the main thread. However, we have some exceptional circumstances which mean we can parallelise some of the slower bits of our code if we want to, for performance.

Firstly, our NSViews stand alone: they don’t appear on screen, they’re not part of an NSWindow, and they’re not going to be affected by any other part of the operating system. This means we don’t need to use the main thread for our UI operations – nothing else will be touching them.

The method that actually performs the rasterisation of an NSView is thread-safe, assuming the NSGraphicsContext is owned by the thread, but there are often shared objects behind the scenes, such as Core Text controllers. You can either take private copies of these (which seems like an opportunity to introduce some nasty and complex bugs), or you can single-thread the rasterisation process, but multi-thread everything either side of it, such as the loading and conversion of any resources beforehand and the JPEG conversion afterwards.

We use a per-document serial dispatch queue, and put the rasterisation through that, which still gives us multi-core image conversion (the slowest portion of the code).

- (NSArray *)jpegsPreviewsForWidth:(float)width quality:(float)quality {
    NSUInteger pageCount = document.pages.count;
    NSMutableArray *pagePreviewsArray = [NSMutableArray arrayWithCapacity:pageCount];

    // Set all the elements to null, so we can replace them later.
    // TODO: I'm not sure if this is necessary.
    for (int i = 0; i < pageCount; i++) {
        [pagePreviewsArray addObject:[NSNull null]];
    }

    dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    dispatch_queue_t serialQueue;
    serialQueue = dispatch_queue_create("com.newspaperclub.PreviewsSynchronizationQueue", NULL);

    dispatch_apply(pageCount, queue, ^(size_t pageIndex) {
        NSData *jpegData = [self jpegForPageIndex:pageIndex width:width quality:quality];

        // Synchronize access to pagePreviewsArray through a serial dispatch queue
        dispatch_sync(serialQueue, ^{
            [pagePreviewsArray replaceObjectAtIndex:pageIndex withObject:jpegData];
        });
    });

    return pagePreviewsArray;
}

- (NSData *)jpegForPageIndex:(NSInteger)pageIndex width:(float)width quality:(float)quality {
    // Perform rasterization on the render queue
    __block NSImage *image;
    dispatch_sync(renderDispatchQueue, ^{
        image = [self imageForPageIndex:pageIndex width:width];
    });

    NSBitmapImageRep *imageRep = [[image representations] objectAtIndex:0];
    NSNumber *qualityNumber = [NSNumber numberWithFloat:quality];
    NSDictionary *properties = @{ NSImageCompressionFactor: qualityNumber };
    NSData *bitmapData = [imageRep representationUsingType:NSJPEGFileType properties:properties];
    return bitmapData;
}

The End

Plumping for Quartz + Cocoa to do something like invoice generation is likely to be overkill – you’re probably better off with a higher level PDF library. But if you need to have very fine control over a document, to have quality typography, and to render it with near real-time performance, it’s a great bet and we’re very happy with it.