Monthly Archives: August 2016

The Rise of the Diegetic Intertitle

Prior to the integration of sound, movies often displayed text-based information using title cards or intertitles. This form of communication is non-diegetic: the characters cannot see it, because it exists outside the world of the story.

However, even after the invention of the talkie, other types of information still needed to be displayed, such as translations for a foreign audience. This was usually done in a very perfunctory way, with nondescript text (typically set in a font such as Times New Roman, with a black outline to contrast with any background) on the lower third of the screen. This text is external to the story, so it seemed natural that it should be stylistically different.

In more recent years, there have been increased demands on visual storytelling: small-screen devices (e.g. mobiles) and computers have become part of the language of cinema (and, by extension, of the video and digital screens on which films are now 'projected'). Additionally, in today's more multicultural world, the need to show multiple languages within the same film means that different typographic techniques can be used to enhance this aspect of the story. In fact, this is an extension of traditional subtitling, where sound effects and untranslated languages are often still included.

When mobile and internet technology first started to appear on screen, an editor would typically cut to a shot of the device, allowing the viewer to read the display. As post-production technology improved, and as TV's need for faster plot exposition, product-placement costs and legal clearances all pushed towards a more generic approach, this eventually evolved into showing the interface incorporated directly onto the visual frame.

Subtitles, captions and interface design typically sit independently on top of the content as a layer added in post-production, i.e. as a semi-transparent wall between the story and the viewer. Integrating these titles to make them appear part of the content can be quite a technical challenge, especially when they need to be tracked to a moving camera.

This overlaying technique was demonstrated in movies such as Man On Fire (2004), Stranger Than Fiction (2006), Disconnect (2012), and 2014’s The Fault In Our Stars, John Wick, and Non-Stop, as well as TV shows such as (perhaps most influentially) Sherlock (2010) and House of Cards in 2013.

There are two main types of elements in modern cinema: diegetic, meaning anything that the characters would recognise as happening within their world (the world of the narrative), and non-diegetic, meaning anything that happens outside the story (opening credit sequences, for example).

However, (much like modern media itself) on-screen typography has surpassed merely being integrated visually into the background plate. It is now becoming increasingly self-reflexive, and blurs these diegetic lines. This is often referred to as “breaking the fourth wall”, and is perhaps best demonstrated in the opening titles to the 2016 film Deadpool, where even the actual names of producers are subverted into narrative elements.

For more exegesis of the diegesis (sorry, I couldn’t help it), see Tim Carmody’s excellent 2011 SVA Interaction Design presentation “The Dictatorial Perpendicular: Walter Benjamin’s Reading Revolution”.

“A fourth wall break inside a fourth wall break? That’s like, sixteen walls.” – Deadpool

Mind The Gap - Johnston100

Johnston100 by Monotype

Edward Johnston created the typeface used by London Transport over 100 years ago. Since then, needs have changed, so Monotype were commissioned to redraw the entire set of glyphs and to create new weights such as thin and hairline.


The Amazon Dash Button

Amazon’s branded Dash Buttons were introduced in March 2015, allowing products to be easily re-ordered with a single click of the battery-powered device – not to be confused with the unbranded UK AmazonFresh version (which works like a miniature version of the popular hands-free Amazon Echo).

As an inexpensive (US$4.99) wifi-enabled IoT device, within three months it was already being repurposed. There are a handful of approaches, from fairly non-technical ARP probe detection through to bare-metal reprogramming. Amazon themselves are also reaching out to developers and smaller brands with their Dash Replenishment Service.
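The ARP-probe approach deserves a sketch: when a Dash Button wakes up, it broadcasts an ARP probe on your network before phoning home, and that probe can be treated as a generic "button pressed" event. Below is a minimal, hedged illustration in Python using only the standard library; the MAC address is a hypothetical placeholder (you find your button's real one by pressing it while sniffing), and the raw-socket capture loop is shown only as a comment since it is Linux-specific and needs root.

```python
import struct
from typing import Optional

# Hypothetical MAC of your own Dash Button, discovered by pressing it
# while watching network traffic. Replace with the real value.
DASH_MAC = "74:75:48:aa:bb:cc"

def sender_mac(frame: bytes) -> Optional[str]:
    """Return the ARP sender MAC of a raw Ethernet frame, or None if not ARP."""
    if len(frame) < 28:
        return None
    # Ethernet header: dst (6) + src (6) + ethertype (2)
    (ethertype,) = struct.unpack("!H", frame[12:14])
    if ethertype != 0x0806:  # 0x0806 = ARP
        return None
    # ARP packet: htype(2) ptype(2) hlen(1) plen(1) oper(2), then
    # sender hardware address at bytes 22..27 of the frame
    mac = frame[22:28]
    return ":".join(f"{b:02x}" for b in mac)

def is_dash_press(frame: bytes) -> bool:
    """True when the frame is an ARP probe from our known button."""
    return sender_mac(frame) == DASH_MAC

# A capture loop on Linux (requires root) might look like:
# import socket
# s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.ntohs(0x0003))
# while True:
#     if is_dash_press(s.recv(65535)):
#         trigger_your_iot_action()  # hypothetical hook
```

The appeal of this hack is that it never modifies the button at all: the device simply fails to place an order (no product was selected during setup), while your sniffer reacts to the press.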

Getting started seems pretty simple: when you get a Dash Button, Amazon gives you a list of setup instructions. Follow them, but don't complete the final step. Do not select a product; just exit the app.

Most techniques use something like IFTTT to connect the button event to an IoT trigger of your choosing. Instructables has a great step-by-step tutorial, and there's some great open-source code available on GitHub.

Amazon Dash Button (Tide) on washing machine
The Dash Button as it is usually used: to order more Amazon products (such as washing powder).

The detailed specs:

  • The CPU is an STM32F205RG6, an ARM Cortex-M3 that can run at up to 120 MHz and has 128 KB of RAM and 1 MB of flash memory for program storage
  • The WiFi module is a BCM943362, which in combination with the CPU makes it a platform for Broadcom’s WICED SDK
  • There’s a 16 Mbit SPI flash ROM, typically used in conjunction with the WICED SDK for storing application data
  • An ADMP441 microphone is connected to the CPU and used by the Dash iOS application to configure the device using the speaker on a phone/tablet
  • There’s a single RGB LED and a button

Quite powerful for US$5.

However, the next step in this evolution has just been released – the AWS IoT Button.

The AWS IoT Button is a programmable button based on the Amazon Dash Button hardware. This simple Wi-Fi device is easy to configure and designed for developers to get started with AWS IoT, AWS Lambda, Amazon DynamoDB, Amazon SNS, and many other Amazon Web Services without writing device-specific code.

Targeted at developers, this US$20 version connects to the web using the Amazon Web Services Lambda platform without writing a line of device-specific code (ok, so not developers then). However, even the “Hello World” example described here seems quite technical, in some ways even more so than hacking the original (and at four times the cost). It does support three types of button push, though (short, long and double), allowing for more interactions.
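Those three push types arrive at your Lambda function as a JSON event with a "clickType" field of "SINGLE", "DOUBLE" or "LONG". Here is a minimal sketch of a Python handler that dispatches on it; the three action functions are hypothetical placeholders, not anything Amazon provides.

```python
# Placeholder actions; in a real deployment these might call DynamoDB,
# SNS, or any other service wired up to the button.
def reorder_coffee():
    return "ordered coffee"

def toggle_lights():
    return "toggled lights"

def send_alert():
    return "sent alert"

# Map the button's clickType values to actions.
ACTIONS = {
    "SINGLE": reorder_coffee,
    "DOUBLE": toggle_lights,
    "LONG": send_alert,
}

def lambda_handler(event, context):
    """AWS Lambda entry point for an AWS IoT Button press event."""
    click = event.get("clickType", "SINGLE")
    result = ACTIONS.get(click, reorder_coffee)()
    return {"statusCode": 200, "body": result}
```

The event also carries the button's serial number and battery voltage, so a single function can serve several buttons if you key the actions on both fields.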

AWS IoT enables Internet-connected things to connect to the AWS cloud and lets applications in the cloud interact with Internet-connected things. Common IoT applications either collect and process telemetry from devices or enable users to control a device remotely.

Masters of Videomontage

Some of the most fascinating video animators I’ve ever seen – Cyriak (Brighton, UK), Fernando Livschitz (Buenos Aires, Argentina) and Till Nowak (Hamburg, Germany). Using found footage and masks, they create a surreal and often disturbing view of reality.

As mentioned in the Heroes of Animation film, Cyriak sees this style as a natural evolution of the Terry Gilliam school: taking photographic elements and moving them in unexpected ways. I would go further and say that it takes Russian Constructivist fine-art photomontage to its natural conclusion.

We are so used to amateur camcorder and mobile video these days that this approach seems to even transcend animation – and we are drawn into their world. So much so that The Institute for Centrifugal Research seems to be (remotely) plausible.

And here’s how it’s done.

This profile of Cyriak includes a history of his work, and a demonstration of his process. This behind-the-scenes video from The Centrifuge Brain Project shows the CGI overlaid over the source footage, and this After Effects tutorial explains the basics, using a locked-off camera (then you can add natural camera movement afterwards).