Monthly Archives: August 2016

The Rise of the Diegetic Intertitle

Prior to the integration of sound, movies often displayed text-based information using title cards or intertitles. This form of communication is non-diegetic content, as the characters cannot see it — it exists outside the world of the story.

However, even after the invention of the talkie, other types of information needed to be displayed — such as translations for a foreign audience. This was usually done in a very perfunctory way, with nondescript text (typically set in a font such as Times New Roman, with a black outline to contrast with any background) on the lower third of the screen. This text is external to the story, so it seemed natural that it should be stylistically different.

In more recent years, the demands on visual storytelling have increased — small-screen devices (e.g. mobile phones) and computers have become part of the language of cinema (and, by extension, so have the video and digital screens on which they are ‘projected’). Additionally, in today’s more multicultural world, the requirement to show multiple languages within the same film means that different typographic techniques can be used to enhance this aspect of the story. In fact, this is an extension of traditional subtitling, where sound effects and untranslated languages are often still included.

As mobile and internet technology started to appear on screen, an editor would typically cut to a shot of the device, allowing the viewer to read the display. As post-production technology improved — and as TV’s need for faster plot exposition, product placement costs and legal clearances all pushed towards a more generic approach — this eventually evolved into showing the interface incorporated directly into the visual frame.

Subtitles, captions and interface design typically sit independently on top of the content as a layer added in post-production — i.e. as a semi-transparent wall between the story and the viewer. Integrating these titles so that they appear to be part of the content can be quite a technical challenge, especially when they need to be tracked to a moving camera.

This overlaying technique was demonstrated in movies such as Man On Fire (2004), Stranger Than Fiction (2006), Disconnect (2012), and 2014’s The Fault In Our Stars, John Wick, and Non-Stop, as well as TV shows such as (perhaps most influentially) Sherlock (2010) and House of Cards (2013).

There are two main types of elements in modern cinema: diegetic — anything that the characters would recognise as happening within the world of the narrative — and non-diegetic — anything that happens outside the story (opening credit sequences, for example).

However, (much like modern media itself) on-screen typography has surpassed merely being integrated visually into the background plate. It is now becoming increasingly self-reflexive, and blurs these diegetic lines. This is often referred to as “breaking the fourth wall”, and is perhaps best demonstrated in the opening titles of the 2016 film Deadpool, where even the actual names of the producers are subverted into narrative elements.

For more exegesis of the diegesis (sorry, I couldn’t help it), see Tim Carmody’s excellent 2011 SVA Interaction Design presentation “The Dictatorial Perpendicular: Walter Benjamin’s Reading Revolution”.

“A fourth wall break inside a fourth wall break? That’s like, sixteen walls.” — Deadpool

Mind The Gap - Johnston100

Johnston100 by Monotype

Edward Johnston created the font used by London Transport over 100 years ago. Since then, needs have changed — so Monotype were commissioned to redraw the entire set of glyphs, as well as creating new weights such as thin and hairline.

via monotype.com and itsnicethat.com

The Amazon Dash Button

Amazon’s branded Dash Buttons were introduced in March 2015, allowing products to be easily re-ordered with a single click of the battery-powered device — not to be confused with the unbranded UK AmazonFresh version (which works like a miniature version of the popular hands-free Amazon Echo).

As an inexpensive (US$4.99) Wi-Fi-enabled IoT device, they started to be re-purposed within three months of launch. There are a handful of approaches, from fairly non-technical ARP probe detection through to bare-metal reprogramming. Amazon themselves are also reaching out to developers and smaller brands with their Dash Replenishment Service.

Getting started seems pretty simple — when you get a Dash button, Amazon gives you a list of setup instructions to get going. Follow them, but don’t complete the final step: do not select a product, and just exit the app.

Most techniques use something like IFTTT to connect the button event to an IoT trigger of your choosing. Instructables has a great step-by-step tutorial, and there’s some great open-source code available on GitHub.
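To give a flavour of the ARP-probe approach mentioned above, here is a minimal Python sketch using scapy. The MAC address is a hypothetical placeholder (substitute whatever address your router reports for the button), and sniffing generally needs root privileges:

```python
# Minimal sketch: detect a Dash button press by watching for the
# ARP probe it broadcasts when it wakes and joins the network.
# Requires scapy (pip install scapy) and root privileges.
from scapy.all import sniff, ARP

BUTTON_MAC = "74:75:48:xx:xx:xx"  # hypothetical placeholder MAC

def on_packet(pkt):
    # op == 1 is an ARP "who-has" request; the button sends one
    # each time it is pressed and rejoins the Wi-Fi network.
    if pkt.haslayer(ARP) and pkt[ARP].op == 1:
        if pkt[ARP].hwsrc.lower() == BUTTON_MAC:
            print("Dash button pressed - fire your trigger here")

sniff(prn=on_packet, filter="arp", store=0)
```

Because the button only joins the network when pressed, that wake-up probe makes a reliable (if slightly laggy) press event — and no modification of the hardware is needed.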

Amazon Dash Button (Tide) on washing machine
The Dash Button as it is usually used — to order more Amazon products (such as washing powder).

The detailed specs:

  • The CPU is an STM32F205RG6 processor — an ARM Cortex-M3 that can run at up to 120 MHz, with 128 KB of RAM and 1 MB of flash memory for program storage
  • The Wi-Fi module is a BCM943362, which in combination with the CPU makes it a platform for Broadcom’s WICED SDK
  • There’s a 16 Mbit SPI flash ROM, typically used in conjunction with the WICED SDK for storing application data
  • An ADMP441 microphone is connected to the CPU and used by the Dash iOS application to configure the device using the speaker on a phone/tablet
  • There’s a single RGB LED and a button

Quite powerful for US$5.

However, the next step in this evolution has just been released — the AWS IoT Button.

The AWS IoT Button is a programmable button based on the Amazon Dash Button hardware. This simple Wi-Fi device is easy to configure and designed for developers to get started with AWS IoT, AWS Lambda, Amazon DynamoDB, Amazon SNS, and many other Amazon Web Services without writing device-specific code.

Targeted at developers, this US$20 version connects to the web using the Amazon Web Services Lambda platform without writing a line of code (ok, so not developers then). However, even the “Hello World” example described here seems quite technical — in some ways, even more so than hacking the original (and at four times the cost). It does offer three types of button push, though — short, long and double — for richer interactions.
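For a sense of what that “Hello World” level looks like in practice, here is a hedged sketch of a Lambda handler in Python. The click event the button publishes (serialNumber, batteryVoltage and clickType, with clickType being SINGLE, DOUBLE or LONG) follows Amazon’s documented format, but the SNS topic ARN is a hypothetical placeholder:

```python
# Minimal sketch of a Lambda handler for the AWS IoT Button.
# The SNS topic ARN below is a hypothetical placeholder.
import boto3

sns = boto3.client("sns")
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:button-presses"  # placeholder

def lambda_handler(event, context):
    # The button reports SINGLE, DOUBLE or LONG in clickType.
    click = event.get("clickType", "SINGLE")
    message = {
        "SINGLE": "Button pressed once",
        "DOUBLE": "Button pressed twice",
        "LONG": "Button held down",
    }.get(click, "Unknown press type")
    # Fan the press out to anything subscribed to the topic
    # (email, SMS, another Lambda, etc.).
    sns.publish(TopicArn=TOPIC_ARN, Message=message)
    return {"status": "ok", "clickType": click}
```

Mapping the three press types to different messages is the simplest way to get three distinct triggers out of a single physical button.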

AWS IoT enables Internet-connected things to connect to the AWS cloud and lets applications in the cloud interact with Internet-connected things. Common IoT applications either collect and process telemetry from devices or enable users to control a device remotely.

Masters of Videomontage

Some of the most fascinating video animators I’ve ever seen — Cyriak (Brighton, UK), Fernando Livschitz (Buenos Aires, Argentina) and Till Nowak (Hamburg, Germany). Using found footage and masks, they create a surreal and often disturbing view of reality.

As mentioned in the ‘Heroes of Animation’ film, Cyriak sees this style as a natural evolution of the Terry Gilliam school — taking photographic elements and moving them in unexpected ways. I would go further and say that it takes Russian Constructivist fine-art photomontage to a natural conclusion.

We are so used to amateur camcorder and mobile video these days that this approach seems to even transcend animation — and we are drawn into their world. So much so that The Institute for Centrifugal Research seems (remotely) plausible.

And here’s how it’s done.

This profile of Cyriak includes a history of his work and a demonstration of his process. This behind-the-scenes video from The Centrifuge Brain Project shows the CGI overlaid on the source footage, and this After Effects tutorial explains the basics, using a locked-off camera (you can then add natural camera movement afterwards).