Category Archives: Video

"Eclipse" music video

What does the future hold for storytelling — can a machine create cinema?

The release of the trailer for Morgan provides further evidence that Artificial Intelligence (A.I.) is creeping slowly onto the creative stage. After IBM’s Deep Blue beat World Chess Champion Garry Kasparov in 1997, IBM’s Watson beat human champions at Jeopardy!, and more recently Google’s DeepMind conquered the ancient Chinese game of Go. Google Photos generates videos (with music) automatically from images on my phone. It’s becoming obvious that deep strategic thinking is at least possible using machines.

So, can a machine create a video narrative? Could we tell the difference?

The unfortunate fact is, of course, that the Morgan trailer is hollow and poorly-paced (even with the help of an “IBM filmmaker”), and the musicians behind the AI-directed “Eclipse” music video have distanced themselves from the end product.

Looking deeper into each of these projects, it’s clear they still required a human hand to direct, collate, and guide the machine. This is a ground-up approach to AI — there’s no “Watson Video Editing Software” on the market.

However, the building blocks have already been created — the Google search engine uses natural-language processing, and Wolfram Alpha accepts commands in basic English. We now have (pretty good) automatic web summaries and headline analysers. There’s a reason why Google’s Principal Filmmaker Jessica Brillhart thinks Zork’s language processing will heavily influence the future of VR.

It seems that although we probably have a while to go before creativity is realistically threatened in any way, most people won’t care whether something has been created by a computer. For example, many of the in-house promotions we currently see on TV channels are packaged in a way that wouldn’t require human intervention — so perhaps it might not be that long after all (for specific situations).

So a machine can assemble a video (I even hesitate to use the words ‘edit’ or ‘direct’). But not very well — at least not yet. However, as linguistics expert Noam Chomsky said, perhaps we are even asking the wrong question:

“Thinking is a human feature. Will AI someday really think? That’s like asking if submarines swim. If you call it swimming then robots will think, yes.”


The Rise of the Diegetic Intertitle

Prior to the integration of sound, movies often displayed text-based information using title cards or intertitles. This form of communication is non-diegetic, as the characters cannot see it.

However, even after the invention of the talkie, other types of information needed to be displayed — such as translations for a foreign audience. This was usually done in a very perfunctory way, with nondescript text (typically set in a font such as Times New Roman, with a black outline to contrast with any background) on the lower third of the screen. This text is external to the story, so it seemed natural that it should be stylistically different.

In more recent years, the demands on visual storytelling have increased — small-screen devices (e.g. mobiles) and computers have started to become part of the language of cinema (and, by extension, of the video and digital screens on which they are ‘projected’). Additionally, in today’s more multicultural world, the requirement to show multiple languages within the same film means that different typographic techniques can be used to enhance this aspect of the story. In fact, this is an extension of traditional subtitles — where sound effects and untranslated languages are often still included.

As mobile and internet technology started to appear on screen, an editor would typically cut to a shot of the device — allowing the viewer to read the display. As post-production technology improved, TV’s requirement for faster plot exposition, together with product placement costs (and legal clearances) demanding a more generic approach, meant this eventually evolved into showing the interface directly incorporated onto the visual frame.

Subtitles, captions and interface design typically sit independently on top of the content as a layer added in post-production — i.e. as a semi-transparent wall between the story and the viewer. Integrating these titles to make them appear part of the content can be quite a technical challenge — especially when they need to be tracked to a moving camera.

This overlaying technique was demonstrated in movies such as Man On Fire (2004), Stranger Than Fiction (2006), Disconnect (2012), and 2014’s The Fault In Our Stars, John Wick, and Non-Stop, as well as TV shows such as (perhaps most influentially) Sherlock (2010) and House of Cards in 2013.

There are two main types of elements in modern cinema: diegetic — anything that the characters would recognise happening within their world (of the narrative story) — and non-diegetic — anything that happens outside the story (for example, this would usually be opening credit sequences).

However, (much like modern media itself) on-screen typography has surpassed merely being integrated visually into the background plate. It is now becoming increasingly self-reflexive, and blurs these diegetic lines. This is often referred to as “breaking the fourth wall”, and is perhaps best demonstrated in the opening titles to the 2016 film Deadpool, where even the actual names of producers are subverted into narrative elements.

For more exegesis of the diegesis (sorry, I couldn’t help it), see Tim Carmody’s excellent 2011 SVA Interaction Design presentation “The Dictatorial Perpendicular: Walter Benjamin’s Reading Revolution”.

“A fourth wall break inside a fourth wall break? That’s like, sixteen walls.” — Deadpool

Masters of Videomontage

Cyriak (Brighton, UK), Fernando Livschitz (Buenos Aires, Argentina) and Till Nowak (Hamburg, Germany) are some of the most fascinating video animators I’ve ever seen. Using found footage and masks, they create a surreal and often disturbing view of reality.

As mentioned in the ‘Heroes of Animation’ film, Cyriak sees this style as a natural evolution of the Terry Gilliam school — taking photographic elements and moving them in unexpected ways. I would go further and say that it takes Russian Constructivist fine-art photomontage to a natural conclusion.

We are so used to amateur camcorder and mobile video these days that this approach seems to even transcend animation — and we are drawn into their world. So much so that The Institute for Centrifugal Research seems to be (remotely) plausible.

And here’s how it’s done.

This profile of Cyriak includes a history of his work, and a demonstration of his process. This behind-the-scenes video from The Centrifuge Brain Project shows the CGI overlaid over the source footage, and this After Effects tutorial explains the basics, using a locked-off camera (then you can add natural camera movement afterwards).

The Others - Hiroshi Kondo


Hiroshi Kondo captures the energy and the loneliness of living in such a vast metropolis in his experimental short, The Others. The slit-scanning film bends time and place into a moving portrait of a Tokyo square by highlighting the individual and the crowd moving both separately and in haunting unison. The overall product is something between glitch art and augmented reality.
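Slit-scanning has a simple core: each column of the output image is sampled from a different source frame, so a single picture contains many moments and motion smears across space. A minimal sketch of that idea (plain nested lists stand in for video frames; this is an illustration of the general technique, not Kondo’s actual pipeline):

```python
def slit_scan(frames, width):
    """Build one output image whose column x is taken from frame x.

    `frames` is a list of 2-D images (lists of rows); the frame index is
    clamped so short clips still fill the whole output width.
    """
    height = len(frames[0])
    out = [[None] * width for _ in range(height)]
    for x in range(width):
        src = frames[min(x, len(frames) - 1)]  # column x <- moment x
        for y in range(height):
            out[y][x] = src[y][x]
    return out
```

Sliding which frame each column samples from (or scanning rows instead of columns) gives the familiar stretched, rippling figures seen in the film.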

You can see more of his work at his website below.


Bob Dylan — Like A Rolling Stone

This interactive video (from 2013) is the song’s first official video. It allows viewers to use their keyboards or cursors to flip through 16 channels that mimic TV formats such as game shows, shopping networks and reality series. People on each channel, no matter what TV trope they represent, are seen lip-syncing the lyrics.

“I’m using the medium of television to look back right at us,” director Vania Heymann told Mashable. “You’re flipping yourself to death with switching channels [in real life].” Adds Interlude CEO Yoni Bloch: “You’ll always miss something because you can’t watch everything at the same time.”

The stations you can flip through include a cooking show, The Price Is Right, Pawn Stars, local news, a tennis match, a children’s cartoon, BBC News and a live video of Dylan and the Hawks playing “Like a Rolling Stone” in 1966.
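The effect Bloch describes falls out of one design decision: all 16 channels run against a single master clock, and flipping changes only which stream you see, never the playhead — so you always miss what aired elsewhere. A minimal sketch of that idea (the class and names are illustrative, not Interlude’s actual player):

```python
import time

class ChannelFlipper:
    """All channels share one master timeline; flipping never rewinds."""

    def __init__(self, n_channels):
        self.n_channels = n_channels
        self.current = 0                  # channel being watched
        self._start = time.monotonic()    # master clock starts once

    def flip(self, channel):
        """Switch channels without touching the playhead."""
        self.current = channel % self.n_channels

    def position(self):
        """Seconds into the song — identical on every channel."""
        return time.monotonic() - self._start
```

Because `position()` never depends on `current`, everyone on every channel stays lip-synced to the same point in the track.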



Eric Prydz — EPIC 4.0 Tour Visuals

Coming off the very successful campaign for Eric Prydz’s Generate music video, our friend Michael Sershall hired the team back to design the visuals for his EPIC 4.0 tour. The setup for the live show was fairly insane, with content screens forming a cube: a 28mm see-through LED in front, a Holo Gauze through the middle for a mesmerizing hologram projection, and finally a 12mm 4:1 widescreen LED in the back enclosing the cube and playing back the key content.

For the gig, Munkowitz tapped his favorite collaborators, the great Conor Grebel and Michael Rigley, both ridiculously talented Cinema 4D artists and animators, who brought their A-Game for this throwdown. All the content was rendered with the amazing Octane renderer, which meant the team bought two supercomputers and a fuckload of graphics cards to render all the wetness. In the end, the project was about making art for entertainment, and these kinds of paying gigs are what we love.


Werner Herzog Talks Virtual Reality

“I am convinced that this is not going to be an extension of cinema or 3-D cinema or video games. It is something new, different, and not experienced yet,” the filmmaker Werner Herzog said of virtual reality. An interview by Patrick House with the filmmaker about simulation and experience.


Wanderers — a short film by Erik Wernquist

The film is a vision of humanity’s future expansion into the Solar System. Although admittedly speculative, the visuals in the film are all based on scientific ideas and concepts of what our future in space might look like, if it ever happens. All the locations depicted in the film are digital recreations of actual places in the Solar System, built from real photos and map data where available.


Wireless DMX Lighting Control Using Arduino and Vixen

A step-by-step tutorial on how to control and sequence wireless lighting effects — either for installations, displays, or wearable designs. It’s based on the Arduino board (or Freaklabs’ Fredboard) using Vixen software (v3). Everything you need — from scratch right through to code and working examples.
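On the receiving end, a setup like this needs framing logic: Vixen’s Generic Serial output can be configured to put a text header in front of the raw channel bytes, and the microcontroller scans for that header to stay in sync. The sketch below illustrates that parsing step — the “VIX” marker and three-channel frame are assumptions for illustration, not a fixed Vixen protocol, and it’s written in plain Python rather than Arduino C to keep it self-contained:

```python
# Assumed frame format (configurable in Vixen's Generic Serial controller):
# a literal header, then one byte per channel.
HEADER = b"VIX"
NUM_CHANNELS = 3  # e.g. one RGB fixture

def parse_frames(stream):
    """Return lists of channel values found in a framed byte stream.

    Scanning for the header after each frame lets the receiver resync
    if bytes are dropped on the wireless link.
    """
    frames = []
    i = 0
    while True:
        j = stream.find(HEADER, i)
        if j < 0 or j + len(HEADER) + NUM_CHANNELS > len(stream):
            break  # no more complete frames
        start = j + len(HEADER)
        frames.append(list(stream[start:start + NUM_CHANNELS]))
        i = start + NUM_CHANNELS
    return frames
```

Each parsed frame would then be written out to the LEDs or DMX driver, one value per channel.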


A 3D Fractal Artist Is Building an ‘Interstellar’ Inspired VR World

Filmmaker Julius Horsthuis, the frequent explorer of fractalized caverns and endless alien planets, has begun a line of computer-generated experiments that could let us explore our own Interstellar-like multidimensional realities. His impressive series of sweeping fractal vistas, beginning with Geiger’s Nightmare nearly a year ago, has given him a wealth of knowledge about making gorgeous fractals. Now, he has channeled that experience into building Hallway 360VR, the first in a line of 360-degree virtual reality animations.


Lightpainting with Pixelsticks

Pixelstick consists of 200 full color RGB LEDs inside a lightweight aluminum housing. The mounted controller reads images from an SD card and displays them, one vertical line at a time, on the LEDs. Each LED corresponds to a pixel in the image.
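That read-and-display loop reduces to slicing an image into vertical lines. A minimal sketch (a nested list stands in for the file read off the SD card; on the real stick each element of a column would drive one of the 200 LEDs):

```python
def columns(image):
    """Yield the image one vertical line at a time, left to right.

    `image` is rows of pixel values; each yielded column maps
    one pixel per LED, top to bottom.
    """
    height, width = len(image), len(image[0])
    for x in range(width):
        yield [image[y][x] for y in range(height)]
```

As the stick is walked through a long exposure, each successive column is lit in a new position, and the camera accumulates them back into the full picture.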


Cymatics — Nigel Stanford

Cymatics is the first single from Nigel Stanford’s new album Solar Echoes. It was shot in 6K resolution on Red Dragon cameras, and finished in 4K / Ultra HD.

The team went through months of research, testing, and development to make sure the experiments (including a Chladni plate, speaker dish, hose pipe, ferrofluid, Rubens tube, and Tesla coil) looked great in the final film.

Cymatics is the study of visible sound and vibration, a subset of modal phenomena. Typically the surface of a plate, diaphragm, or membrane is vibrated, and regions of maximum and minimum displacement are made visible in a thin coating of particles, paste, or liquid. Different patterns emerge in the excitatory medium depending on the geometry of the plate and the driving frequency.
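For an ideal square plate that frequency dependence can be made concrete: a textbook standing-wave approximation places the nodal lines (where the particles collect) at the zeros of a two-mode interference term. This is the idealized model, not a simulation of the plates used in the film:

```python
import math

def chladni(x, y, n, m):
    """Standing-wave amplitude at (x, y) on an ideal unit square plate.

    Particles gather where the value is ~0 (the nodal lines); higher
    mode numbers (n, m) correspond to higher driving frequencies, which
    is why the patterns grow more intricate as the pitch rises.
    """
    return (math.cos(n * math.pi * x) * math.cos(m * math.pi * y)
            - math.cos(m * math.pi * x) * math.cos(n * math.pi * y))
```

Evaluating this over a grid and marking the near-zero points reproduces the classic Chladni figures for each (n, m) pair.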


3D Projection (without a screen)

A team of researchers in Japan led by Akira Asano have developed the technology they call ‘Aerial Burton.’ The device works by firing a 1kHz infrared pulsed laser into a 3D scanner, which focuses and reflects the laser to a specific point in the air. The air molecules at that focal point ionize, releasing energy in the form of photons.


Why good leaders make you feel safe

What makes a great leader? Management theorist Simon Sinek suggests it’s someone who makes their employees feel secure, who draws staffers into a circle of trust. But creating trust and safety — especially in an uneven economy — means taking on big responsibility.


Isaac Delusion’s “Pandora’s Box”


Created by Studio Clée — comprised of Alizée Ayrault and Claire Dubosc — with additional contributions from Romain Avalle, the project includes 600 videos and 1500 possible slots, yielding a seemingly infinite number of combinations. After 20 seconds, you get the option to share the unique video with friends, where they can either watch your personalized video, or experience a fresh, reincarnated one. All get adorned with eye-catching vintage stock footage — cuts from North by Northwest and other 20th-century gems — taken from the Prelinger Archives, a public domain database created in 1983.

View the project here


Brain decoding: Reading minds

Neuroscientists are starting to decipher what a person is seeing, remembering and even dreaming just by looking at their brain activity. They call it brain decoding.
