The Cambridge Public Art Project by James Coupe

Just another custom text-to-FCP-sequence application. It generates normal or split-screen sequences based on a simple formatted text file.

The project is a public art installation by James Coupe for Cambridge (UK), opening at the end of April 2007.
It involves distributing 10 cameras at fixed positions around the city, each running basic motion detection and recording everything they see. Once a day, hard drives are removed from the cameras and taken to another location, where the footage is run through a Linux cluster that evaluates each frame against a series of 'behavior templates', looking for specific activities that may have occurred in the city that day. When a behavior correlation is discovered, its timecode is stored in a database, so that by the end of the evaluation there is a series of timecodes corresponding to clips of video. These clips are then put back together in a specific narrative sequence to build a movie that tells a story.
The story also includes 'text headlines', which are generated by the same system.

In conjunction with the developers of the above system, I devised a simple scheme for storing the events in a text file: each event carries its kind (text or movie), an optional panel position within an optional split screen, fades, file location, start TC of the file, event start, and event duration.
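
Since the screenshots of the actual file are not reproduced here, the following is a purely hypothetical example of what such a file could look like - field order, delimiter and paths are invented for illustration only:

  # kind  panel  fade  file                          file TC      start        duration
  m       -      1     /Volumes/raid/cam02_0417.mov  01:02:10:00  00:00:00:00  00:00:08:00
  m       1      1     /Volumes/raid/cam07_0417.mov  02:15:00:05  00:00:08:00  00:00:06:00
  m       2      0     /Volumes/raid/cam09_0417.mov  00:44:31:12  00:00:08:00  00:00:06:00
  t       -      -     -                             -            00:00:14:00  00:00:00:00
  t       -      -     -                             -            00:00:20:00  00:00:02:00

A 'm' line is a movie event; the two 't' lines correspond to the special entries discussed below - one with a duration of 0 (a plain "start something new" marker) and one with a duration greater than 0 (a gap).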

This text is processed by the small app I developed to build an FCP sequence XML. It works somewhat as if you dropped a selection of In/Out-edited clips from the Browser into a timeline. The big difference is that the app distributes the clips to different tracks, controls the timing relation of the tracks, and applies motion settings (scale, position and distort) according to each clip's "split screen identifier".
The sequence In/Out points are controlled by the "master" V1 track and the current "split panel", so no overlapping video content will occur in the wrong place.
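
To make the mechanics a bit more concrete, here is a minimal sketch in Python of how a generator of this kind could emit FCP's xmeml. The overall element structure follows the FCP XML Interchange Format, but the event tuple layout, the helper itself and the exact xmeml version are assumptions for illustration - this is not the real app's code:

  # Sketch: emit a minimal FCP "xmeml" sequence from already laid-out events.
  # Each event: (track, pathurl, in_f, out_f, start_f, motion), frames as ints,
  # motion either None or a (scale_percent, center_x, center_y) tuple.
  from xml.etree.ElementTree import Element, SubElement, tostring

  def build_sequence(events, num_tracks, fps=25):
      xmeml = Element("xmeml", version="2")  # version depends on the FCP release
      seq = SubElement(xmeml, "sequence")
      SubElement(seq, "name").text = "generated sequence"
      SubElement(SubElement(seq, "rate"), "timebase").text = str(fps)
      video = SubElement(SubElement(seq, "media"), "video")
      tracks = [SubElement(video, "track") for _ in range(num_tracks)]

      for track_no, pathurl, in_f, out_f, start_f, motion in events:
          clip = SubElement(tracks[track_no], "clipitem")
          SubElement(clip, "name").text = pathurl.rsplit("/", 1)[-1]
          SubElement(clip, "in").text = str(in_f)
          SubElement(clip, "out").text = str(out_f)
          SubElement(clip, "start").text = str(start_f)
          SubElement(clip, "end").text = str(start_f + out_f - in_f)
          SubElement(SubElement(clip, "file"), "pathurl").text = pathurl
          if motion:
              # Basic Motion carries the scale/position that squeezes the
              # clip into its split-screen panel.
              scale, cx, cy = motion
              eff = SubElement(SubElement(clip, "filter"), "effect")
              SubElement(eff, "name").text = "Basic Motion"
              SubElement(eff, "effectid").text = "basic"
              p = SubElement(eff, "parameter")
              SubElement(p, "parameterid").text = "scale"
              SubElement(p, "value").text = str(scale)
              p = SubElement(eff, "parameter")
              SubElement(p, "parameterid").text = "center"
              val = SubElement(p, "value")
              SubElement(val, "horiz").text = str(cx)
              SubElement(val, "vert").text = str(cy)
      return tostring(xmeml, encoding="unicode")

The resulting string can be saved as an .xml file and imported into FCP via File > Import > XML.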

To make this clearer, here are some screenshots.



The first three events don't have a "panel identifier" and will be placed on V1. The following 13 do, and will be distributed to four tracks, scaled and positioned. The "t" event at line 17 has neither a "panel identifier" nor a file path nor a duration - this tells the app to start something new. With this particular source text the entry is actually redundant: the number of panels needed changes with the next event anyway, so all timeline "Starts" would be recalculated regardless.
The "t" entry at line 36 looks similar to the one at line 17, but it contains a duration greater than 0 - this creates a "space" (gap) in the timeline.
(Note: the above text file entries are just dummies and don't make much sense.)
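
In other words, a "t" entry closes the current block of events, and a duration greater than 0 pushes the next block later by that amount. A hypothetical sketch of such a recalculation (the real app's logic is not shown in this text):

  # blocks: list of (events, gap_frames); each event is a dict whose "start"
  # and "duration" are frames relative to its block. Hypothetical logic only.
  def recalc_starts(blocks):
      placed, cursor = [], 0
      for events, gap in blocks:
          block_len = 0
          for ev in events:
              placed.append({**ev, "start": cursor + ev["start"]})
              block_len = max(block_len, ev["start"] + ev["duration"])
          cursor += block_len + gap  # gap > 0 leaves empty space in the timeline
      return placed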

Below is a screenshot of the resulting sequence in FCP. The current implementation handles up to 16 panels for a split screen.




Even a massive number of entries is handled very fast. The 1300 clips shown below were processed within 30 seconds. This includes applying filters for scale, position and the correct aspect ratio (the source clips are 640 x 480 NTSC, the sequence is DV50 PAL).
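
Those scale and position values follow directly from the panel grid. Here is a sketch of how the geometry for an up-to-16-panel split could be derived; the coordinate convention (frame center at (0, 0), frame edges at +/-0.5) is an assumption to verify against a sequence exported from FCP itself:

  import math

  def panel_geometry(panel, num_panels):
      # Return (scale percent, center x, center y) for one panel of a
      # square 2x2, 3x3 or 4x4 grid; panels are numbered row by row.
      side = math.ceil(math.sqrt(num_panels))
      if side > 4:
          raise ValueError("only up to 16 panels are supported")
      row, col = divmod(panel, side)
      scale = 100.0 / side               # e.g. 50% for a 2x2 split
      step = 1.0 / side                  # panel width/height in frame units
      cx = -0.5 + step / 2 + col * step  # horizontal panel center
      cy = -0.5 + step / 2 + row * step  # vertical panel center
      return scale, cx, cy

For example, panel_geometry(0, 4) yields (50.0, -0.25, -0.25): scale the clip to 50% and move it into the upper-left quadrant.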

System Requirements :-)
  • Apple PowerMac G4 (faster = better)
  • 256 MB RAM (more = better)
  • Mac OS X 10.4 or higher
  • A Linux Cluster Setup
  • Some custom software
  • FCP 4.0 or higher
For further information contact:

Andreas Kiel
Spherico
Nelkenstr. 25, D-76135 Karlsruhe
eMail: kiel@spherico.com