The Digital Cowboy 

Network  

From the beginning, dcinema has been very network-centric. All the brands of players and projectors had Ethernet ports for remote access, software and firmware upgrades, and even local control. Prior to DCI's efforts, the servers were very proprietary, with no interoperability between them. That said, the 100 Mbps Ethernet connection (later 1,000 Mbps) was a convenient way to move content between like machines as well as to control them. However, the Ethernet implementations were quite different from the classic routed, PC-based local area network. Command-line or proprietary user-interface software was common at the beginning of this decade. Network specialists familiar with corporate networking found the digital cinema solutions difficult, even archaic, to work with.

Generally, each cinema server manufacturer used a form of FTP to upload or download content to and from their machines. However, the FTP implementations were each unique, and for good reason. Since the content in any brand of server had been encoded using proprietary methods, a company's trade secrets could potentially be reverse engineered if content could be simply and easily copied from a machine. Each manufacturer therefore developed FTP-based software which "converted" the content during the transfer. Content transferred from a machine to a PC could not be "played" in the transferred form, nor easily reverse engineered. Transferring the content into another server of the same brand reversed the conversion, and the content was playable on the second machine.
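
The actual conversion methods were proprietary and never published, but the general idea can be sketched. In the hypothetical Python sketch below, a reversible transform (a simple XOR keystream stands in for whatever each vendor really used, and the key handling is invented for illustration) is applied to every block on export, so the copy that travels over FTP is unplayable; the identical transform on ingest into a like machine restores it:

    # Hypothetical "convert on transfer" sketch; real vendors used proprietary,
    # undisclosed transforms, and the key handling here is purely illustrative.
    import itertools

    FAMILY_KEY = b"example-device-family-key"   # assumed to be shared by servers of one brand

    def convert_stream(src, dst, chunk_size=1 << 20):
        """XOR each block with a repeating key; applying the same transform twice restores the original."""
        key = itertools.cycle(FAMILY_KEY)
        while chunk := src.read(chunk_size):
            dst.write(bytes(b ^ next(key) for b in chunk))

    def export_for_ftp(content_path, transfer_path):
        with open(content_path, "rb") as src, open(transfer_path, "wb") as dst:
            convert_stream(src, dst)              # this obfuscated copy is what travels over FTP

    def ingest_on_like_server(transfer_path, content_path):
        with open(transfer_path, "rb") as src, open(content_path, "wb") as dst:
            convert_stream(src, dst)              # the same transform undoes the obfuscation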

Still, network distribution of digital cinema content was envisioned by all parties from the start. File size was a significant factor. Image compression helped considerably, but a compressed feature could still range in size from 40 to 75 gigabytes. Today, file sizes are much greater, often approaching 200 gigabytes per feature title. In 2001, with fewer than 100 digital cinemas in existence, a feature title was shipped to each location on data tapes or data DVDs. With an industry goal of hundreds of thousands of digital cinemas globally, such a practice was inconceivable. Network distribution, primarily by satellite multicast, was the only practical solution, although the high cost of satellite transponder use was an economic barrier.

TJoy, a Japanese exhibitor, was an early adopter and industry pioneer. In 2002, in cooperation with the Japanese satellite company JSAT, they began experimenting with networked delivery. They surmounted the operating-cost problem by using what today is called "flex-bandwidth": content was transmitted through the satellite using idle bandwidth. If the satellite was contracted for an 80% load during certain times, most of the remaining capacity was allocated to TJoy's delivery to their 30+ sites around the islands of Japan. If the contracted throughput was only 50%, most typically at night, more capacity was allocated to TJoy. JSAT's "flex-bandwidth" approach allowed them to offer TJoy a cost-effective transmission rate while fully utilizing their expensive satellite's capacity.
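
Using assumed figures (JSAT's actual transponder capacity, rates, and file sizes are not public), the flex-bandwidth arithmetic looks roughly like the sketch below; note that because the feed is multicast, a single transmission serves all 30+ sites at once:

    # Back-of-the-envelope flex-bandwidth arithmetic with assumed numbers.
    TRANSPONDER_MBPS = 30.0     # assumed usable transponder throughput
    FEATURE_GB = 50.0           # assumed compressed feature size, circa 2002

    def delivery_hours(contracted_load):
        """Hours to multicast one feature using the capacity left over after contracted traffic."""
        spare_mbps = TRANSPONDER_MBPS * (1.0 - contracted_load)
        return FEATURE_GB * 8 * 1000 / spare_mbps / 3600     # GB -> megabits, seconds -> hours

    for load in (0.8, 0.5):
        print(f"contracted load {load:.0%}: about {delivery_hours(load):.1f} hours per feature")
    # contracted load 80%: about 18.5 hours per feature
    # contracted load 50%: about 7.4 hours per feature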

Technically, the initial efforts were more than disappointing. Attempts to multicast from one cinema player directly to many, using FTP, simply didn't work. The JSAT staff involved were "classic network specialists," unfamiliar with the peculiarities of cinema networking. Schiffman went to Japan to assist. Content was uploaded not directly from a player, but from a networked computer, multicast to networked computers at the cinemas, then transferred into the cinema servers. Reliability improved significantly. Error checking and validation at the cinemas was another issue. As described above, if transmission errors occurred (and they did), the problem wasn't discovered until the material was tested by playing it at the cinema. If the playback wasn't valid, the entire title had to be re-transmitted.

A method of cutting the large cinema master files into smaller ones was developed. The original file was sub-divided into 4-gigabyte sections before being multicast, and reassembled correctly at the receiving end. JSAT developed file validation software, similar to a checksum. When transmission errors occurred, only the affected smaller files were resent.
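
A minimal sketch of that segment-and-verify scheme follows. The 4-gigabyte segment size comes from the text; the use of SHA-256 and the manifest layout are assumptions, since JSAT's validation software was proprietary.

    # Split a master file into fixed-size segments with a checksum manifest, and
    # at the receiving end report only the segments that must be resent.
    import hashlib
    from pathlib import Path

    SEGMENT_BYTES = 4 * 1024**3            # 4 GB segments

    def split_with_manifest(master, out_dir):
        """Write numbered segment files and return a {segment_name: sha256} manifest."""
        manifest = {}
        with open(master, "rb") as src:
            index = 0
            while chunk := src.read(SEGMENT_BYTES):
                name = f"{Path(master).name}.part{index:04d}"
                Path(out_dir, name).write_bytes(chunk)
                manifest[name] = hashlib.sha256(chunk).hexdigest()
                index += 1
        return manifest

    def segments_to_resend(received_dir, manifest):
        """Anything missing or failing its checksum goes back on the resend list."""
        bad = []
        for name, digest in manifest.items():
            part = Path(received_dir, name)
            if not part.exists() or hashlib.sha256(part.read_bytes()).hexdigest() != digest:
                bad.append(name)
        return bad

Once every segment verifies against the manifest, reassembly is simply a matter of concatenating the parts in index order.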

The drawing above depicts the basics of the operational TJoy JSAT delivery network, the first of its kind in the world. The ISDN lines are used for remote access and validation. Today the system is fully automated and still in service.

Since the adoption of the DCI (now SMPTE) specifications, the need for and understanding of dcinema networking has become even more important, not just for distribution but for mastering.

DCI mastering entails the encoding of uncompressed source image frames and audio into a secure, interoperable file set, playable on any current digital cinema player in the world. The uncompressed source material may exceed 1.5 terabytes in size, and a mastering system may be running multiple projects in its workflow. Below is a mastering workflow diagram Schiffman prepared for a consulting project. Not all the necessary file servers are shown here, but the diagram should indicate the need for a coherent, efficient, secure network plan for mastering.
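
The 1.5-terabyte figure follows from simple arithmetic. Assuming typical 2K source parameters (2048 x 1080 frames, three 12-bit colour components, 24 fps, a two-hour feature; exact bit packing and container overhead vary), the image data alone works out to roughly:

    # Rough size of uncompressed 2K image source for a two-hour feature (assumed parameters).
    width, height = 2048, 1080
    bytes_per_pixel = 3 * 12 / 8            # three colour components at 12 bits each
    fps, minutes = 24, 120

    frame_bytes = width * height * bytes_per_pixel                  # about 10 MB per frame
    total_tb = frame_bytes * fps * 60 * minutes / 1000**4
    print(f"~{total_tb:.2f} TB of image data, before audio and overhead")   # ~1.72 TB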

In today's dcinema distribution model, and even more so going forward, network-based distribution plays a much more extensive role. The JSAT example above depicts just one or two digital projection systems per location. The industry vision is the eventual replacement of nearly all film-based distribution with digital, except for a few "Art Houses" catering to the film "purist". As such a process evolves, it would be inefficient to deliver network-based media to individual players located on the projection floor.

A more effective approach would involve network-based delivery to a central cache server, either at a cineplex or even regionally. The diagram below shows an example of such an approach.

 

The cache server could be located at a regional site or at a cineplex. The show control workstation enables the local operators to add local advertising and promos, schedule screenings, and migrate content to any player in the projection booth. In a regional model, the devices on the booth network switches would instead be cache servers at the cineplexes, and a similar network topology would be implemented within each cineplex for distribution to the local players. A regional distribution center would reduce the local scheduling burden and reduce the network traffic load. A multicast "feed" would be targeted at a few regional Network Operations Centers (NOCs), which in turn would support delivery to the cineplexes in their area of responsibility.
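
As a purely illustrative model (the site names and fan-out below are invented), the regional variant can be thought of as a two-level tree: one multicast feed reaches a handful of regional NOCs, each NOC serves the cineplex cache servers in its territory, and each cache feeds its own booth players.

    # Hypothetical two-level distribution tree: regional NOCs -> cineplex caches -> booth players.
    DISTRIBUTION_TREE = {
        "noc-east": {"cineplex-01": ["booth-1", "booth-2", "booth-3"],
                     "cineplex-02": ["booth-1", "booth-2"]},
        "noc-west": {"cineplex-07": ["booth-1", "booth-2", "booth-3", "booth-4"]},
    }

    def delivery_plan(title):
        """One multicast send reaches every NOC; everything below it is local migration."""
        steps = [f"multicast '{title}' to NOCs: {', '.join(DISTRIBUTION_TREE)}"]
        for noc, plexes in DISTRIBUTION_TREE.items():
            for plex, booths in plexes.items():
                steps.append(f"{noc} -> {plex} cache server")
                steps += [f"{plex} cache -> {booth} player" for booth in booths]
        return steps

    for step in delivery_plan("ExampleFeature_FTR"):
        print(step)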

In either case, expert technical support of the entire network, from the first NOC to the most distant player, including remote access and management, would be critical to the reliability of delivery operations. To the author's knowledge, no such complete end-to-end network is in operation today, although some distribution operators such as Microspace Inc. come very close to fulfilling this goal. The "gaps" which still exist are more related to logging, alerts, and back-channel confirmation of events. In other words, Microspace, for example, can deliver media all the way through such a system to the play server, but due to interoperability limitations among all the connected devices, they may not be able to automatically access or retrieve data confirming all the necessary events. Other distribution networks may also offer similarly complete services, but if so, that information is not available to this author.
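
The missing back channel need not be elaborate. A hypothetical confirmation record like the one below (not any operator's actual schema), reported from each cache server or player up to the NOC for events such as ingest, key receipt, and first playback, would close most of that gap:

    # Hypothetical back-channel confirmation event, sent from a cache server or player to the NOC.
    import json
    from datetime import datetime, timezone

    def confirmation_event(site, device, title, event, ok):
        return json.dumps({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "site": site,
            "device": device,
            "title": title,
            "event": event,                  # e.g. "ingest_complete", "checksum_verified", "first_playback"
            "status": "ok" if ok else "error",
        })

    print(confirmation_event("cineplex-01", "booth-2", "ExampleFeature_FTR", "ingest_complete", True))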