On-site data management at festivals: what it is, why it matters

18 June 2024 · 3 min read

Dozens of photographers, hours of video to ingest, social teams posting live. Here is how we keep a major festival's media pipeline running without losing a single frame.

Of all the work we do, on-site data management at festivals is the one with no take-backs. When 60,000 people dance in a field, there is no second chance for the content. The pipeline handling photos and video must run all three days, at 4 a.m. as well as at noon, under rain and through power flickers.

The real-world scenario

A 100,000+ attendee festival with 30 accredited photographers and 10 videographers typically generates 3-5 TB of material per day. There are at least three customers for that material: the social team (must post live), the artists (want their content right after the show), and the main archive (for the aftermovie and sponsors).

Without a pipeline you get the usual: lost SD cards, files dumped on a single drive that fails, the social team waiting hours for assets, artists pinging management for their photos.

The architecture we use

Ingest stations

Dedicated Linux workstations with USB-C/SD/CFexpress readers, one per operator group. Every card is copied with checksum verification (xxhash or blake3) to two destinations in parallel: primary NAS + operator's local DAS.
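A minimal sketch of that ingest step. It uses the stdlib `blake2b` as a stand-in for the xxhash/blake3 hashing the stations actually run (both need third-party packages); the function names and chunk size here are illustrative, not our production script:

```python
import hashlib
import shutil
from pathlib import Path

CHUNK = 4 * 1024 * 1024  # read in 4 MiB chunks

def file_digest(path: Path) -> str:
    """Hash a file; stdlib stand-in for the xxhash/blake3 used on the stations."""
    h = hashlib.blake2b()
    with path.open("rb") as f:
        while chunk := f.read(CHUNK):
            h.update(chunk)
    return h.hexdigest()

def verified_copy(src: Path, destinations: list[Path]) -> str:
    """Copy one file to every destination (e.g. NAS + local DAS), then
    re-read each copy and compare digests. Raises on any mismatch."""
    src_digest = file_digest(src)
    for dest_dir in destinations:
        dest_dir.mkdir(parents=True, exist_ok=True)
        dest = dest_dir / src.name
        shutil.copy2(src, dest)
        if file_digest(dest) != src_digest:
            raise IOError(f"checksum mismatch on {dest}")
    return src_digest
```

The key design point is re-reading the copy from disk before the card is cleared: a copy that merely returned without error proves nothing about what actually landed on the platters.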

Storage tiers

  • DAS on workstations for fast editing (Thunderbolt, NVMe).
  • Synology or Ugreen NAS in RAID 6 as working storage, reachable across backstage via 10 GbE.
  • Central SAN for editing projects and shared rendering.
  • Nightly backup to an offsite location (datacentre or client server).

Network

10 GbE backbone on Ubiquiti UniFi, dedicated VLANs for ingest, editing, social, guests. No Wi-Fi for heavy flows — cable only. Redundant switches at critical nodes.

Power

Every critical node is on an Eaton UPS with enough runtime for an orderly shutdown. A sudden mains drop on a NAS mid-write can corrupt a volume.
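"Enough runtime" is a number you can compute in advance. A back-of-envelope sketch, with every figure illustrative rather than taken from a real rig:

```python
def runtime_minutes(ups_wh: float, load_w: float, efficiency: float = 0.9) -> float:
    """Rough UPS runtime estimate: battery energy (Wh) times an assumed
    inverter efficiency, divided by the load (W), in minutes."""
    return ups_wh * efficiency / load_w * 60

# A 1 kWh unit feeding a 300 W NAS:
# runtime_minutes(1000, 300) -> 180.0 minutes
```

Three hours is far more than an orderly shutdown needs; the point is to do the arithmetic before the show, not during a mains drop.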

Three moments where things break

1. The first hour

The first wave of cards lands together. If ingest stations are not pre-configured and operators lack a clear protocol, a queue forms. Counter-measures: device labelling, auto-copy scripts on card insert, pipeline dry runs days in advance.
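In production the auto-copy is typically wired to udev rules or the desktop automounter; as an illustration of the idea, a polling watcher along these lines would do the same job (the media root path and the handler hook are assumptions, not our actual setup):

```python
import time
from pathlib import Path

MEDIA_ROOT = Path("/media/ingest")  # where the OS auto-mounts cards (assumption)

def new_mounts(current: set[Path], seen: set[Path]) -> set[Path]:
    """Return mount points that appeared since the last poll."""
    return current - seen

def watch(poll_seconds: float = 1.0, handle=print) -> None:
    """Poll the media root; hand every newly inserted card to `handle`.
    A real station would kick off the checksummed dual copy here."""
    seen: set[Path] = set()
    while True:
        current = {p for p in MEDIA_ROOT.iterdir() if p.is_dir()}
        for card in sorted(new_mounts(current, seen)):
            handle(card)
        seen = current
        time.sleep(poll_seconds)
```

Whatever the trigger mechanism, the operator should only have to insert the card and watch for a green light; anything that requires clicking through dialogs will queue up in the first hour.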

2. The social posting peak

Between main acts the social team pushes a huge volume of content. The NAS must sustain concurrent reads from 4-6 editors plus the social manager exporting. Undersized networking creates a silent queue.
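The arithmetic behind that silent queue is worth doing up front. A sketch with illustrative numbers (per-editor read rates vary wildly with codec and scrubbing behaviour):

```python
def link_saturation(readers: int, mb_per_reader: float,
                    link_gbit: float = 10.0, efficiency: float = 0.8) -> float:
    """Fraction of usable link capacity consumed by concurrent reads.
    Numbers are illustrative, not measured on any real festival network."""
    usable_mb_s = link_gbit * 1000 / 8 * efficiency  # Gbit/s -> MB/s, minus overhead
    return readers * mb_per_reader / usable_mb_s

# 6 editors scrubbing ~150 MB/s each on a 10 GbE link:
# 6 * 150 = 900 MB/s against ~1000 MB/s usable -> 0.9, already near saturation.
```

At 90% utilisation there is no headroom left for the social manager's export, which is exactly when the "NAS is slow" complaints start.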

3. The festival's tail

When everything is winding down, attention drops. That is when things break. The final backup and archive integrity check must be part of the plan, not a "we'll deal with it on Monday".
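That integrity check can be as simple as a manifest of digests written at ingest time and re-verified before teardown. A stdlib sketch; a real pipeline would use faster hashes and stream large files rather than reading them whole:

```python
import hashlib
import json
from pathlib import Path

def build_manifest(root: Path) -> dict[str, str]:
    """Map each file's path (relative to the archive root) to its digest."""
    manifest = {}
    for path in sorted(root.rglob("*")):
        if path.is_file():
            digest = hashlib.blake2b(path.read_bytes()).hexdigest()
            manifest[str(path.relative_to(root))] = digest
    return manifest

def verify_archive(root: Path, manifest: dict[str, str]) -> list[str]:
    """Return the relative paths that are missing or whose digest changed."""
    bad = []
    for rel, expected in manifest.items():
        f = root / rel
        if not f.is_file() or hashlib.blake2b(f.read_bytes()).hexdigest() != expected:
            bad.append(rel)
    return bad
```

Dump the manifest to JSON next to the archive (`json.dump`) and the Monday-morning check becomes a one-liner instead of a forensic exercise.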

What we take home each year

Every edition we run a retrospective: what worked, what did not, where we lost time. The pipeline we use today is the result of six seasons of iteration. It is not perfect — it never will be. But it is solid enough that when the director asks for the aftermovie, we already know where to look.