In the fall of 2018, Peter Jackson's WWI documentary "They Shall Not Grow Old" received a limited theatrical release before becoming a staple of streaming services. Given access to the newsreel footage of Britain's Imperial War Museum and carte blanche to shape it into any documentary he saw fit to commemorate the centennial of WWI's armistice, Jackson's technical crew did a brilliant job of restoring footage shot during the earliest days of movie newsreels. His team stabilized the images, corrected the speed of miles of hand-cranked film, and colorized the black and white footage to make WWI accessible to a new generation of audiences who lack the patience to muddle through ancient, grainy newsreels. Jackson also used his own collection of WWI-era guns and artillery(!) to create sound effects as period-accurate as possible, and even hired forensic lip readers, who make their day-to-day living studying security camera footage of burglars, to ascertain what was being said by the men in the century-old silent footage his team had restored.
The recent Netflix series "World War II: From the Front Lines," narrated by John Boyega, who played Finn in the recent "Star Wars" trilogy, attempts to do much the same for the next World War. However, taken together with Jackson’s "They Shall Not Grow Old," the two documentaries raise questions about how future generations will look back on the footage of the 20th century, and the authenticity of what they will be seeing – and quite possibly, the lack thereof.
The classic 1970s Thames Television WWII miniseries "The World at War" used the footage of the Imperial War Museum and numerous other stock footage libraries to tell the history of WWII as it had never been explored on television before. However, because film restoration technology at the time was somewhere between non-existent and in its absolute infancy, the black and white newsreel footage "The World at War" used was most assuredly the real thing, not digitally processed and colorized to a fare-thee-well. Because of the role of the battlefield cameraman, the footage was rarely as "in your face" as something shot by Hollywood for a dramatic war movie, but it was believable because it was real.
In contrast, "World War II: From the Front Lines" takes wartime footage that was shot far more competently than that of the previous war and massively overcooks the processing, often to surrealistic ends, with some shots seeming almost psychedelic. Even more than Peter Jackson's reworking of WWI footage, the treatment might make this material more palatable to 21st-century audiences, but at the cost of diluting the original footage that lies somewhere beneath the producers' layers of digital processing.
This trailer gives only a hint of how much processing has been slathered over some of the shots seen during the Netflix miniseries, but it does highlight another issue with the footage. As with Peter Jackson's WWI documentary, "World War II: From the Front Lines" recomposites the original 4:3 footage into the widescreen 16:9 aspect ratio used by most 21st-century HDTV sets, to make the footage that much more appealing to Netflix viewers, with little regard for the fact that 1940s-era audiences never saw it in that screen format:
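To put a rough number on what that reframing costs, here's a back-of-the-envelope sketch; the frame dimensions are illustrative assumptions rather than the actual resolution of the archival scans. Filling a 16:9 screen from a 4:3 source without pillarboxing means cropping away roughly a quarter of every frame:

```python
# Back-of-the-envelope math for cropping a 4:3 frame to fill a 16:9 screen.
# The 1440x1080 frame size is an illustrative assumption, not the actual
# resolution of the archival scans.
src_w, src_h = 1440, 1080            # a 4:3 scan of the original frame

# Keeping the full width, a 16:9 crop can only retain this many lines:
crop_h = src_w * 9 // 16             # 810 lines

lost = 1 - crop_h / src_h            # share of the picture thrown away
print(f"{crop_h} of {src_h} lines survive; {lost:.0%} of each frame is cropped out")
# -> 810 of 1080 lines survive; 25% of each frame is cropped out
```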
Tomorrow Never Knows
If we can't trust film these days, what about sound? The recent remixes of the Beatles' best-selling albums of the 1960s raise the same questions about audio that the aforementioned WWI and WWII documentaries raise about film. Stereo mixes for pop and rock LPs were rare until the late 1960s, as mono record players were far more common than stereo ones, at least until stereophonic FM radio became a staple of the underground rock world. But even beyond going from mono to stereo, remixing a 50-year-old album allows a mix to be spread out in ways that the more primitive mixing desks of 1967 couldn't manage, and allows deeper bass than could be cut into vinyl back in the day without sending needles skipping out of their grooves. However, if taken too far, such remixes run the risk of creating a listening experience that mid-'60s audiences would never have heard.
When remixing "Sgt. Pepper's Lonely Hearts Club Band" for its 50th anniversary in 2017, Giles Martin, the son of legendary Beatles producer George Martin, had access to all of the individual elements the Beatles had created for their classic album. This was because shortly before the Beatles began work on "Sgt. Pepper," management at EMI, their record label, had some inkling of the historic nature of the work the band was creating and had ordered their engineers to make safety copies of each track the band recorded before it was permanently bounced down onto the studio's four-track recorders.
However, no such edict existed when the Beatles recorded their equally groundbreaking "Revolver" the previous year, and that album was mixed from four-track tapes containing bounces of numerous earlier tracks which, until recent technological developments, could not be isolated and recovered as separate elements:
In order to rebuild these recordings, Giles Martin was forced to use the "remix/demix" technology that Peter Jackson's team had advanced for the Beatles' 2021 "Get Back" project, and which Jackson dubbed "MAL" as a triple homage: to HAL, the supercomputer in Stanley Kubrick's epochal 1968 film "2001: A Space Odyssey"; to Mal Evans, the Beatles' legendary road manager and aide-de-camp; and to the phrase "Machine-Assisted Learning," a variation on the now-ubiquitous AI sobriquet.
The "Get Back" miniseries radically reshaped the miles of 16mm footage that director Michael Lindsay-Hogg had shot for "Let It Be," the Beatles' final film, released in 1970, replacing the original's melancholic reaction to the end of the Fab Four with something approaching the fun-filled spirit of their early days, and with an actual happy ending. (The two surviving Beatles and the widows of John Lennon and George Harrison have been putting the kibosh on releasing the original "Let It Be" to DVD or Blu-ray for two decades now.)
Jackson’s team may have perfected demixing and remixing technology, but they didn’t invent it. In his 2023 book, "Abbey Road: The Inside Story of the World's Most Famous Recording Studio," David Hepworth wrote:
James Clarke is a New Zealander who started working at Abbey Road as a software engineer. Because he came from this kind of background he was more prepared than most to regard everything he dealt with as scientific rather than emotional information. To that end he was talking to sound engineers in the canteen at Abbey Road one day in 2009 and asked whether it would be at all possible to take mixed recordings and, so to speak, ‘demix’ them. What he was wondering was whether it would be feasible to take classic records, which had been recorded on two or four tracks and then mixed down into one mono track, and in some way separate them into their constituent parts. They laughed, saying this was the Holy Grail of sound recordings and couldn’t possibly be done. Once a multi-track recording had been mixed down to either a mono or stereo version there was no way of unmixing the paints. The best you could hope is that you could go back to the unmixed tapes, if such things still existed and had survived the hurly burly of corporate takeovers, and that was very unlikely. During the seventies and eighties nobody at record companies had ever thought they might be dealing with heritage assets.
James Clarke was not dissuaded. He reasoned that if an acute human ear could hear separate instruments on a recording it ought to be possible to develop a programme which could similarly separate them. He first applied his work on the proposed re-release of The Beatles At The Hollywood Bowl. This was a live recording that had been made in 1964 when the screaming was at its height. Furthermore, American union restrictions meant that George Martin had no role in deciding what got recorded and how. The result was a recording that was high on excitement but low on musical value. Clarke set to work. By looking at the signal as a spectrogram he was able to visually identify the vocals, the different instruments and the screams. He could see where each fell on the spectrum. Then, by treating the screams as though they were just another instrument, he was able to reduce them in the mix. He was able to put them in the background of the mix, bring the group to the front, and change the picture the sound presented.
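To make that passage concrete, here is a heavily simplified sketch of spectrogram-based separation using the open-source librosa audio library in Python. It is not Clarke's actual algorithm, and the input file and the frequency band standing in for the crowd noise are purely illustrative assumptions:

```python
# A toy illustration of spectrogram-based "demixing": attenuate one region of
# the time-frequency picture (a band standing in here for the crowd screams)
# and resynthesize the audio. This is NOT Clarke's actual method; the filename
# and the 2-4 kHz band are illustrative assumptions.
import numpy as np
import librosa
import soundfile as sf

y, sr = librosa.load("hollywood_bowl_excerpt.wav", sr=None, mono=True)

spec = librosa.stft(y)                         # complex spectrogram
freqs = librosa.fft_frequencies(sr=sr)         # center frequency of each bin

mask = np.ones(spec.shape)
scream_band = (freqs >= 2000) & (freqs <= 4000)
mask[scream_band, :] = 0.2                     # duck, rather than erase, that band

y_ducked = librosa.istft(spec * mask, length=len(y))
sf.write("hollywood_bowl_ducked.wav", y_ducked, sr)
```

Real separation tools work with far more sophisticated models than a fixed frequency mask, but the principle is the same: once the sound is laid out as a picture, individual elements can be turned up, turned down, or pulled out entirely.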
Today, demixing and remixing, with varying degrees of audio quality, are everywhere, from the studio control rooms of major record labels in London, New York, and L.A. to the hobbyist musician's home studio:
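To give a sense of how low the barrier has fallen, a hobbyist can now split a recording into stems with a few lines of Python. The sketch below assumes the freely available, open-source Demucs separator has been installed via pip; the filename is hypothetical:

```python
# Minimal sketch: split a home recording into vocal and instrumental stems
# with the open-source Demucs model. Assumes `pip install demucs`; the input
# filename is hypothetical. Separated stems land under ./separated/ by default.
import subprocess
import sys

subprocess.run(
    [sys.executable, "-m", "demucs", "--two-stems", "vocals", "my_home_demo.wav"],
    check=True,
)
```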
Perhaps with that in mind, last year Giles Martin, Peter Jackson, and the two remaining Beatles didn't just use demixing technology to remix old Beatles albums. They used Jackson's "MAL" technology to extract John Lennon's vocal from one of his late-'70s demo recordings, remove the ambient noise of Lennon's New York apartment, and offer up the result to the world as "The Last Beatles Song."
The Phantom Digital Menace
Hollywood had been doing something similar for deceased or aging actors via digital technology for almost a decade before that. 2016's "Rogue One: A Star Wars Story" seemingly offered up the late Peter Cushing's image as the evil Grand Moff Tarkin, and Carrie Fisher's image as the radiant young Princess Leia of the first "Star Wars" movie.
As for Darth Vader himself, in 2022, with James Earl Jones then 91 years old, it was announced that Disney would be replacing his voice with an AI-generated replica going forward:
Actor James Earl Jones will no longer be tapped to voice his iconic Star Wars character Darth Vader thanks to artificial intelligence.
The 91-year-old actor has already provided plenty of archival recordings via the films that began in 1977, television series, animated programs, video games, and Disney’s various theme park rides. Now, Ukrainian start-up Respeecher will combine those recordings with artificial technology to provide a voice for Darth Vader that sounds similar to Jones’s voice.
Respeecher has already begun using the synthesized voice for the Disney+ series Obi-Wan Kenobi while keeping the actor in the credits of each episode the voice is featured in. Jones last voiced the character during the 2019 film Star Wars: The Rise of Skywalker.
I assume it’s only a matter of time before ESPN and NFL Films create AI versions of Howard Cosell and John Facenda (perhaps just their voices; perhaps their voices and images) to narrate pro football highlight programs.
Perhaps the most significant change to footage that millions had previously viewed came in 1997, when George Lucas used the re-releases of his first three "Star Wars" films as a sort of demonstration reel for the digital technology he was planning for the prequel trilogy that would begin two years later with "The Phantom Menace." The most extreme of these changes appear in the scenes that introduce the iconic Mos Eisley cantina in "Star Wars," where Lucas replaced the gritty exterior footage he had shot in Tunisia in 1976 with early computer-generated digital effects that resembled a cross between a cartoon and a video game:
This new footage angered many of the die-hard fans who made up the original "Star Wars" film's core audience, some of whom went on to use VHS tapes and laserdiscs of the original three movies to create what became known on the Internet as the "despecialized" editions. In effect, by replacing so much iconic footage, Lucas was asking audiences the question famously posed by Chico Marx in 1933's "Duck Soup": "Well, who ya gonna believe, me, or your own eyes?"
Today, that line is usually misspoken as, “Who are you gonna believe, me, or your own lying eyes?”
While Hollywood action movies still rely on massive amounts of CGI effects, because audiences have cottoned on to these techniques, it's become a cliché among directors to assure viewers that little or no CGI was used in creating their favorite action scenes. As the New York Times once said about Dan Rather's most infamous moment in 2004, it's a "fake but accurate" cliché. Hollywood knows that audiences can spot phony CGI a mile away, but the industry still needs the technology to smooth out extremely complex shots and hide the seams:
Medium Overlord
In 1969, veteran Hollywood director/cinematographer Haskell Wexler released "Medium Cool," in which he attempted to intersperse documentary footage of the 1968 Democratic Convention in Chicago with dramatic footage he shot of his actors around the Convention. In 1975, actor/director Stuart Cooper was given access to the Imperial War Museum's footage of WWII and shot new scenes with actors, blurring real and fictitious footage to create "Overlord," an elegiac film about a young Brit being drafted in the run-up to D-Day. Given the technology now rapidly being created to serve the movie-streaming world on the Internet, neither man could have predicted how much further fact and fiction would be blurred in the following century.
In 2024, the digital processing of film and video images and the digital demixing and remixing of audio are both technologies still in their infancy, and both will only become exponentially more powerful. Going forward, each has the potential to make 20th-century film and audio far more accessible to new generations of audiences. But as we move further and further away from the underlying source material, how much of its original authenticity will we be giving up, often unknowingly? Will future audiences simply take the postmodern trickery for granted? Or will they eventually rebel against what they're being offered and demand something more authentic and honest? (I'd like to be wrong, but who am I kidding with that last question?)