Moonlight in the Lion’s Den
Why Barry Jenkins gave up his improvisational shooting style to spend four years making Mufasa on a soundstage.
Barry Jenkins knows what you’re thinking: “On what planet do I, Mr. Moonlight, make a prequel to The Lion King?” The 45-year-old director of an Oscar-winning film is having lunch at Manuela, a restaurant near a production facility on Alameda Boulevard where the Disney movie in question, Mufasa, has been in the works since 2020. By October 2024, the photorealistic animated project is nearing its pencils-down phase. Jenkins has spent much of the past week approving final renderings from colleagues and asking for tweaks — small ones because the whole thing has to be locked by November. He’s feeling good about the progress, but self-assurance won’t stop the reply guys.
“I can’t tweet about the Super Bowl without somebody reminding me that I’m making this fucking film,” Jenkins says. “I can’t!”
The barrage of criticism began in September 2020 when Disney confirmed that it planned to make Mufasa with Jenkins attached to direct, and it increased when Jenkins previewed footage at Disney’s fan convention in August 2024. Sentiments included “Just make real movies again instead of these stupid money grabs holy fuck” and “Let’s not pretend Disney being able to write a check that sounds like an anvil when it hits the table isn’t the lion’s share (pun intended) of the reason he’s doing it.”
Jenkins defended himself. But did he need to? There’s no shortage of respected filmmakers who have helmed nine-figure-budgeted, IP-driven franchise movies for big studios, though some have bailed, perhaps after learning they would be the Hollywood-filmmaker equivalent of a kid in the back seat of a car holding a toy steering wheel and making vroom-vroom noises. Jenkins is frank in conceding that Mufasa was work for hire. The studio owned every aspect, from the screenplay and character designs to the songs by Lin-Manuel Miranda. Disney had final cut and expected a crowd-pleaser that would justify corporate investment in new theme-park rides, put stuffed animals in children’s hands, and sell more tickets in its first weekend than Moonlight did in a year. Add to that list of challenges the fact that Jenkins had never directed a CGI-heavy film.
“When I took this job, the idea was ‘What does Barry Jenkins know about visual effects? Why the hell would he do this movie?’ In addition to ‘Why would he be making The Lion King?’” he recalls. “I think part of that I found very invigorating. People make these things, you know, with computers. So anybody should be able to do this. Anybody, right? There’s nothing physically that says I am incapable of doing this.”
But in many ways, Jenkins’s aesthetic is antithetical to Disney’s animated films — and not just because every other movie he made before Mufasa has been rated R. His style is shaped by what’s physically in front of him; he goes into a shooting environment with a camera and a plan, then remakes everything on the fly, working in the tradition of John Cassavetes, Spike Lee, Martin Scorsese, Wong Kar-wai, and other directors whose films he treats as sacred texts. His choices are informed not just by the performances he gets from his actors but by the streets, buildings, oceans, mountains, sunlight, and moonlight surrounding them. He gave up this kind of improvisational shooting method when he decided to direct Mufasa, which was shot on a barren motion-capture stage. Then he spent the next few years figuring out ways to bring it back.
Jenkins’s motion-capture stage is not elevated like a platform in a theater; it’s a flat expanse in a high-ceilinged room the size of a high-school gymnasium, with lighting crossbars overhead and a raggedy, paper-taped grid on the floor. I wasn’t allowed to read a script prior to being invited here. Even now, having seen only 30 out-of-sequence minutes of Mufasa, I can’t speculate on what the movie’s story might be because I had to sign a nondisclosure agreement. But a Disney-approved summary describes it as an epic journey built around the burgeoning friendship between the young, orphaned Mufasa and a lion named Taka, “heir to a royal bloodline.” It’s told in flashback to Simba and Nala’s daughter, Kiara, by the soothsaying mandrill Rafiki, with the meerkat Timon and the warthog Pumbaa goofing on the sidelines.
I point to the ceiling overhead and ask when the movie lights were taken down. Jenkins says there were never any movie lights, just the regular bulbs that were in the fixtures when they moved in November 2021. I ask when the sets were torn down. Jenkins says there were no sets over the course of the 147 scattered days he, a team of animators, and his regular cinematographer, James Laxton — who, like most of Jenkins’s core group, met at Florida State University’s film school in the 1990s — shot the film. (Production ended in August 2023.) There weren’t even fragments of sets like you’d see in behind-the-scenes clips of effects-driven franchises like Marvel or DC, or for that matter, previous “live-action animation” Disney movies like Jon Favreau’s The Jungle Book, which kicked off Disney’s craze for remaking its 2-D library in 2016. No pieces of Pride Rock, no fiberglass miniature of a canyon. Nothing.
But both Jenkins’s office and the workspace of his editor and close friend Joi McMillon were stocked with visual reference materials, including color reproductions of paintings by Sungi Mlengeya and Calida Rawles, photos by Jeremy Snell and Jalan and Jibril Durimel, and critic John Powers’s book about Wong Kar-wai, WKW. A wall of Mufasa concept art includes a first-person close-up of Rafiki seeming to make eye contact with the spectator. This kind of shot appears in all of Jenkins’s work: Think of the alternating close-ups of the young Chiron and his mother in Moonlight when she tells him she’s worried about him, or the moment in If Beale Street Could Talk when Tish stares beseechingly at the back of her mother Sharon’s head in their small kitchen and her mother seems to sense the power of the gaze and turns around to return the look.
I’ll spend the next two days in an editing suite and a medium-size screening room with Jenkins, McMillon, and their collaborators. The sequences they show me include several large-scale action scenes, a dream that involves an ice cave (which I’m told was later cut from the movie), a “walk and talk” with young Mufasa and Rafiki, and a scene with the same characters set at a watering hole. At first glance, they seem like variations on Favreau’s 2019 remake of the original Lion King, which had a more observational tone reminiscent of a wildlife documentary rather than Disney’s mostly hand-drawn ’90s hits. The animals and their environments in Mufasa are similarly photorealistic, save for the slight anthropomorphizing body language and, of course, the human voices and “lip movements.”
But there are subtle surrealistic touches here that I don’t recall seeing in Mufasa’s predecessor, like a cloud formation that takes the shape of a lion’s head and a water reflection that briefly pictures Mufasa’s family and himself as a cub. There are also many long, gliding, unbroken takes that float gracefully toward and around the characters, evoking not the typical Disney-cartoon visual grammar but techniques perfected by so-called slow-cinema masters like Béla Tarr, Jia Zhangke, and Gus Van Sant. Jenkins got a note from the parent company worrying that one of the long takes played a little “slow,” but there was no indication that he was required to implement it.
“We were trying to do these scenes in as few shots as possible,” Jenkins explains, “even though we didn’t have to think that way.”
Will future generations of cinephiles watch Sátántangó and wonder why it reminds them of their childhood? Probably not, but this is the first Disney animal picture to spark such a thought.
Long before Jenkins set foot in his Alameda Boulevard facility, he received Jeff Nathanson’s script from Disney. It was July 2020, shortly before he was scheduled to go on vacation with his partner, Lulu Wang, director of The Farewell. “My thought was, Oh, I’ll just give this a few days and I’ll call my agent and tell them I’ve read it and I’m not going to do this project,” Jenkins says. He and Wang drove up the coast to wine country. All the while, the script was sitting in the back of his head. When he got home, he says, he remembered, “Okay, shit, that’s right! I have to call my agents tomorrow and remind them that I’m not going to do this project.” That’s when Wang asked, “Are you afraid to read it?” He told himself he would read five pages but ended up reading 50: “I turn to Lulu and go, ‘Holy shit, this is good.’”
After expressing his tentative interest to Disney, Jenkins learned Mufasa would require at least three exclusively committed years of his life. That was a daunting temporal price tag, especially for Jenkins’s team — including Laxton, McMillon, producers Mark Ceryak and Adele Romanski, and production designer Mark Friedberg — all of whom had predictable concerns about photorealistically rendered Disney animal movies. In the abstract, Romanski says, Mufasa “just didn’t feel like our kind of filmmaking, so there was a large forum for talking about the ways we were gonna push it forward if we were going to do it.”
It wasn’t necessarily the chance to affect a new medium that tipped the crew in favor of Mufasa, though that increasingly became a draw: Jenkins says they were all at a point in their lives where they were willing to trade chaos for control. They had just ground their way through postproduction on The Underground Railroad, a project so complex, expensive, and freighted with behind-the-scenes drama that, had a few more things gone wrong, it could’ve ended up being Jenkins’s own Heaven’s Gate. Jenkins tells me Amazon declared the series tens of millions of dollars over budget eight weeks before he and his team had shot a frame — after having built all the titular railroad sets first — and threatened to pull the plug if the number didn’t come down. The Underground Railroad was ultimately completed in a manner befitting Jenkins’s original vision but not without ruthless triage (including filming the entire series in Georgia rather than across multiple states), plus oversight from a new line producer hired by Amazon who was tasked with helping them make the show with the money they had left. (Amazon MGM Studios did not respond to a request for comment.)
After all that anxiety, the idea of doing an entirely digital movie didn’t sound so bad. Jenkins also thought it would be good for his personal life to work in Los Angeles, where he shares a house with Wang, because even though he’d been in the relationship for two years at that point, they were often geographically separated, “with me in Georgia and Lulu in Hong Kong or wherever.” In short: “I needed to slow … the fuck … down.”
So in late 2020 and early 2021, Jenkins and his team joined forces with Disney artists, including visual effects supervisor Adam Valdez and animation director Daniel Fotheringham (both of whom had worked on Favreau’s The Lion King), to make a test sequence. “The first test piece that we did for our own education was a scene under a shade tree,” producer Romanski says. “That was for us to understand how to build, from start to finish, one scene from the movie.” They used a “garage band” version of the technology they would end up using on the movie, says Valdez. “We kind of kit-bashed it together.”
The scene was set in a virtual space created by Friedberg and his art department using Unreal Engine, a 3-D image-creation system used in multiplayer video games like Fortnite and adapted by the special-effects company Industrial Light & Magic for use on effects-heavy films and TV series, including the 2019 Lion King, The Mandalorian, and Westworld. Jenkins & Co. explored Friedberg’s sets while wearing VR goggles. Meanwhile, Disney gave Laxton an Unreal VCam, a virtual camera that translates traditional cinematography into a virtual setting. Imagine a first-person video game except that you’re recording the environment rather than playing a game and that the correspondence between your movements in the physical world and your camera’s movements in the virtual grasslands is more responsive and precise.
Laxton learned to use the camera in his house. He could stroll through the virtual environment a step at a time or, with the push of a button, cover 100 meters in one step or even fly. “If you want to be a helicopter that’s circling something and you’re walking in a ten-foot circle,” says Valdez, “you can decide that’s a 100-foot circle and stabilize the movements so you aren’t bouncing up and down as you walk.” A small team assembled for the actual shoot in the same Alameda space, resulting in what Jenkins calls “a rough-draft version” timed to voice recordings from “a placeholder cast” of nonprofessionals. Then, over a span of months, a team of animators and special-effects artists in London completed the sequence, fragments of which were actually used in the finished film.
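The motion scaling Valdez describes can be pictured in a few lines of code. This is an illustrative sketch only, not Unreal’s actual VCam system; the scale factor and smoothing weight are invented for the example.

```python
# A toy model of how an operator's tracked physical motion might be
# scaled into virtual camera motion: a ten-foot circle walked on the
# stage becomes a hundred-foot helicopter orbit, and an exponential
# moving average damps the up-and-down bounce of walking.

def scale_and_smooth(positions, scale=10.0, alpha=0.3):
    """Scale tracked operator positions into virtual-world positions
    and stabilize them with an exponential moving average."""
    smoothed = []
    prev = None
    for x, y, z in positions:
        vx, vy, vz = x * scale, y * scale, z * scale
        if prev is None:
            cur = (vx, vy, vz)
        else:
            # Blend toward the new position so the camera glides
            # instead of bobbing with each step.
            cur = tuple(alpha * n + (1 - alpha) * p
                        for n, p in zip((vx, vy, vz), prev))
        smoothed.append(cur)
        prev = cur
    return smoothed
```

In the real system the mapping is far richer (rotation, lens data, latency compensation), but the principle is the same: decouple the operator’s physical scale from the camera’s virtual scale.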
The test sequence, Romanski says, “was so hard to wrap your head around, until you went through it and thought, Oh, it’s just moviemaking, but it’s virtual.” Jenkins says he acclimated relatively quickly to the demands of the process, perhaps because he’d been playing video games all his life. It helped that he consulted other filmmakers who were old hands at this type of production, including Favreau, Matt Reeves (The Batman, War for the Planet of the Apes), and his old Florida State film-school classmate Wes Ball (the Maze Runner films, Kingdom of the Planet of the Apes), and visited the set of James Cameron’s Avatar: The Way of Water.
“Had any of us done a virtual film, a franchise film, a Disney film, a Disney legacy film?” Romanski asks. “No! But isn’t it kind of boring to do the same thing over and over again?”
From there, Jenkins’s team was off to the races, at first following in the footsteps of forerunner Favreau, whose 2016 Jungle Book featured a live-action star, Neel Sethi, and utilized fragments of sets to orient the young Mowgli — a bit of “river-carved rock” here, a patch of artificial “jungle” there. But the director’s next foray into the process, the 2019 remake of The Lion King, had no humans in it at all and, therefore, no physical sets. It was shot like the Mufasa test sequence: on virtual terrain that had been created with 3-D imaging software, which crew members (including cinematographer Caleb Deschanel) traversed while wearing VR goggles or looking at the monitor of a handheld VCam. Filmmakers would “scout” locations this way and plan blocking as if they were moving through a real place.
This aspect of the process was intriguing to Jenkins, who “has an interest and inclination to discover things, often on set,” says producer Ceryak, and “lets the space speak to him.” It was common for Jenkins to be exploring Friedberg’s virtual grasslands and excitedly shout for his team to come over and see the incredible thing he had just stumbled upon. “We would be focusing on an area for a scene, a specific passageway or valley or whatever, where these characters are walking and talking,” says Ceryak, “and then suddenly he’s off in the corner, underwater, behind a rock — like, calling out to James to fly over and see this thing he’s found that Friedberg had built, but not with any intention of it ever being onscreen or in the movie.”
In early 2021, the official cast — including Beyoncé, Aaron Pierre, Mads Mikkelsen, Keith David, and Thandiwe Newton — delivered their lines into microphones under Jenkins’s direction in studios across the United States and internationally. Video cameras recorded the actors’ facial expressions and body language to create visual references for the animators down the line. Editor McMillon then cut the best audio takes together to create what Jenkins calls the “radio play” version of Mufasa, which was fused with roughly animated storyboards (12 frames per second or fewer; Jenkins calls it “the flip-book version”) to produce an animatic. The animatic gives the filmmakers an idea of how the different scenes and sequences fit together and a sense of whether there are any pacing issues.
The part of Mufasa’s production that veered from the path — and made it possible for Jenkins and his team to make a Barry Jenkins film with talking animals — was the inclusion of “quadcapping,” short for “quadruped capture.” This process records the movements of bipedal animators in black bodysuits marked with sensor dots as they move around a motion-capture stage and instantly translates them into four-legged creatures on the movie’s virtual set. The Mufasa crew calls this part of the process “staging,” like the part of a theatrical production when a director decides how onstage action will be choreographed in relation to the proscenium arch. It was tough to stage Favreau’s The Lion King, because the four-legged characters were created in a multistep process that has been the standard on digital movies dating back to the Star Wars prequels. From animatics, these productions progressed to 3-D wire frames or geometric, blocky figures, then to densely rendered creatures with fur, feathers, or scales whose movements and textures drew on data from nature. There were never any stand-ins for animals during the shoot for Favreau’s Lion King, just digital placeholder versions — so crude that, as Fotheringham puts it, they “looked more like sliding chess pieces.”
Early on in his shoots, Jenkins found himself hampered by having no way to work spontaneously on set. “James was pointing the camera at literally nothing,” says Romanski. “No people, nothing.” Luckily, they worked on the movie for so long that, eventually, “the tech caught up with our process.” About halfway through production, quadcapping entered the picture with its ability to translate the movements of animators in bodysuits (“a rotating group,” Jenkins says, though Fotheringham “was in suits the most”) into virtual animals moving through Friedberg’s virtual Africa. “You could tell when a lion was nervous, embarrassed, excited, or upset simply by the way it moved, and the camera in turn could respond to movement, anticipate movement, and integrate into the scene,” Fotheringham explains. “If a character in anger storms toward the camera, the camera operator reacts to this movement and you can feel the tension immediately through the lens.”
During quadcap shoots, a PA system would blast the “radio play” version of the scenes to inspire the animators in bodysuits and give them a sense of how their actions should be timed as they moved across the floor. Jenkins would give instructions and feedback on what he saw in the real and virtual spaces. “The motion-capture stage was laid out with large grid lines with coordinates like A7 and G5,” says Fotheringham. “This grid was then projected in the virtual world where everyone could see the performers as lions.” For scenes involving bigger obstacles, like Pride Rock or a tree, their system would adaptively place the lions at different heights or translate animators walking in a circle around the stage to make them walk in a straight line over a series of hills. “Think of a tartan blanket that is laid over pillows,” Fotheringham says. “It will distort over the pillows, but the tartan grid remains. Our virtual lions would follow this distorted grid similarly.” Animators at workstations surrounding the stage could see the results in their monitors, finesse movements, make real-time corrections, and add missing information. Valdez calls them “the rapid-response team.”
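Fotheringham’s “tartan blanket over pillows” image can be sketched as a simple coordinate lift: the performer’s flat stage position is re-projected onto virtual terrain by sampling a height field. The terrain function below is invented purely for illustration; the production’s actual system is far more sophisticated.

```python
import math

def terrain_height(x, y):
    # Stand-in "pillows": gentle sinusoidal hills in place of
    # whatever virtual landscape Friedberg's team actually built.
    return 2.0 * math.sin(x * 0.5) * math.cos(y * 0.5)

def stage_to_virtual(stage_path):
    """Lift flat stage-grid coordinates (x, y) onto the distorted
    virtual grid, so an animator walking a straight line on the
    flat floor appears to walk over hills."""
    return [(x, y, terrain_height(x, y)) for x, y in stage_path]
```

The grid itself stays intact, as in the blanket analogy; only the surface it drapes over changes, which is why coordinates like A7 and G5 remained meaningful in both worlds.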
As he directed it all, Jenkins had a realization: “With computer animation, we assume the animators are writing with code. But with quadcapping, the animators are getting the first draft of the animation when they’re wearing the suits, so it’s like they’re writing with their bodies.” (Alas, for large-scale action sequences involving lots of animals, such as a flood that occurs at a key moment in Mufasa, the team had to embrace the 2019 methods of animation because it was impossible to shoot more than four animators in bodysuits on the stage simultaneously: Neither the space nor the technology had expanded enough for that.)
The resultant quadcapped imagery is nowhere near good enough to show in a theater to paying customers. Jenkins calls it “the PS3 footage,” and he was irritated when some of it leaked earlier this year. It hadn’t yet been handed off to McMillon, who would edit the PS3 footage and pass it to the animators and visual-effects artists supervised by Fotheringham and Valdez, then to an army of colorists, texture painters, virtual-lighting designers, and other specialists tasked with whipping every shot into exhibitable shape. Valdez says this part of the process takes about six weeks per scene, not including revisions and tweaks. After that, there might still be more notes from Jenkins asking for a small change or adjustment. A shot might go back to McMillon to be recut and then to the army to be further altered. In some cases, Jenkins and Laxton might go back onto the stage to reshoot all or part of a scene if they felt the previous version wasn’t working and no amount of postproduction buffing would fix it.
“I think ‘stacked’ is a really interesting way to describe it,” Jenkins says of the production in its entirety, “because any part of the film could be either in preproduction, production, postproduction, or finishing at any moment. Any scene can be in any one of those different phases.”
The magic trick of Mufasa, as Fotheringham puts it, was its ability to create a stacked animation world that “a filmmaker from any walk of life could just come into — like, someone who hadn’t done animation before — and just basically get in there and make a film.”
My last day at Mufasa headquarters is spent sitting between Jenkins and McMillon in the screening room where final-stage shots are being presented for the director’s approval. Jenkins is talking simultaneously with the projectionist, two department heads on Zoom, and a half-dozen people in the room. He is generous with praise, though he likes to bust chops. (When he introduces me to Fotheringham, he warns him, “This man has seen you in a bodysuit.” And his team responds in kind: When Jenkins asks visual-effects producer Mark St. John to tell me his job title, St. John replies “Doormat” and the room erupts in laughter.) Over the next three hours, Jenkins moves swiftly through all the footage presented to him. The one-word response “Approved” means a sequence is ready, and this is the word he uses more than any other. But for one scene, he asks that a slightly shorter lens be applied to a shot of a lion to exaggerate the perspective. For another, he requests a surrealistic sound touch be toned down because he wants to make the audience work for meaning rather than hand it to them.
Jenkins’s mantra throughout the making of Mufasa was “Imperfection is your friend.” He and Laxton relished unforced touches that made the work feel more physical, such as a violent camera shake that happened when Laxton jumped to follow a lion leaping down a series of tree branches. They had to push back against the Disney reflex to make things immaculate. Early on, Laxton stumbled while shooting another action scene: “There was this great moment where the lion kids are racing and the one kid catches up, and just because of where the camera is and where the kid is, he runs this puddle and the sun just catches it and everything goes bright white, like he runs through a flare,” Jenkins recalls. “It looks like the camera operator is on the back of a pickup truck that’s going 70 miles an hour through a muddy field and is really struggling to hold the focus.” Laxton and Jenkins loved the mistake because it sold the illusion that the audience was watching an actual event being recorded under pressure. When the animators removed the momentary loss of control from the footage, Jenkins says he asked that the mistake be put back in: “Don’t smooth the shit out. Right?”
“We want just something that has texture, something that feels organic,” he says. “And sometimes that can be the hardest thing to dial in because every single blade of grass has to be created by someone. But you ultimately don’t want everything to feel like it’s been created by anyone. You want it to feel like it naturally arose.”
When I ask Jenkins to name the biggest skill set he picked up on Mufasa, he deadpans, “Math.” I’d seen evidence of this when Jenkins explained that the budget of Mufasa had been calculated based on how much labor was required to make a shot good enough to show on a large theater screen. To demonstrate the calculus, he takes out an old-school gray plastic calculator and breaks down the budget by cost per shot, cost per minute of screen time, and cost per frame, fingers clickety-clacking on the keys. He admitted to craving control post–The Underground Railroad, and he’d seemingly found it.
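The back-of-the-envelope arithmetic Jenkins performs on his calculator is easy to reproduce. The budget, runtime, and shot-count figures below are hypothetical placeholders, not Mufasa’s actual numbers, which the article does not disclose.

```python
# Hypothetical cost-breakdown arithmetic: divide a total budget by
# shot count, minutes of screen time, and total frames (at a
# standard 24 frames per second).

def per_frame_costs(total_budget, runtime_minutes, fps=24, shot_count=1500):
    """Return the budget expressed per shot, per minute, and per frame."""
    frames = runtime_minutes * 60 * fps
    return {
        "cost_per_shot": total_budget / shot_count,
        "cost_per_minute": total_budget / runtime_minutes,
        "cost_per_frame": total_budget / frames,
    }
```

At a (purely illustrative) $200 million budget and two-hour runtime, every individual frame carries a cost of over a thousand dollars, which makes the pressure behind “Approved” easier to appreciate.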
When I ask members of Jenkins’s inner circle whether they want to make a movie like this again, the response is a measured expression of reluctance. It seems clear that, as much as they appreciate having conquered a novel filmmaking experience and acquired new abilities, they’re likely not doing another all-digital movie next, nor another with a nine-figure budget. Romanski says there’s a possibility that Jenkins will direct a biopic about choreographer Alvin Ailey for Fox Searchlight and that it’s “not going to be a $250 million movie, right? So we’re going to have to go back to embracing a much more limited tool set on that film.”
“It is not my thing,” Jenkins says of all-digital filmmaking, then repeats more emphatically, “It is not my thing. I want to work the other way again, where I want to physically get everything there. I always believe that what is here is enough, and let me just figure out what is the chemistry to make alchemy? How can these people, this light, this environment, come together to create an image that is moving, that is beautiful, that creates a text that is deep enough, dense enough, rich enough to speak to someone?” He can’t do that in a studio. Even now.
But at the same time, Jenkins doesn’t rule out the possibility of using Mufasa methods to figure out new ways to make Barry Jenkins movies. Somehow, we land on the subject of the Muppets and suddenly he’s imagining how he could direct puppet performers and transpose them onto virtual sets. “You know, a Muppet movie done in this style would be awesome. Awesome. In the same way we generate our PlayStation version of a scene, you could have a set that’s just the actual physical puppeteers, and Muppets are blocking the scene but just in a black box, you know?” he says. “Or, let’s say, a green box. You’re capturing their performances and then you’re putting them all into virtual sets. I can see how that could work.”