Anime and A.I.
June 10, 2023
By Andrew Osmond.
John Lasseter, director of Toy Story, told a story about his first brush with computer animation. He was a young animator at Disney, fired up by the new technology: he wanted to make a hybrid movie with hand-animated characters and CG backgrounds. He pitched his ideas to the studio execs… and was met with blank hostility. Lasseter claimed one exec told him, “The only reason to do computer animation is if we can do it faster or cheaper.” Lasseter was fired that day.
Decades later, Lasseter’s story resonates when people talk of computers’ potential to transform animation all over again – this time with Artificial Intelligence. But there are two opposite lessons to draw. One is to see A.I. from Lasseter’s perspective, as a fantastic toolkit to make animation in new ways. The other, far more cynical, viewpoint is that of the mean executive. A.I. animation’s value lies in its being fast and cheap. And if it’s fast and cheap, then it’ll be automated to make human artists irrelevant, and execs will chuck them through the door after Lasseter.
In Japan, anime is notorious for exploiting artists and paying insulting pittances for their labour. In that context, A.I. animation can look like a terrible coup de grace. It’s small wonder that some recent computer-aided projects have been met with fury and vitriol from fans and industry pros.
Some context is essential. Animation is no different from any other capitalist industry; there’s always been a push to make it fast and cheap. Even the cel animation beloved by traditionalists was developed a century ago for pragmatic reasons. It was quicker than tracing whole drawings again and again, as in the pioneering films by Winsor McCay (Gertie the Dinosaur).
Other techniques developed for the same reason. A crucial one was rotoscoping, in which animators traced live-action frames of people to get round the difficulties of animating human figures. In 1960s Japan, Osamu Tezuka established the “image bank” system of recyclable character cels, stock poses and movements that could be used again and again, rather than drawing them anew. Both rotoscoping and the “image bank” anticipate today’s controversies around A.I.
In the 2000s, I saw a demo of animation software for home users. It let you create a few images of, say, a bouncing ball, then had the software fill in the in-betweens. This was “A.I. animation” for a mass audience. It was little different, in principle, from what CG animation was doing by the early 1990s. A 3D ballroom could be viewed from any angle, as in Disney’s Beauty and the Beast. The stampeding wildebeest in The Lion King were steered by animators using what the film’s press notes called a “library of behaviour.”
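The principle behind that home-user demo – the artist supplies keyframes, the software generates the frames between them – can be sketched in a few lines. This is an illustrative assumption, not the demo’s actual code: real in-betweening tools interpolate whole drawings and apply easing curves, but simple linear interpolation of a ball’s position shows the idea.

```python
# Sketch of keyframe in-betweening: the animator draws keyframes,
# the software fills in the frames between them. Here a "frame" is
# just the ball's (x, y) position, and we interpolate linearly.
# (Hypothetical illustration; real tools ease in and out.)

def inbetween(key_a, key_b, steps):
    """Generate `steps` frames strictly between two (x, y) keyframes."""
    (x0, y0), (x1, y1) = key_a, key_b
    frames = []
    for i in range(1, steps + 1):
        t = i / (steps + 1)  # fraction of the way from key_a to key_b
        frames.append((x0 + (x1 - x0) * t, y0 + (y1 - y0) * t))
    return frames

# A bouncing ball dropping from (0, 10) to (4, 0): three in-betweens.
print(inbetween((0.0, 10.0), (4.0, 0.0), 3))
```

The animator’s labour shifts from drawing every frame to choosing the keys – which is exactly the trade-off, scaled up, that later crowd and “behaviour library” systems offered.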
A.I. animation was no overnight revolution, but something audiences saw evolving in real time. By 2001, Disney’s wildebeest had evolved into the giant armies in Peter Jackson’s Lord of the Rings films. Seventeen years later, Mamoru Hosoda used the same principles to generate shoals of fish for a brief sequence in Mirai. Around the same time, the CG animator Yuhei Sakuragi used algorithms to generate crowds in his science-fiction film The Relative Worlds.
Sakuragi also tried using the software (described as “automated deep learning technology”) to design the robot creatures in the film. He told me he wanted “to see what the machine learning could do, in terms of design, that humans couldn’t do.” The results were ambivalent. “If you’re aiming to have a hundred per cent finished product just with deep learning, then you would struggle… You should probably aim for 50 or 60 per cent of completion, then shape it with human hands afterwards.”
Of course a learning technology needed teaching materials, but gathering them, in Sakuragi’s words, was “very time-consuming and costly, so at this stage, it’s not very viable.” For example, Sakuragi considered using deep learning to generate backgrounds from rough drafts. But he was told the system would need at least a hundred thousand sets of roughs taken to completion – which would be enough to finish the film anyway! Sakuragi acknowledged, though, that if the technology did have a long period of study, then it could be very useful…
That brings us to the present, and the arguments raging round a couple of productions in early 2023. The first was a short cartoon produced by Netflix Japan and put out at the end of January. It was part of a programme (Anime Creators’ Base) whose stated purpose was to develop new talent, as well as training young animators on new tools. However, the film, called Dog and Boy, attracted attention when Netflix Japan Tweeted proudly that its backgrounds were created by “Image Generation Technology,” because of what it referred to as a “labour shortage.”
The online reaction was not good, although it was likely made much worse by that last phrase. There’s a high awareness today of how tough life is for animators, even parodied in the mainstream (as in Banksy’s guest intro to The Simpsons). Netflix’s Tweet was presumably meant in the spirit of Lasseter’s enthused pitch to Disney – look what this technology can do! It was received, though, in the spirit of the Scrooge-ish exec. Look how we make animation fast and cheap, and cut even more labour from the workforce.
It’s arguable that the backlash against Dog and Boy wasn’t about the principle of using A.I. to make backgrounds. It’s doubtful that many of the backlashers would have ranted against established dodges in animation, like rotoscoping or Tezuka’s image bank. You could say the backlash was provoked by Netflix’s phrasing, by bad PR. It would be easy to mount a defence, starting with the brutal levels of content that commercial animators are asked to produce. Rather than worsening their animators’ situations, A.I. might improve them, by making those workloads manageable.
However, there was another charge laid against Dog and Boy. Before I get to it, I should bring in the other recent cartoon that caused a Twitterstorm. In February 2023, a few weeks after Dog and Boy, a small American VFX studio called Corridor Digital released its own animation to YouTube. It was called Anime Rock, Paper and Scissors. Amid the violent backlash that followed, few pundits were willing to concede the film had a really funny premise – turning rock-paper-scissors into an apocalyptic duel.
Fundamentally, this was an old-style rotoscoped film. Two live (over)actors performed in front of a greenscreen, before they were “turned into” animation. It was how this transformation was done that caused a ruckus.
The tiny team used an open software “diffusion” process that reinterpreted live-action images in a desired style. As the Corridor team made clear in their making-of, the style they wanted was “anime” and so they “educated” their software by feeding it art from one specific anime film, Vampire Hunter D: Bloodlust, directed by Yoshiaki Kawajiri. (Another point that few of the backlashers acknowledged is that the team was open about this, rather than suggesting they created the style themselves.)
When the film was released, the pundits strove to outdo each other in vitriol. Jade King of TheGamer called it “A Moral Betrayal of Everything Animation Stands For”, while Kotaku’s Isaiah Colbert blasted the film as “a soulless recreation of animation techniques haphazardly strewn together without any technical skill or artistic merit.” The anime YouTuber Mother’s Basement (aka Geoff Thew) commented that “Corridor Digital’s trained model shat out a TikTok filter-looking mess,” dismissing all the viewers who liked the result as deplorables with “little to no taste.”
That presumably included Aaron Blaise, an animator of thirty years’ standing with credits stretching from Beauty and the Beast to Wolfwalkers. In his own video response, Blaise found Anime Rock, Paper and Scissors a blast. “These guys are artists, these guys created something that was really cool.” But after praising them, Blaise moved to the genuinely worrying aspect of the film: the fact that the A.I. had been trained on the work of other artists, without permission.
This is a serious point, and yet it’s hard to forget that sampling other artists’ work without their permission has been standard practice for netizens throughout this century. You might as well say the rot started with mashups like Apocalypse Pooh; both it and Rock, Paper and Scissors “borrow” copyrighted material to make good and original jokes. Any suggestion that Rock, Paper and Scissors and Dog and Boy are just cartoon analogues of typing a question into ChatGPT… Well, that’s ridiculous.
Rather, you could see them as extensions of Tezuka’s “image bank” or Disney’s old practice of “recycling” moments of animation, minimally adapted, from one film to another. Both Tezuka and Disney built up their own archives for such short-cuts more than fifty years ago. Meanwhile some of the greatest human animators, past and present, were celebrated for disguising material – namely, live-action reference material – and transmuting it into something new, through rotoscoping. They range from Bill Tytla animating Grumpy in Disney’s Snow White, to Shinya Ohira interpreting what was probably a live-action fight in Tarantino’s Kill Bill.
When it comes to A.I., concealment is the key. The fear is that cyber-sampling could lead to ostensibly original animations that are cyber-plagiarised from the efforts of – literally – countless people. On the one hand, this conjures up the possibility of a new technological battle. Presumably, software will also be able to detect when an animation has “learned” from another animation, as with Anime Rock, Paper and Scissors relying on Vampire Hunter D: Bloodlust.
Presumably companies will get better at hiding such learning, which means detection software will need to improve, and so on. One wonders, though, if it will ever be good enough to tell an A.I.-trained animation from a deliberately close pastiche in animation – for instance, the Disney-like Prince of Egypt, by DreamWorks, or the Ghibli-like Mary and the Witch’s Flower, by Studio Ponoc.
Now that the A.I. technology is public, pretty much any future animation could fall under suspicion, except those that have completely original styles. To go back to Dog and Boy, that film was accused of plagiarism too, though its credits specify that the backgrounds originated as hand-drawn layouts. The Background Designer was cutely credited as “AI (+Human),” which drew more online flak, though pseudonymous credits are hardly unknown in animation. Moreover, the “Human” is quite possibly also the film’s Director or the Character Designer, who are both credited by name.
But as Geoff Thew points out in his reaction video to Anime Rock, Paper and Scissors, future cyber-plagiarisms could be legal. Just imagine a huge conglomerate with libraries of content that dwarf Disney’s pre-Internet archives, training its software on hundreds of thousands of hours. Again, it’s easy to be apocalyptic. One part-animated film (not an anime), 2013’s The Congress by Ari Folman, has already taken this scenario a step further. It imagines human movie stars being turned into digital model sheets, so that the originals are no longer needed and can be compulsorily retired.
Well, perhaps. Like A.I. in general, A.I. animation opens up unprecedented territory, where confident predictions look laughable inside a year. It still seems possible that A.I. may be essentially a tool for new artists to work with, no more destructive than the rotoscope, which was also denounced by aesthetes of its time. Or perhaps A.I. truly foreshadows the end of animation as a human art, as the pessimists predict.
It’s the same question that was anticipated at Disney long ago. Did Lasseter have the right handle on CG animation? Or was that money-pinching executive right in the end: that it’s all about making animation quick and cheap?
Andrew Osmond is the author of 100 Animated Feature Films.