Synchronized Generation of Appearance and Behavior of Cinematic Animated Characters Using Deep Generative Models
DOI: https://doi.org/10.70917/ijcisim-2026-0086

Keywords: deep generation technique; cascade classifier; loss function; BPFA; animated character generation

Abstract
The continuous development of deep generation technology has improved the efficiency and quality of animated character generation. In this paper, we construct a cascade classifier based on deep generation technology, use nearest-neighbor differencing to adjust the size of the expression region, and complete feature screening to determine the facial expression feature set. A loss function is designed to maintain the style consistency of the generated expression images. A foreground mask mechanism and an expression magnitude discriminator are introduced into the expression editing model to improve the quality of expression generation. A behavioral probabilistic finite automaton (BPFA) is used to constrain the uncertain behavior generation of animated characters, and probabilistic calculation improves the fitness between generated behaviors and expressions. The study shows that the animated character generation frame rate of this paper's method exceeds 90 frames/s, the texture volume exceeds 60 MB, and accuracy remains high across 4 viewing angles. During expression editing, the method achieves stable convergence in only 59 iterations and extracts facial feature points best at the frontal angle. The synchronization between the generated movie-level animated character expressions and behaviors exceeds 90% in all cases.
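The abstract describes constraining behavior generation with a behavioral probabilistic finite automaton (BPFA) and scoring the fitness of generated sequences by probabilistic calculation. The paper's actual automaton is not reproduced here, so the following is only a minimal illustrative sketch of the general BPFA idea: states and transition probabilities are hypothetical placeholders, and a sequence is accepted when its transition-probability product exceeds a fitness threshold.

```python
import random

# Hypothetical BPFA: states are behavior phases; each transition carries a
# probability. These states and probabilities are illustrative only and do
# not come from the paper.
TRANSITIONS = {
    "idle": [("walk", 0.6), ("wave", 0.4)],
    "walk": [("idle", 0.3), ("run", 0.7)],
    "run":  [("walk", 0.5), ("idle", 0.5)],
    "wave": [("idle", 1.0)],
}

def sequence_probability(seq):
    """Product of transition probabilities along a behavior sequence.

    A transition absent from the automaton contributes probability 0,
    so sequences that leave the automaton are rejected outright.
    """
    prob = 1.0
    for cur, nxt in zip(seq, seq[1:]):
        prob *= dict(TRANSITIONS[cur]).get(nxt, 0.0)
    return prob

def generate(start="idle", length=5, rng=random):
    """Sample a behavior sequence by walking the automaton."""
    seq = [start]
    for _ in range(length - 1):
        states, weights = zip(*TRANSITIONS[seq[-1]])
        seq.append(rng.choices(states, weights=weights)[0])
    return seq

def is_admissible(seq, threshold=0.01):
    """Accept a sequence only if its probability meets a fitness threshold."""
    return sequence_probability(seq) >= threshold
```

In this sketch, constraining generation amounts to sampling candidate sequences with `generate` and keeping only those that `is_admissible` accepts; the threshold plays the role of the fitness criterion mentioned in the abstract.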
License
Copyright (c) 2026 Wei Peng

This work is licensed under a Creative Commons Attribution 4.0 International License.