The teacher and the student
Our method revolves around a technique called knowledge distillation, which uses a “teacher–student” model training strategy. We start with a “teacher” — a large, powerful, pre-trained generative model that is an expert at creating the desired visual effect but is far too slow for real-time use. The type of teacher model varies depending on the goal. Initially, we used a custom-trained StyleGAN2 model, trained on our curated dataset for real-time facial effects. This model could be paired with tools like StyleCLIP, which allowed it to manipulate facial features based on text descriptions. This provided a strong foundation. As our project advanced, we transitioned to more sophisticated generative models like Google DeepMind’s Imagen. This strategic shift significantly enhanced our capabilities, enabling higher-fidelity and more diverse imagery, greater creative control, and a broader range of styles for our on-device generative AI effects.
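The core idea of distillation is that the student is trained to imitate the teacher's output rather than the original training labels. The sketch below illustrates that objective with hypothetical stand-in functions (the real teacher and student are large neural networks, not the toy pixel maps used here); the loss is simply the pixel-wise error between the two models' outputs.

```python
import numpy as np

def teacher_effect(image: np.ndarray) -> np.ndarray:
    """Stand-in for the slow, high-quality teacher (e.g. a StyleGAN2 or
    diffusion model applying the visual effect). Here: a fixed tone map."""
    return np.clip(image * 0.8 + 0.1, 0.0, 1.0)

def student_effect(image: np.ndarray, params: np.ndarray) -> np.ndarray:
    """Stand-in for the lightweight on-device student; here just a
    per-pixel affine map with two learnable parameters."""
    scale, shift = params
    return np.clip(image * scale + shift, 0.0, 1.0)

def distillation_loss(image: np.ndarray, params: np.ndarray) -> float:
    """Pixel-wise MSE between student and teacher outputs: the student
    learns to reproduce the teacher's effect."""
    return float(np.mean((student_effect(image, params) - teacher_effect(image)) ** 2))

rng = np.random.default_rng(0)
batch = rng.random((4, 64, 64, 3))  # a batch of input images in [0, 1)

loss_untrained = distillation_loss(batch, np.array([1.0, 0.0]))
loss_matched = distillation_loss(batch, np.array([0.8, 0.1]))  # mimics teacher exactly
print(loss_untrained, loss_matched)
```

In practice the student's parameters are fitted by gradient descent on this loss over a large dataset of inputs, often combined with perceptual or adversarial terms.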
The “student” is the model that ultimately runs on the user’s device. It must be small, fast, and efficient. We designed a student model with a UNet-based architecture, which is well suited to image-to-image tasks. It uses a MobileNet backbone as its encoder, a design known for its efficiency on mobile devices, paired with a decoder that also uses MobileNet blocks.
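One way to see why MobileNet blocks suit on-device inference is to count weights. MobileNet replaces a standard k×k convolution with a depthwise-separable factorization: a k×k depthwise filter per input channel, followed by a 1×1 pointwise convolution that mixes channels. A quick sketch (the channel counts are illustrative, not the student model's actual configuration):

```python
def standard_conv_params(c_in: int, c_out: int, k: int = 3) -> int:
    """Weight count of a standard k×k convolution (bias ignored)."""
    return k * k * c_in * c_out

def separable_conv_params(c_in: int, c_out: int, k: int = 3) -> int:
    """Weight count of a depthwise-separable convolution, the core
    MobileNet building block: k×k depthwise + 1×1 pointwise."""
    return k * k * c_in + c_in * c_out

c_in, c_out = 128, 128
std = standard_conv_params(c_in, c_out)   # 147,456 weights
sep = separable_conv_params(c_in, c_out)  # 17,536 weights
print(f"standard: {std}, separable: {sep}, saving: {std / sep:.1f}x")
```

At these channel counts the separable form needs roughly 8x fewer weights (the saving approaches k² plus a channel-mixing term as channels grow), which translates directly into a smaller, faster student on mobile hardware.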
