LEGO has a stop-motion app, and the last time I used it, it was thoroughly average: minimal features for a stop-motion app. Onion skinning, variable framerates, and I think that was it. Then I had an idea: LEGO heads printed with multiple faces, for the sake of animation! Turns out they already do that. Wait, a better idea! Several LEGO heads, each printed with a different phoneme mouth shape, so you can make your brickfilms look like they're talking! That... seems like something only a hardcore brickfilmer would bother buying.
And then I got Snapchat, and I had a bunch of ideas all at once. See, Snapchat has something called Lenses. Basically, the app detects your face and then does something weird to it; one Lens will detect the "eye" area and then redraw the face so your eyes appear huge. Another Lens can swap your face for a face from a photo in your camera roll, even going so far as to animate the mouth moving.
So, what if we applied this kind of technology to LEGO? First, you produce a quality stop-motion application: forward- and backward-adjustable onion skinning, variable framerates...
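To make "adjustable onion skinning" concrete, here's a minimal sketch of the blend, assuming OpenCV-style numpy frames; the function name and the falloff parameter are my own invention, not anything from LEGO's app:

```python
import numpy as np

def onion_skin(live, ghost_frames, alpha=0.35, falloff=0.5):
    """Blend ghost frames over the live view, most recent strongest.

    `ghost_frames` is ordered oldest-to-newest; pass previously captured
    frames for backward skinning, or reference frames for forward skinning.
    """
    out = live.astype(np.float32)
    a = alpha
    for frame in reversed(ghost_frames):   # newest ghost first
        out = out * (1 - a) + frame.astype(np.float32) * a
        a *= falloff                       # older ghosts fade further
    return np.clip(out, 0, 255).astype(np.uint8)
```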
Then you introduce the Snapchat technology. You print a LEGO head with some detectable characteristic. Probably an array of simple machine-readable green-screen dots, which can be easily filled in while maintaining the light/shadow balance of your original film. Whatever it ends up being, we'll call it the d-face, for "detectable face". While you're filming, the app gives you that little square to let you know it's tracking the d-face. Then, once you've completed an animation with the d-face detection on, you can go into the FACE ANIMATION menu.
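To make the tracking concrete, here's a minimal sketch of how an app might find a green dot array in a frame, assuming OpenCV; the HSV threshold values and the minimum-dot heuristic are guesses, not anything LEGO or Snapchat actually does:

```python
import cv2

def find_dface(frame_bgr):
    """Return a bounding box around the green marker dots, or None."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Rough chroma-green band; a real app would calibrate per lighting setup.
    mask = cv2.inRange(hsv, (45, 80, 80), (75, 255, 255))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    dots = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 4]
    if len(dots) < 3:  # too few dots to trust; the head is probably off-camera
        return None
    xs = [x for x, _, w, _ in dots] + [x + w for x, _, w, _ in dots]
    ys = [y for _, y, _, h in dots] + [y + h for _, y, _, h in dots]
    return min(xs), min(ys), max(xs), max(ys)  # the little tracking square
```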
There would be a couple of faces in this app: basic simple face, face with painted red lips, cocky smirking face, skull face, et cetera. Each face has a gamut of emotions. At launch, you'd probably want four: happy, sad, angry, surprised. There would be ways to automate transitioning between emotions, or even between faces. These faces would get overlaid on the d-face, snapping to it and rotating in space, much like some of the more complicated Lenses do in Snapchat.
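Here's a sketch of how the face catalog and the automated transitions might be modeled; the face library, sprite paths, and keyframe format are all hypothetical:

```python
EMOTIONS = ("happy", "sad", "angry", "surprised")

# Hypothetical face catalog: each style has one sprite per emotion.
FACES = {
    "basic": {e: f"faces/basic_{e}.png" for e in EMOTIONS},
    "smirk": {e: f"faces/smirk_{e}.png" for e in EMOTIONS},
    "skull": {e: f"faces/skull_{e}.png" for e in EMOTIONS},
}

def sprite_for_frame(style, keyframes, frame_index):
    """Pick a sprite given a sorted list of (frame, emotion) keyframes.

    The face holds its last keyframed emotion until the next keyframe;
    an automated transition would cross-fade between the two sprites
    over a few frames instead of cutting.
    """
    current = EMOTIONS[0]
    for frame, emotion in keyframes:
        if frame > frame_index:
            break
        current = emotion
    return FACES[style][current]

# e.g. sprite_for_frame("basic", [(0, "happy"), (24, "surprised")], 30)
# -> "faces/basic_surprised.png"
```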
"But... couldn't that be accomplished with two heads printed with each face? Brickfilms are already natively inflexible, so the audience will grant the animator a bit of leeway with facial expressions. Why would I want this?"
Good question, surprisingly astute imaginary reader. Here's the payoff: using that same Snapchat tech, while you're authoring a face animation, you could record dialog with the front-facing camera active. Snapchat can detect a fair bit of mouth movement, which the app would then translate into phonemic animations on your character, creating a smoother, more accurate speaking animation.
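A rough sketch of the mouth-tracking half, assuming a 68-point facial-landmark model (dlib-style indexing) in place of Snapchat's actual internals, which aren't public; the viseme buckets and thresholds are invented:

```python
VISEMES = ("closed", "small", "open", "wide")  # roughly M/B, E, A, O

def mouth_to_viseme(landmarks):
    """Map 68-point face landmarks to a coarse viseme name.

    Uses the inner-lip points: 60/64 are the mouth corners,
    62/66 the top and bottom centers.
    """
    height = abs(landmarks[66][1] - landmarks[62][1])
    width = abs(landmarks[64][0] - landmarks[60][0]) or 1
    openness = height / width  # mouth aspect ratio
    if openness < 0.05:
        return "closed"
    if openness < 0.20:
        return "small"
    if openness < 0.50:
        return "open"
    return "wide"
```

Run that against each frame of the front-camera dialog recording and you get a viseme track you can stamp onto the d-face, one mouth sprite per animation frame.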
Final idea, pretty crazy town: wave-marking. In iMovie, you can pull the audio out of a video clip, which gives you the waveform for that audio. We'd use the same technology to give you a visual "mark" to deliver your animation to: annotate your audio so you know exactly how many frames you have to hit a particular sound. Helpful for animating to music!
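The frame math behind wave-marking is pleasantly simple; here's a sketch, where the mark format and the helper are invented:

```python
def marks_to_frames(mark_times, fps):
    """mark_times: sorted (seconds, label) pairs placed on the waveform.

    Returns each mark's frame number, plus how many frames the animator
    has between consecutive marks.
    """
    frames = [(round(t * fps), label) for t, label in mark_times]
    spans = [(a_label, b_label, b - a)
             for (a, a_label), (b, b_label) in zip(frames, frames[1:])]
    return frames, spans

# At 12 fps (a common brickfilm rate), a downbeat at 2.5 s lands on frame 30:
# marks_to_frames([(0.0, "start"), (2.5, "downbeat")], fps=12)
# -> ([(0, "start"), (30, "downbeat")], [("start", "downbeat", 30)])
```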