I’ve made an experimental AR app that turns drawings on paper into a controllable 2D game character. It recognizes a blank worksheet; in this prototype, it is trained to look for this one:
The worksheet can be printed at any size. It has 12 boxes to draw the different states of your character: 1 frame for an Idle, 7 frames for a Run, and 4 frames for a Jump sequence. Users can use any medium they want to generate what are effectively sprites on a physical sprite sheet. The app uses a mobile device’s camera (in this case an iPhone 6S+) to look for the worksheet. The PIP window in the lower right corner of the screen indicates how much of the worksheet is in view; when the entire worksheet is visible to the camera, the preview turns green. Tapping the preview takes a picture, crops the image, neutralizes the camera’s perspective, and renders the result down to a 1024×1024 Texture2D. A predefined sprite sheet’s texture is replaced with this newly generated texture and rendered right back onto the paper.
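The app’s actual capture code isn’t shown here, but the perspective-neutralization step amounts to solving a homography from the worksheet’s four tracked corners to a square texture. A minimal numpy sketch, assuming the corner coordinates come from the tracker (the values below are made up for illustration):

```python
import numpy as np

def perspective_matrix(src, dst):
    """Solve for the 3x3 homography mapping four src corners to four dst corners."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y])
        b.extend([u, v])
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

# Hypothetical corner positions of the worksheet in the captured photo,
# mapped onto the square 1024x1024 output texture.
corners = [(212, 180), (880, 240), (840, 950), (160, 900)]
target = [(0, 0), (1024, 0), (1024, 1024), (0, 1024)]
H = perspective_matrix(corners, target)
```

Warping each output pixel back through the inverse of `H` and sampling the photo yields the flattened 1024×1024 sprite sheet.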
In the lower left corner of the screen, a D-pad controls the animation. By default, the animation plays in place on the worksheet, but the sprite could conceivably be used in a 2D/2.5D game like an endless runner. A small button in the upper right corner snaps the character to the camera and moves the sprite relative to it to simulate what that might feel like.
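The worksheet’s 12 boxes map to per-state frame ranges on the generated sprite sheet, and the D-pad just selects which range plays. A sketch of that frame sequencing (state names and frame rate are my assumptions, not the app’s actual code):

```python
# The 12 worksheet boxes laid out as per-state frame index ranges:
# 1 idle frame, 7 run frames, 4 jump frames.
FRAMES = {
    "idle": [0],
    "run": list(range(1, 8)),
    "jump": list(range(8, 12)),
}

def frame_at(state, elapsed, fps=12):
    """Return the sprite-sheet frame index for a state at a given elapsed time."""
    frames = FRAMES[state]
    return frames[int(elapsed * fps) % len(frames)]
```

Looping the run range while the D-pad is held, and playing the jump range once on a jump press, reproduces the behavior described above.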
Some Takeaway Thoughts:
- This approach strips away a lot of the technical sophistication associated with learning animation. You don’t need to learn how to create frames or set timecodes or visualize onion skinning. It’s focused on drawing and makes animation extremely accessible. This seems like a great tool for education.
- Mixed media experiences like this feel extremely magical. They encourage creativity and experimentation.
- As I made new characters, using the app as a “pencil test” was so easy it became a very natural part of my workflow.
- Another approach to paper-based character creation is to create a coloring book for a character’s body parts: head, arms, torso, legs. By coloring in the boundaries, the app could assemble each part into a 2D puppet that is outfitted with premade animations. The task would be specifically focused on designing the look of a character, not creating its keyframes.
- Crayola’s Color Alive books utilize a similar technique, but map the user’s coloring to a 3D model’s diffuse map. Crayola has sold an entire series of products around this experience tailored to licensed properties.
- This technique could extend to map creation. The user could draw a 2D arena to be used for multiplayer games like Smash Bros or Worms/Gunbound. If the app can reliably generate an alpha mask from the drawing, we can automagically generate collision. This mesh could easily be made destructible as well.
- 2D image-based tracking is super fast and much, much more reliable than object recognition.
- The entire page is being tracked, not just the b&w photograph at the top, so the photograph does not need to be that large.
- The app is not limited to one worksheet. It is not limited to 12 frames.
- Vuforia’s license does not cover distributing standalone PC/Mac builds. There are workarounds, but they cost a few hundred dollars.
- Vuforia does not support WebGL for web builds. Boo!
- This technology is patent pending.
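On the map-creation idea above: generating collision from a drawing reduces to thresholding the scanned page into an alpha mask and then downsampling that mask into solid cells. A rough numpy sketch (the threshold value and cell size are guesses; real lighting would need calibration):

```python
import numpy as np

def alpha_mask(gray, threshold=200):
    """Treat dark pen strokes on white paper as solid; everything else is empty."""
    return gray < threshold

def solid_cells(mask, cell=8):
    """Downsample the mask into a coarse collision grid:
    a cell is solid if any pixel inside it is drawn."""
    h, w = mask.shape
    grid = mask[: h - h % cell, : w - w % cell]
    grid = grid.reshape(h // cell, cell, w // cell, cell)
    return grid.any(axis=(1, 3))
```

Destruction then becomes clearing cells (or mask pixels) inside a blast radius and regenerating the collision grid for the affected region.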