HIGH RESOLUTION ANIMATED SCENES FROM STILLS

Current techniques for generating animated scenes involve either videos (whose resolution is limited) or a single image (which requires a significant amount of user interaction). We describe a system that allows the user to quickly and easily produce a compelling-looking animation from a small collection of high resolution stills. Our system has two unique features. First, it applies an automatic partial temporal order recovery algorithm to the stills in order to approximate the original scene dynamics. The output sequence is subsequently extracted using a second-order Markov chain model. Second, a region with large motion variation can be automatically decomposed into semi-independent regions whose temporal orderings are softly constrained, which ensures motion smoothness throughout the original region. The final animation is obtained by frame interpolation and feathering. Our system also provides a simple-to-use interface that helps the user fine-tune the motion of the animated scene. Using our system, an animated scene can be generated in minutes. We show results for a variety of scenes.

Existing System:

A single picture conveys a lot of information about the scene, but it rarely conveys the scene's true dynamic nature. A video effectively does both, but it is limited in resolution. Off-the-shelf camcorders can capture video at a resolution of 720 × 480 at 30 fps, but this pales in comparison to consumer digital cameras, whose resolution can be as high as 16 megapixels.
What if we wish to produce a high resolution animated scene that reasonably reflects the true dynamic nature of the scene? Video textures would be the perfect solution for producing arbitrarily long video sequences, if only very high resolution camcorders existed.
Existing systems that animate a single image are capable of generating compelling-looking animated scenes, but they have a major drawback: they require a considerable amount of manual input. Furthermore, since the animation is specified completely manually, it might not reflect the true scene dynamics.

Proposed System:

We describe a scene animation system that can easily generate a video or video texture from a small collection of stills (typically, 10 to 20 stills are captured within 1 to 2 minutes, depending on the complexity of the scene motion). Our system first builds a graph that links similar images.
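
To make the graph-building step concrete, the following C# sketch shows one simple way such a similarity graph could be constructed. It is only a minimal illustration: it assumes the stills have already been registered and converted into grayscale float arrays of equal length, and the RMS distance measure, the threshold parameter, and the names SimilarityGraph, FrameDistance, and Build are illustrative choices rather than part of our actual implementation.

using System;
using System.Collections.Generic;

// Illustrative sketch: link stills whose pixel-wise difference is small.
// Frames are assumed pre-registered, grayscale, and flattened into float
// arrays of equal length; the threshold is an illustrative parameter.
class SimilarityGraph
{
    // Root-mean-square difference between two frames.
    static double FrameDistance(float[] a, float[] b)
    {
        double sum = 0.0;
        for (int i = 0; i < a.Length; i++)
        {
            double d = a[i] - b[i];
            sum += d * d;
        }
        return Math.Sqrt(sum / a.Length);
    }

    // Returns an adjacency list: graph[i] holds the indices of stills
    // that are visually similar to still i.
    public static List<int>[] Build(float[][] stills, double threshold)
    {
        int n = stills.Length;
        var graph = new List<int>[n];
        for (int i = 0; i < n; i++) graph[i] = new List<int>();

        for (int i = 0; i < n; i++)
            for (int j = i + 1; j < n; j++)
                if (FrameDistance(stills[i], stills[j]) < threshold)
                {
                    graph[i].Add(j);   // similarity is symmetric,
                    graph[j].Add(i);   // so link both directions
                }
        return graph;
    }
}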
The system then recovers partial temporal orders among the input images and uses a second-order Markov chain model to generate an image sequence for the video or video texture (Fig. 1). Our system is designed to allow the user to easily fine-tune the animation. For example, the user has the option to manually specify regions where animation occurs independently (which we term independent animated regions, or IARs) so that different time instances of each IAR can be used independently.
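
The sketch below illustrates the idea of second-order sequence generation, in which the choice of the next still depends on the previous two. It is a simplified stand-in rather than the exact model used in our system: the cost function (distance from the current still, doubled when a transition would immediately return to the previous still), the exponential weighting, and the names SecondOrderChain and Generate are assumptions made only for illustration.

using System;
using System.Collections.Generic;

// Illustrative sketch of second-order Markov chain sampling: the next
// still is drawn with probabilities that depend on the previous two
// stills in the output sequence. `dist` is a symmetric matrix of
// pairwise frame distances; `sigma` controls transition randomness.
class SecondOrderChain
{
    public static int[] Generate(double[,] dist, int length, double sigma, Random rng)
    {
        int n = dist.GetLength(0);
        var sequence = new List<int> { 0, Math.Min(1, n - 1) };

        while (sequence.Count < length)
        {
            int curr = sequence[sequence.Count - 1];
            int prev = sequence[sequence.Count - 2];

            // Turn transition costs into unnormalized probabilities.
            var weights = new double[n];
            double total = 0.0;
            for (int k = 0; k < n; k++)
            {
                if (k == curr) continue;            // no self-transition
                double cost = dist[curr, k];
                if (k == prev) cost *= 2.0;         // discourage ping-ponging
                weights[k] = Math.Exp(-cost / sigma);
                total += weights[k];
            }

            // Sample the next still proportionally to its weight.
            double r = rng.NextDouble() * total;
            int next = curr;
            for (int k = 0; k < n; k++)
            {
                r -= weights[k];
                if (r <= 0.0 && k != curr) { next = k; break; }
            }
            sequence.Add(next);
        }
        return sequence.ToArray();
    }
}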
An IAR with large motion variation can further be automatically decomposed into semi-independent animated regions (SIARs) in order to make the motion appear more natural. The user also has the option to modify the dynamics (e.g., speed up or slow down the motion, or choose different motion parameters) through a simple interface.
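
As a rough illustration of how an IAR with large motion variation might be decomposed, the sketch below recursively bisects the region until the spread of per-pixel temporal variance inside each part drops below a threshold. The bisection strategy, the variance-spread criterion, and the names RegionSplitter, VarianceSpread, and Split are assumptions for illustration only; in particular, the soft constraints on the SIARs' temporal orderings are not modeled here.

using System;
using System.Collections.Generic;

// Illustrative sketch: split a region into sub-regions (candidate SIARs)
// whose motion variation is roughly uniform. `stills` is an array of
// grayscale frames stored as 2D float arrays.
class RegionSplitter
{
    // Rectangle in frame coordinates (x, y, width, height).
    public record Rect(int X, int Y, int W, int H);

    // Spread (max minus min) of per-pixel temporal variance inside `r`.
    static double VarianceSpread(float[][,] stills, Rect r)
    {
        double min = double.MaxValue, max = double.MinValue;
        for (int y = r.Y; y < r.Y + r.H; y++)
            for (int x = r.X; x < r.X + r.W; x++)
            {
                double mean = 0, sq = 0;
                foreach (var s in stills) { mean += s[y, x]; sq += s[y, x] * s[y, x]; }
                mean /= stills.Length;
                double v = sq / stills.Length - mean * mean;
                min = Math.Min(min, v);
                max = Math.Max(max, v);
            }
        return max - min;
    }

    public static List<Rect> Split(float[][,] stills, Rect r, double maxSpread, int minSize)
    {
        var result = new List<Rect>();
        if (VarianceSpread(stills, r) <= maxSpread || Math.Min(r.W, r.H) < 2 * minSize)
        {
            result.Add(r);            // motion variation is uniform enough
            return result;
        }
        // Bisect along the longer side and recurse on both halves.
        if (r.W >= r.H)
        {
            result.AddRange(Split(stills, new Rect(r.X, r.Y, r.W / 2, r.H), maxSpread, minSize));
            result.AddRange(Split(stills, new Rect(r.X + r.W / 2, r.Y, r.W - r.W / 2, r.H), maxSpread, minSize));
        }
        else
        {
            result.AddRange(Split(stills, new Rect(r.X, r.Y, r.W, r.H / 2), maxSpread, minSize));
            result.AddRange(Split(stills, new Rect(r.X, r.Y + r.H / 2, r.W, r.H - r.H / 2), maxSpread, minSize));
        }
        return result;
    }
}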
Finally, all regions are frame interpolated and feathered at their boundaries to produce the final animation. The user needs only a few minutes of interaction to finish the whole process. In our work, we limit our scope to quasi-periodic motion, i.e., dynamic textures.
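
The final compositing step can be pictured with the simplified sketch below: consecutive stills are linearly cross-faded, and each animated region is blended into the rest of the frame with weights that ramp down near the region boundary. Linear blending, the chessboard-distance feather, and the names Compositor, Interpolate, and Feather are illustrative simplifications rather than the exact interpolation and feathering used in our system.

using System;

// Illustrative sketch of frame interpolation and boundary feathering.
// Frames are grayscale 2D float arrays; `mask` marks the animated region.
class Compositor
{
    // Linear cross-fade between two consecutive stills; t in [0, 1].
    public static float[,] Interpolate(float[,] a, float[,] b, float t)
    {
        int h = a.GetLength(0), w = a.GetLength(1);
        var result = new float[h, w];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                result[y, x] = (1 - t) * a[y, x] + t * b[y, x];
        return result;
    }

    // Blend a region into the background with weights that fall off
    // linearly within `featherWidth` pixels of the mask boundary.
    public static float[,] Feather(float[,] background, float[,] region, bool[,] mask, int featherWidth)
    {
        int h = background.GetLength(0), w = background.GetLength(1);
        var output = new float[h, w];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
            {
                float alpha = 0f;
                if (mask[y, x])
                {
                    int d = DistanceToBoundary(mask, x, y, featherWidth);
                    alpha = Math.Min(1f, d / (float)featherWidth);
                }
                output[y, x] = alpha * region[y, x] + (1 - alpha) * background[y, x];
            }
        return output;
    }

    // Chessboard distance from (x, y) to the nearest pixel outside the
    // mask, clamped to `limit`.
    static int DistanceToBoundary(bool[,] mask, int x, int y, int limit)
    {
        int h = mask.GetLength(0), w = mask.GetLength(1);
        for (int d = 1; d <= limit; d++)
            for (int dy = -d; dy <= d; dy++)
                for (int dx = -d; dx <= d; dx++)
                {
                    int ny = y + dy, nx = x + dx;
                    if (ny < 0 || ny >= h || nx < 0 || nx >= w || !mask[ny, nx])
                        return d;   // boundary reached within d pixels
                }
        return limit;               // deep inside the region
    }
}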

Software Requirements:
Framework – .NET
Front End – ASP.Net
Language – C#.Net
Back End – SQL Server
Operating System – Windows XP

Hardware Requirements:
RAM: 512 MB
Hard Disk: 80 GB
Processor: Pentium IV

