From Wikipedia, the free encyclopedia
Further information: Animation and Computer-generated imagery
An example of computer animation produced using the motion capture technique
Computer animation is the process used for generating animated images by using computer graphics. The more general term computer-generated imagery encompasses both static scenes and dynamic images, while computer animation refers only to moving images. Modern computer animation usually uses 3D computer graphics, although 2D computer graphics are still used for stylistic, low-bandwidth, and faster real-time renderings. Sometimes the target of the animation is the computer itself; sometimes it is another medium, such as film. Computer animation is essentially a digital successor to the stop-motion techniques of traditional animation, applied to 3D models and frame-by-frame animation of 2D illustrations. Computer-generated animation is more controllable than more physically based processes, such as constructing miniatures for effects shots or hiring extras for crowd scenes, and it allows the creation of images that would not be feasible with any other technology. It can also allow a single graphic artist to produce such content without actors, expensive set pieces, or props. To create the illusion of movement, an image is displayed on the computer screen and repeatedly replaced by a new image that is similar to it but advanced slightly in time (usually at a rate of 24 or 30 frames per second). This technique is identical to how the illusion of movement is achieved with television and motion pictures.
For 3D animations, objects (models) are built on the computer monitor (modeled) and 3D figures are rigged with a virtual skeleton. For 2D figure animations, separate objects (illustrations) and separate transparent layers are used, with or without a virtual skeleton. Then the limbs, eyes, mouth, clothes, etc. of the figure are moved by the animator on key frames. The differences in appearance between key frames are automatically calculated by the computer in a process known as tweening or morphing. Finally, the animation is rendered. For 3D animations, all frames must be rendered after modeling is complete. For 2D vector animations, the rendering process is the key frame illustration process, while tweened frames are rendered as needed. For prerecorded presentations, the rendered frames are transferred to a different format or medium such as film or digital video. The frames may also be rendered in real time as they are presented to the end-user audience. Low-bandwidth animations transmitted via the internet (e.g. 2D Flash, X3D) often use software on the end user's computer to render in real time as an alternative to streaming or pre-loaded high-bandwidth animations.
Computer animation example
The screen is blanked to a background color, such as black. Then, a goat is drawn on the right of the screen. Next, the screen is blanked, but the goat is re-drawn or duplicated slightly to the left of its original position. This process is repeated, each time moving the goat a bit to the left. If this process is repeated fast enough, the goat will appear to move smoothly to the left. This basic procedure is used for all moving pictures in films and television. The moving goat is an example of shifting the location of an object. More complex transformations of object properties, such as size, shape, and lighting effects, often require calculations and computer rendering instead of simple re-drawing or duplication.
To trick the eye and brain into thinking they are seeing a smoothly moving object, the pictures should be drawn at around 12 frames per second (frame/s) or faster (a frame is one complete image). With rates above 70 frames/s, no improvement in realism or smoothness is perceivable due to the way the eye and brain process images. At rates below 12 frames/s, most people can detect jerkiness associated with the drawing of new images, which detracts from the illusion of realistic movement. Conventional hand-drawn cartoon animation often uses 15 frames/s in order to save on the number of drawings needed, but this is usually accepted because of the stylized nature of cartoons. Because it produces more realistic imagery, computer animation demands higher frame rates to reinforce this realism. Movie film seen in theaters in the United States runs at 24 frames per second, which is sufficient to create the illusion of continuous movement.
History
Main article: History of computer animation
See also: Timeline of computer animation in film and television
Some of the earliest animation done using a digital computer was produced at Bell Telephone Laboratories in the first half of the 1960s by Edward E. Zajac, Frank W. Sinden, Kenneth C. Knowlton, and A. Michael Noll. Early digital animation was also done at Lawrence Livermore Laboratory.
Another early step in the history of computer animation was the 1973 movie Westworld, a science-fiction film about a society in which robots live and work among humans, though the first use of 3D wireframe imagery was in its sequel, Futureworld (1976), which featured a computer-generated hand and face created by then University of Utah graduate students Edwin Catmull and Fred Parke. Developments in CGI technologies are reported each year at SIGGRAPH, an annual conference on computer graphics and interactive techniques, attended each year by tens of thousands of computer professionals. Developers of computer games and 3D video cards strive to achieve the same visual quality on personal computers in real time as is possible for CGI films and animation. With the rapid advancement of real-time rendering quality, artists began to use game engines to render non-interactive movies. This art form is called machinima. The first feature-length computer-animated film was the 1995 movie Toy Story by Pixar. It followed an adventure centered around some toys and their owners. The groundbreaking film was the first of many fully computer-animated films. Computer animation helped make blockbuster films such as Toy Story 3 (2010), Avatar (2009), Shrek 2 (2004), and Cars 2 (2011).
Methods of animating virtual characters
In this .gif of a 2D Flash animation, each 'stick' of the figure is keyframed over time to create motion.
In most 3D computer animation systems, an animator creates a simplified representation of a character's anatomy, analogous to a skeleton or stick figure. The position of each segment of the skeletal model is defined by animation variables, or Avars. In human and animal characters, many parts of the skeletal model correspond to actual bones, but skeletal animation is also used to animate other things, such as facial features (though other methods for facial animation exist). The character "Woody" in Toy Story, for example, uses 700 Avars, including 100 Avars in the face. The computer does not usually render the skeletal model directly (it is invisible), but uses the skeletal model to compute the exact position and orientation of the character, which is eventually rendered into an image. Thus by changing the values of Avars over time, the animator creates motion by making the character move from frame to frame. There are several methods for generating the Avar values to obtain realistic motion. Traditionally, animators manipulate the Avars directly. Rather than set Avars for every frame, they usually set Avars at strategic points (frames) in time and let the computer interpolate or 'tween' between them, a process called keyframing. Keyframing puts control in the hands of the animator, and has roots in hand-drawn traditional animation.
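The keyframe-and-tween idea above can be sketched in a few lines of Python. The `tween` function and the keyframe values below are illustrative, not part of any real animation package; production systems typically use spline interpolation rather than the linear blend shown here.

```python
# A minimal sketch of keyframe interpolation ("tweening") for a single
# animation variable (Avar). Keyframe numbers and values are hypothetical.
def tween(keyframes, frame):
    """Linearly interpolate an Avar value at `frame` from sparse keyframes.

    `keyframes` is a list of (frame_number, value) pairs sorted by frame.
    """
    # Clamp requests outside the keyframed range.
    if frame <= keyframes[0][0]:
        return keyframes[0][1]
    if frame >= keyframes[-1][0]:
        return keyframes[-1][1]
    # Find the surrounding pair of keyframes and blend between them.
    for (f0, v0), (f1, v1) in zip(keyframes, keyframes[1:]):
        if f0 <= frame <= f1:
            t = (frame - f0) / (f1 - f0)
            return v0 + t * (v1 - v0)

# A hypothetical elbow-rotation Avar keyed at frames 0, 12, and 24:
keys = [(0, 0.0), (12, 90.0), (24, 45.0)]
print(tween(keys, 6))   # halfway between 0 and 90 -> 45.0
```

The animator sets only three values; the computer fills in every frame in between, which is exactly the division of labor keyframing provides.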
In contrast, a newer method called motion capture makes use of live action. When computer animation is driven by motion capture, a real performer acts out the scene as if they were the character to be animated. His or her motion is recorded to a computer using video cameras and markers, and that performance is then applied to the animated character. Each method has its advantages, and as of 2007, games and films are using either or both of these methods in productions. Keyframe animation can produce motions that would be difficult or impossible to act out, while motion capture can reproduce the subtleties of a particular actor. For example, in the 2006 film Pirates of the Caribbean: Dead Man's Chest, actor Bill Nighy provided the performance for the character Davy Jones. Even though Nighy himself does not appear in the film, the movie benefited from his performance by recording the nuances of his body language, posture, facial expressions, etc. Thus motion capture is appropriate in situations where believable, realistic behavior and action is required, but the types of characters required exceed what can be done through conventional costuming.
Creating characters and objects on a computer
3D computer animation combines 3D models of objects and programmed or hand-"keyframed" movement. Models are constructed out of geometrical vertices, faces, and edges in a 3D coordinate system. Objects are sculpted much like real clay or plaster, working from general forms to specific details with various sculpting tools. A bone/joint animation system is set up to deform the CGI model (e.g., to make a humanoid model walk). In a process called rigging, the virtual marionette is given various controllers and handles for controlling movement. Animation data can be created using motion capture, or keyframing by a human animator, or a combination of the two. 3D models rigged for animation may contain thousands of control points; for example, the character "Woody" in Pixar's movie Toy Story uses 700 specialized animation controllers. Rhythm and Hues Studios labored for two years to create Aslan in the movie The Chronicles of Narnia: The Lion, the Witch and the Wardrobe, which had about 1851 controllers, 742 in the face alone. In the 2004 film The Day After Tomorrow, designers had to design forces of extreme weather with the help of video references and accurate meteorological facts. For the 2005 remake of King Kong, actor Andy Serkis was used to help designers pinpoint the gorilla's prime location in the shots, and his expressions were used to model "human" characteristics onto the creature. Serkis had earlier provided the voice and performance for Gollum in the film trilogy of J. R. R. Tolkien's The Lord of the Rings.
Computer animation development equipment
Computer animation can be created with a computer and animation software. Some impressive animation can be achieved even with basic programs; however, the rendering can take a lot of time on an ordinary home computer. Because of this, video game animators tend to use low-resolution, low-polygon-count renders, such that the graphics can be rendered in real time on a home computer. Photorealistic animation would be impractical in this context. Professional animators of movies, television, and video sequences on computer games make photorealistic animation with high detail. This level of quality for movie animation would take tens to hundreds of years to create on a home computer. Many powerful workstation computers are used instead. Graphics workstation computers use two to four processors, and thus are a lot more powerful than a home computer and are specialized for rendering. A large number of workstations (known as a render farm) are networked together to effectively act as a giant computer. The result is a computer-animated movie that can be completed in about one to five years (this process does not consist solely of rendering, however). A workstation typically costs $2,000 to $16,000, with the more expensive stations being able to render much faster due to the more technologically advanced hardware that they contain. Professionals also use digital movie cameras, motion capture or performance capture, bluescreens, film editing software, props, and other tools for movie animation.
Modeling human faces
Main article: Computer facial animation
The modeling of human facial features is both one of the most challenging and most sought-after elements in computer-generated imagery. Computer facial animation is a highly complex field in which models typically include a very large number of animation variables. Historically, the first SIGGRAPH tutorials on the state of the art in facial animation, in 1989 and 1990, proved to be a turning point in the field by bringing together and consolidating multiple research elements, and they sparked interest among a number of researchers. The Facial Action Coding System (with 46 action units such as "lip bite" or "squint"), developed in 1976, became a popular basis for many systems. As early as 2001, MPEG-4 included 68 facial animation parameters for lips, jaws, etc., and the field has made significant progress since then as the use of facial microexpressions has increased.
In some cases, an affective space such as the PAD emotional state model can be used to assign specific emotions to the faces of avatars. In this approach, the PAD model is used as a high-level emotional space, and the lower-level space is the MPEG-4 Facial Animation Parameters (FAP). A mid-level Partial Expression Parameters (PEP) space is then used in a two-level structure: the PAD-PEP mapping and the PEP-FAP translation model.
The future
One open challenge in computer animation is the photorealistic animation of humans. Currently, most computer-animated movies show animal characters (A Bug's Life, Finding Nemo, Ratatouille, Ice Age, Over the Hedge, Open Season, Rio), fantasy characters (Monsters, Inc., Shrek, TMNT, Monsters vs. Aliens), anthropomorphic machines (Cars, WALL-E, Robots), or cartoon-like humans (The Incredibles, Despicable Me, Up). The movie Final Fantasy: The Spirits Within is often cited as the first computer-generated movie to attempt to show realistic-looking humans. However, due to the enormous complexity of the human body, human motion, and human biomechanics, realistic simulation of humans remains largely an open problem. Another problem is the distasteful psychological response to viewing nearly perfect animation of humans, known as "the uncanny valley." Photorealistic human animation is one of the "holy grails" of computer animation. Eventually, the goal is to create software with which an animator can generate a movie sequence showing a photorealistic human character undergoing physically plausible motion, together with clothes, photorealistic hair, a complicated natural background, and possibly interacting with other simulated human characters. This could be done so that the viewer is no longer able to tell whether a particular movie sequence is computer-generated or was created using real actors in front of movie cameras. Complete human realism is not likely to happen very soon, but when it does, it may have major repercussions for the film industry. For the moment, three-dimensional computer animation can be divided into two main directions: photorealistic and non-photorealistic rendering. Photorealistic computer animation can itself be divided into two subcategories: real photorealism (where performance capture is used in the creation of the virtual human characters) and stylized photorealism.
Real photorealism is what Final Fantasy tried to achieve, and it may in the future make possible live-action fantasy features such as The Dark Crystal without the use of advanced puppetry and animatronics, while Antz is an example of stylized photorealism (in the future, stylized photorealism may be able to replace traditional stop-motion animation, as in Corpse Bride, Coraline, and The Nightmare Before Christmas). None of these has been perfected yet, but progress continues. The non-photorealistic/cartoonish direction is more like an extension of traditional animation, an attempt to make the animation look like a three-dimensional version of a cartoon, still using and refining the main principles of animation articulated by the Nine Old Men, such as squash and stretch.
While a single frame from a photorealistic computer-animated feature will look like a photo if done right, a single frame from a cartoonish computer-animated feature will look like a painting (not to be confused with cel shading, which produces an even simpler look). Although films such as The Polar Express and Mars Needs Moms have made steps towards realism, the uncanny valley is still present. Some recent video games, however, most notably L.A. Noire, feature very convincing computer-animated human faces and movement.
Detailed examples and pseudocode
In 2D computer animation, moving objects are often referred to as "sprites." A sprite is an image that has a location associated with it. The location of the sprite is changed slightly between each displayed frame to make the sprite appear to move. The following pseudocode makes a sprite move from left to right:
var int x := 0, y := screenHeight / 2
while x < screenWidth
    drawBackground()
    drawSpriteAtXY(x, y)   // draw on top of the background
    x := x + 5             // move to the right
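The same loop can be run headlessly in Python. Since there is no real renderer here, the sketch records the sprite's position each frame instead of drawing it, and the screen dimensions are made up for illustration.

```python
# A runnable, headless version of the sprite loop: it records the sprite's
# (x, y) position each frame rather than drawing. Screen size is hypothetical.
screen_width, screen_height = 320, 240

def animate_sprite(step=5):
    x, y = 0, screen_height // 2
    positions = []
    while x < screen_width:
        # drawBackground() / drawSpriteAtXY(x, y) would go here in a renderer
        positions.append((x, y))
        x += step            # move the sprite to the right
    return positions

frames = animate_sprite()
print(len(frames), frames[0], frames[-1])  # 64 (0, 120) (315, 120)
```

Replaying the recorded positions at 24 or 30 frames per second would produce the smooth left-to-right motion the text describes.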
Computer animation uses different techniques to produce animations. Most frequently, sophisticated mathematics is used to manipulate complex three-dimensional polygons, apply "textures", lighting, and other effects to the polygons, and finally render the complete image. A sophisticated graphical user interface may be used to create the animation and arrange its choreography. Another technique, called constructive solid geometry, defines objects by conducting boolean operations on regular shapes, and has the advantage that animations may be accurately produced at any resolution.

Let's step through the rendering of a simple image of a room with flat wood walls and a grey pyramid in the center of the room. The pyramid will have a spotlight shining on it. Each wall, the floor, and the ceiling is a simple polygon, in this case a rectangle. Each corner of the rectangles is defined by three values referred to as X, Y and Z. X is how far left and right the point is, Y is how far up and down the point is, and Z is how far in and out of the screen the point is. The wall nearest us would be defined by four points (in the order x, y, z). Below is a representation of how the wall is defined:
(0, 10, 0)    (10, 10, 0)
(0, 0, 0)     (10, 0, 0)
The far wall would be:
(0, 10, 20)   (10, 10, 20)
(0, 0, 20)    (10, 0, 20)
The pyramid is made up of five polygons: the rectangular base and four triangular sides. To draw this image the computer uses math to calculate how to project this image, defined by three-dimensional data, onto a two-dimensional computer screen. We must also define where our view point is, that is, from what vantage point the scene will be drawn. Our view point is inside the room a bit above the floor, directly in front of the pyramid. First the computer calculates which polygons are visible. The near wall will not be displayed at all, as it is behind our view point. The far side of the pyramid will also not be drawn, as it is hidden by the front of the pyramid. Next each point is perspective-projected onto the screen. The portions of the walls "furthest" from the view point will appear shorter than the nearer areas due to perspective. To make the walls look like wood, a wood pattern, called a texture, will be drawn on them. To accomplish this, a technique called "texture mapping" is often used. A small drawing of wood that can be repeatedly drawn in a matching tiled pattern (like desktop wallpaper) is stretched and drawn onto the walls' final shape. The pyramid is solid grey, so its surfaces can just be rendered as grey. But we also have a spotlight. Where its light falls, we lighten colors; where an object blocks the light, we darken colors. Next we render the complete scene on the computer screen. If the numbers describing the position of the pyramid were changed and this process repeated, the pyramid would appear to move.
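The perspective-projection step described above can be sketched in Python using the wall corners defined earlier. The viewpoint position and focal length below are hypothetical, and a real renderer would also handle clipping, hidden surfaces, lighting, and texturing.

```python
# A minimal perspective-projection sketch. The eye position and focal
# length are illustrative values, not from any particular renderer.
def project(point, eye_z=-5.0, focal=2.0):
    """Project a 3D point (x, y, z) onto a 2D screen plane."""
    x, y, z = point
    d = z - eye_z              # distance from the viewpoint along z
    scale = focal / d          # farther points shrink toward the center
    return (x * scale, y * scale)

near_wall = [(0, 10, 0), (10, 10, 0), (0, 0, 0), (10, 0, 0)]
far_wall = [(0, 10, 20), (10, 10, 20), (0, 0, 20), (10, 0, 20)]

# The far wall's corners project closer together than the near wall's,
# which is what makes the far wall look smaller on screen.
print(project((10, 10, 20)))  # roughly (0.8, 0.8)
print(project((10, 10, 0)))   # roughly (4.0, 4.0)
```

Dividing by the distance from the eye is the whole trick: the same 10-unit-tall wall spans five times less screen space when it is five times farther away.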
Movies
CGI short films have been produced as independent animation since 1976, though the popularity of computer animation (especially in the field of special effects) skyrocketed during the modern era of U.S. animation. The first completely computer-generated television series was ReBoot, in 1994, and the first completely computer-generated animated movie was Toy Story (1995). See List of computer-animated films for more.
Amateur animation
The popularity of websites that allow members to upload their own movies for others to view has created a growing community of amateur computer animators. With utilities and programs often included free with modern operating systems, many users can make their own animated movies and shorts. Several free and open-source animation software applications exist as well. A popular amateur approach to animation is the animated GIF format, which can be uploaded and viewed on the web easily.
Introduction to Computer Animation
Animation has historically been produced in two ways. The first is by artists creating a succession of cartoon frames, which are then combined into a film. A second method is by using physical models, e.g. King Kong, which are positioned, the image recorded, then the model is moved, the next image is recorded, and this process is continued.

Computer animation can be produced by using a rendering machine to produce successive frames wherein some aspect of the image is varied. For a simple animation this might be just moving the camera or the relative motion of rigid bodies in the scene. This is analogous to the second technique described above, i.e., using physical models. More sophisticated computer animation can move the camera and/or the objects in more interesting ways, e.g. along computed curved paths, and can even use the laws of physics to determine the behavior of objects. Animation is also used in visualization to show the time-dependent behavior of complex systems.

A major part of animation is motion control. Early systems did not have the computational power to allow for animation preview and interactive control. Also, many early animators were computer scientists rather than artists. Thus, scripting systems were developed. These systems were used as a high-level computer language in which the animator wrote a script (program) to control the animation. Whereas a high-level programming language allows for the definition of complex data types, the scripting languages allowed for the definition of "actors", objects with their own animation rules. Later systems have allowed for different types of motion control. One way to classify animation techniques is by the level of abstraction of the motion control techniques. A low-level system requires the animator to precisely specify each detail of motion, whereas a high-level system allows the animator to use more general or abstract methods.
For example, moving a simple rigid object such as a cube requires six degrees of freedom (numbers) per frame. A more complex object will have more degrees of freedom; for example, a bird might have over twenty. Now think about animating an entire flock of birds. A control hierarchy is therefore required, so that high-level control constructs can be specified and then mapped into more detailed control constructs.
This is analogous to high-level computer languages, whose complex control structures and data types are translated into low-level constructs.
Types of Animation Systems
Scripting systems were the earliest type of motion control system. The animator writes a script in the animation language, so the user must learn this language, and the system is not interactive. One scripting system is ASAS (Actor/Scriptor Animation System), which has a syntax similar to LISP. ASAS introduced the concept of an actor, i.e., a complex object which has its own animation rules. For example, in animating a bicycle, the wheels rotate in their own coordinate system and the animator does not have to worry about this detail. Actors can communicate with other actors by sending messages and so can synchronize their movements. This is similar to the behavior of objects in object-oriented languages.
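The actor idea can be illustrated with a short Python sketch. The class and method names below are invented for illustration and are not ASAS syntax (ASAS itself was LISP-like); the point is only that each actor carries its own animation rule and can receive messages.

```python
# A toy "actor" in the spirit of scripting systems: the wheel animates its
# own rotation, and other actors can send it messages. All names are
# illustrative, not from ASAS.
class Actor:
    def __init__(self, name):
        self.name = name
        self.inbox = []

    def send(self, other, message):
        other.inbox.append((self.name, message))

class Wheel(Actor):
    """A wheel rotates itself each frame; the animator never keys it."""
    def __init__(self, name, degrees_per_frame=12.0):
        super().__init__(name)
        self.angle = 0.0
        self.degrees_per_frame = degrees_per_frame

    def update(self, frame):
        # React to messages, e.g. the bicycle telling the wheel to stop.
        if ("bicycle", "stop") in self.inbox:
            return
        self.angle = (self.angle + self.degrees_per_frame) % 360

wheel = Wheel("front_wheel")
for frame in range(10):
    wheel.update(frame)
print(wheel.angle)  # 120.0
```

The animator positions the bicycle; the wheel's rotation is the actor's own business, which is exactly the encapsulation the text describes.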
Procedural Animation
Procedures are used that define movement over time. These might be procedures that use the laws of physics (physically based modeling) or animator-generated methods. An example is a motion that is the result of some other action (this is called a "secondary action"), for example throwing a ball which hits another object and causes the second object to move.
Representational Animation
This technique allows an object to change its shape during the animation. There are three subcategories. The first is the animation of articulated objects, i.e., complex objects composed of connected rigid segments. The second is soft object animation, used for deforming and animating the deformation of objects, e.g. skin over a body or facial muscles. The third is morphing, which is the changing of one shape into another quite different shape. This can be done in two or three dimensions.
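Morphing in its simplest 2D form can be sketched as a linear blend between corresponding vertices of two shapes. Real morphing systems also warp and cross-dissolve the images between the shapes, which this sketch omits; the shapes below are made up for illustration.

```python
# A minimal 2-D morphing sketch: blend corresponding vertices of one
# polygon into another. t=0 gives shape_a, t=1 gives shape_b.
def morph(shape_a, shape_b, t):
    """Blend two polygons with matching vertex counts."""
    return [(ax + t * (bx - ax), ay + t * (by - ay))
            for (ax, ay), (bx, by) in zip(shape_a, shape_b)]

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
diamond = [(0.5, -0.5), (1.5, 0.5), (0.5, 1.5), (-0.5, 0.5)]

print(morph(square, diamond, 0.5))  # halfway between the two shapes
```

Rendering the blended polygon for a sequence of t values between 0 and 1 produces the shape-change effect the text describes.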
Stochastic Animation
This uses stochastic processes to control groups of objects, such as in particle systems. Examples are fireworks, fire, waterfalls, etc.
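A minimal particle system along these lines can be sketched in Python. The gravity constant, frame rate, and velocity ranges below are illustrative; real systems add particle lifetimes, colors, and rendering.

```python
# A fireworks-style particle system sketch: each particle gets a random
# launch velocity and then falls under gravity. Constants are illustrative.
import random

GRAVITY = -9.8
DT = 1 / 30          # one frame at 30 frames/s

def spawn(n, rng):
    """Launch n particles from the origin with random velocities."""
    return [
        {"pos": [0.0, 0.0],
         "vel": [rng.uniform(-3, 3), rng.uniform(5, 10)]}
        for _ in range(n)
    ]

def step(particles):
    for p in particles:
        p["vel"][1] += GRAVITY * DT          # gravity pulls downward
        p["pos"][0] += p["vel"][0] * DT
        p["pos"][1] += p["vel"][1] * DT

rng = random.Random(42)                      # seeded for repeatability
burst = spawn(100, rng)
for _ in range(30):                          # simulate one second of motion
    step(burst)
```

The stochastic part is the spawn step: the animator specifies distributions, not individual trajectories, and the group behavior emerges from the randomness.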
Behavioral Animation
Objects or "actors" are given rules about how they react to their environment. Examples are schools of fish or flocks of birds, where each individual behaves according to a set of rules defined by the animator.
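A toy behavioral rule in this spirit can be sketched in Python. Real flocking models (e.g. Reynolds' boids) combine separation, alignment, and cohesion; this sketch implements cohesion only, and the positions and strength constant are made up.

```python
# A single behavioral rule: each "boid" steers toward the average position
# of the group (cohesion). Full flocking adds separation and alignment.
def cohesion_step(positions, strength=0.1):
    n = len(positions)
    cx = sum(x for x, _ in positions) / n     # flock centre
    cy = sum(y for _, y in positions) / n
    return [(x + strength * (cx - x), y + strength * (cy - y))
            for x, y in positions]

flock = [(0.0, 0.0), (10.0, 0.0), (5.0, 10.0)]
for _ in range(50):
    flock = cohesion_step(flock)
# After many steps every boid has drifted close to the flock centre.
```

No boid is told where to go; the animator defines the rule once and the group behavior emerges, which is the defining property of behavioral animation.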
Last changed February 08, 2000, G. Scott Owen, [email protected]
Jurassic Park was one of the first movies to integrate computer-generated characters with live actors. Getty Images
Sources:
http://entertainment.howstuffworks.com/computer-animation.htm
http://entertainment.howstuffworks.com/computer-animation2.htm
http://entertainment.howstuffworks.com/computer-animation3.htm
http://entertainment.howstuffworks.com/computer-animation4.htm
The movie opens with a sweeping aerial shot of an alien world. As the camera swoops downward from the clouds, a vast city emerges. Thousands of space-age vehicles whir past on an intergalactic freeway. Great skyscrapers crowd a smoggy skyline lit by a deep orange sunset. The camera continues its speedy descent to the 112th-story balcony of a steel gray apartment building, where it focuses on the pensive face of our hero, a giant lizard man named Fizzle. We know that none of this is real, but we still believe. That's the magic of modern filmmaking. Using powerful computers, animators and digital effects artists at companies like Industrial Light & Magic can construct fictional worlds and virtual characters that are so lifelike, so convincingly real, that the audience suspends its disbelief and enjoys the show.

Computer animators are artists. While their tools are high-tech, nothing replaces their creative vision. That said, over the past two decades, computers have opened up unimaginable possibilities for animators. With sophisticated modeling software and powerful computer processors, the only limit is the animator's imagination.

The applications of computer animation extend far beyond film and television. Video games are at the forefront of interactive 2-D and 3-D animation. 3-D animators help design and model new products and industrial machines. In fields like medicine and engineering, 3-D animation can help simplify and visualize complex internal processes. And computer animators are in high demand for marketing and advertising campaigns. But how exactly do these magicians create people, animals, objects and landscapes out of thin air? What are the basic techniques for modeling and animating virtual creations? And how long does the process take? (Hint: much longer than you think!) Keep reading to find out.
What is Computer Animation?
To animate means "to give life to" [source: ACMSIGGRAPH]. An animator's job is to take a static image or object and literally bring it to life by giving it movement and personality. In computer animation, animators use software to draw, model and animate objects and characters in vast digital landscapes. There are two basic kinds of computer animation: computer-assisted and computer-generated.
Computer-assisted animation is typically two-dimensional (2-D), like cartoons [source: ACMSIGGRAPH]. The animator draws objects and characters either by hand or with a computer. Then he positions his creations in key frames, which form an outline of the most important movements. Next, the computer uses mathematical algorithms to fill in the "in-between" frames. This process is called tweening. Key framing and tweening are traditional animation techniques that can be done by hand, but are accomplished much faster with a computer.

Computer-generated animation is a different story. First of all, it's three-dimensional (3-D), meaning that objects and characters are modeled in a space with X, Y and Z axes. This can't be done with pencil and paper. Key framing and tweening are still an important function of computer-generated animation, but there are other techniques that don't relate to traditional animation. Using mathematical algorithms, animators can program objects to adhere to (or break) physical laws like gravity, mass and force, or create tremendous herds and flocks of creatures that appear to act independently, yet collectively. With computer-generated animation, instead of animating each hair on a monster's head, the monster's fur is designed to wave gently in the wind and lie flat when wet.

Technology has long been a part of the animator's toolkit. Animators at Disney revolutionized the industry with innovations like the use of sound in animated short films and the multi-plane camera stand that created the parallax effect of background depth [source: ACMSIGGRAPH]. The roots of computer animation began with computer graphics pioneers in the early 1960s working at major U.S. research institutes, often with government funding [source: Carnegie Mellon School of Computer Science].
Their earliest films were scientific simulations with titles like "Flow of a Viscous Fluid" and "Propagation of Shock Waves in a Solid Form" [source: Carnegie Mellon School of Computer Science]. Ed Catmull at the University of Utah was one of the first to toy with computer animation as art, beginning with a 3-D rendering of his hand opening and closing. The University of Utah was the source of the earliest important breakthroughs in 3-D computer graphics, like the hidden surface algorithm that allows a computer to conceptualize three-dimensional objects, and the Utah Teapot, a strikingly rendered 3-D teapot that signaled a turning point in the photorealistic quality of 3-D graphics [source: Carnegie Mellon School of Computer Science]. In 1973, "Westworld" became the first film to contain computer-generated 2D graphics. More films in the late 1970s and early 1980s relied on computer graphics, or CG, to create primitive effects that were designed to look computer-generated. "Tron" (1982) was ideal for showcasing undeniably digital effects since the movie took place inside a computer.
"Jurassic Park" (1993) was the first feature film to integrate convincingly real, entirely computer-generated characters into a live-action film, and "Toy Story" (1995) from Pixar was the first full-length "cartoon" made entirely with computer-generated 3-D animation [source: ACMSIGGRAPH]. The increasing sophistication and realism of 3-D animation can be directly credited to an exponential growth in computer processing power. Today, a standard desktop computer runs 5,000 times faster than those used by computer graphics pioneers in the 1960s. And the cost of the basic technology for creating computer animation has gone from $500,000 to less than $2,000 [source: PBS]. Now let's look at the basics of creating a 3-D computer-generated object.
To create a 3-D computer-generated object, you'll need modeling software like Maya, 3ds Max or Blender. These programs come loaded with a large number of basic 3-D shapes, called primitives or prims, which are the building blocks of more complex objects. For example, you could model a car by connecting cubes, cylinders, pyramids and spheres of different shapes and sizes. Since these are 3-D objects, they're modeled on the X, Y and Z axes and can be rotated and viewed from any angle.

When you first begin to model an object, it doesn't have any surface color or texture. All you see on your screen is the object's skeleton -- the lines and outlines of the individual cubes, blocks and spheres that have been used to construct it. This is called a wireframe. Each shape that's formed by the lines of the wireframe is called a polygon. A pyramid, for example, is made up of four triangle-shaped polygons.

In practice, there are several ways to create a wireframe model of an object. If you don't want to be confined to constructing objects from fixed shapes like blocks and cylinders, you can use a more free-form technique called spline-based modeling. Splines allow for objects with smooth, curved lines. Another method is to sculpt an object out of clay or some other physical material and use a 3-D scanner to create a wireframe copy of the object in the modeling software.

Once you have your wireframe -- through any modeling method you choose -- you can shade its surface to see what it would look like as a 3-D object. But to make the object look more realistic, you need to add color and surface texture. This is done in something called the materials editor [source: ICOM]. Here you can play with an endless palette of colors or create your own by adjusting the red, green and blue values and tinkering with hue and saturation. Common surface textures like wood grain, rock, metal and glass usually come with the modeling software and can be easily applied to surfaces. You can also create image files in a program like Photoshop and wrap the image around the object like wallpaper.

Lighting is perhaps the most important component for giving an object depth and realism. Modeling programs allow you to light your objects from every imaginable angle and adjust how the surfaces of your objects reflect or absorb light. There are three basic values that dictate how a surface responds to light:
Ambient: the color of an object's surface that's not exposed to direct light
Diffuse: the color of the surface that's directly facing the light source
Specular: the value that controls the reflectiveness or shininess of the surface [source: ICOM]

Modeling programs are especially helpful for creating realistic-looking 3-D objects because they contain mathematical algorithms that replicate the natural world. For example, when you light a sphere from a certain angle, the surface reflects light in just the right way and the shadow is cast at the precise angle. These details trick the mind into thinking that this object on a two-dimensional screen actually has depth and texture. Now let's look at how animators use computers to help create vast digital landscapes and realistic animated sets.
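Before moving on, it's worth seeing how those three surface values fit together. The sketch below is a toy, single-channel version of the classic Phong-style lighting calculation; real software does this per pixel and per color channel, but the shape of the formula is the same:

```python
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def shade(ambient, diffuse, specular, shininess, normal, to_light, to_view):
    """Combine the three surface values into one brightness level."""
    n, l, v = normalize(normal), normalize(to_light), normalize(to_view)
    facing = max(dot(n, l), 0.0)           # how directly the surface faces the light
    # mirror the light direction about the surface normal: r = 2(n·l)n - l
    r = tuple(2 * dot(n, l) * nc - lc for nc, lc in zip(n, l))
    highlight = max(dot(r, v), 0.0) ** shininess
    return ambient + diffuse * facing + specular * highlight

# A surface facing straight up, lit and viewed from directly above:
brightness = shade(0.1, 0.6, 0.3, 8, (0, 0, 1), (0, 0, 1), (0, 0, 1))  # ≈ 1.0
```

Tilt the light away and the diffuse and specular terms fall off, leaving only the ambient value -- exactly the behavior described above.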
Since the earliest days of motion pictures, filmmakers have looked for ways to convincingly (and inexpensively!) recreate vast, realistic landscapes and backdrops without having to actually film on location at the peak of Mt. Everest or the moon's surface. The most common solution is a production effect called matte painting. In traditional matte painting, artists relied on several techniques, from simply painting a huge fake backdrop (think of those old westerns with the cactus and sunset in the distance) to carefully replacing parts of a shot with scenes painted on glass. Computers have added a whole new dimension to matte paintings. Literally. Digital matte painters use a combination of source photographs, 2-D Photoshop images, 3-D modeling and 3-D animation to create impressive fictional landscapes. Think of those magnificent establishing shots in the recent "Star Wars" movies, showing a sprawling intergalactic metropolis or a jungle fortress crawling with thick foliage and perched atop a raging waterfall.
For live action films, digital matte painters often get the assignment of creating a historically accurate backdrop for a scene. In "The Last Samurai," for example, the script called for Tom Cruise's character to wander out of a bar and into the streets of San Francisco, circa 1876. First, the live actors performed their scene in front of a green or blue screen. Then the digital matte painters consulted archive photos of the city to model a 3-D skyline. They took digital photos of a beautiful sunset and placed it behind their model cityscape. Then they created a computer-generated trolley that would clank down the steep street in front of the actors.

Digital matte painters use the same techniques when creating landscapes for fully animated films, like those made by Pixar. If the characters are going to interact a lot with the virtual set, then each set element is rendered in 3-D [source: Pixar]. But for large establishing shots, or an enormous backdrop that will only be seen once, the matte painters use a combination of 2-D Photoshop collages and 3-D models to build realistic landscapes that fit within the style of the animated film. Pixar movies, for example, have become increasingly photorealistic without losing their "cartoony" quality. So the landscapes can't look perfectly "real" -- they have to be built on a color and texture palette that matches the rest of the movie.

Another technology that adds impressive realism to a digital landscape is something called a particle system [source: Vanderbilt University School of Engineering]. Particle systems use mathematical algorithms to recreate the natural movements of animated elements like smoke, fire and flocks of birds. For digital matte paintings, the animator doesn't have to draw every flame and every wisp of smoke as the city burns.
The animator just uses the modeling software's particle tools to program how large the flames should be and how dark and billowy the smoke should be. With the same controls, the animator can model one CG seagull and program the software to create a flock of birds that flap their wings at different paces and take slightly different paths as they soar across the sunset. Now let's look at character modeling and animation, the heart and soul of computer animation.
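Before moving on, the particle-system idea can be sketched in a few lines. This is a toy illustration, not the particle tools of any real package; the emitter position, velocity ranges and lifetimes are made-up values:

```python
import random

class Particle:
    def __init__(self, x, y):
        self.x, self.y = x, y                  # start at the emitter
        self.vx = random.uniform(-0.5, 0.5)    # random sideways drift
        self.vy = random.uniform(1.0, 2.0)     # smoke rises
        self.life = random.randint(20, 40)     # frames until it fades out

def step(particles, wind=0.1):
    """Advance every particle one frame; recycle dead ones at the emitter."""
    for p in particles:
        p.x += p.vx + wind                     # wind pushes the whole plume
        p.y += p.vy
        p.life -= 1
        if p.life <= 0:
            p.__init__(0.0, 0.0)               # respawn at the source

# One rule set, a hundred independent-looking particles:
smoke = [Particle(0.0, 0.0) for _ in range(100)]
for _ in range(60):                            # simulate 60 frames of smoke
    step(smoke)
```

The point is that the animator writes (or tweaks) one set of rules, and the randomness gives every particle -- or every bird in a flock -- its own slightly different path.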
The process of creating a computer-animated character begins as it always has, with a pencil and paper. The art department submits hundreds of character sketches based on discussions with the writers and director. Once they settle on a design for a particular character, it's the animator's job to model the character in 3-D on the computer. Sometimes the art department will create a 3-D clay model of the character and then scan it into the computer to create a wireframe model.
Modeling characters isn't that different from modeling an object. The hard part is animating them. The human eye is very sensitive to unnatural or jerky movements. Walking, for example, is an extremely complicated movement that requires just about every part of the body to participate in a single, fluid motion.

One solution is to build an animated character as if it had an internal skeleton. This is called an articulated model [source: Vanderbilt University School of Engineering]. Basically, the character is built upon bones and joints that act according to a hierarchy. There are joints at the top of the hierarchy -- shoulders, for example -- that control the movement of body parts that are lower in the hierarchy -- upper arm, elbow, forearm and hand, in this case. In this hierarchical structure, the animator only has to move one joint or body part, and the lower joints and body parts assume their correct position, like pulling a marionette's strings.

This brings us back to key framing and tweening. When animating a character, the animator only poses the character in key positions and lets the computer fill in the "in-between" frames. This is made even easier by the articulated model and something called inverse kinematics [source: Vanderbilt University School of Engineering]. Let's say the animator wants to make the character raise his hand. Since all of the character's body parts are connected in a hierarchy, all the animator has to do is set a key frame with the character's hand in the desired position. The computer will not only fill in the movement of the hand, but of all the parts connected to the hand (arm, elbow, shoulder, et cetera). Animation software often comes with pre-loaded inverse kinematic models for walking and other common character movements.

Another popular method for creating smooth, realistic character movements is motion capture. With motion capture, a live actor puts on a special suit embedded with dozens of sensors.
The sensors rest on key parts of the body, like limbs and joints. The computer tracks and records the movements of the sensors and can use that data in different ways. The data can be used to directly control the limbs and joints of an animated character. In this sense, the live actor is moving the animated character like a puppet, even in real time. Or the sensor data can simply be used as a guide over which a character is modeled and animated. Now let's look at the overall animation process for a feature film.
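The inverse-kinematics idea from the previous section can be illustrated with a minimal two-bone solver. This is a sketch using the standard law-of-cosines approach, not the interface of any actual animation package -- real character rigs solve for dozens of joints at once:

```python
import math

def two_link_ik(target_x, target_y, l1, l2):
    """Find shoulder and elbow angles so a two-bone arm reaches a target.

    l1 is the upper-arm length, l2 the forearm length. Returns the
    (shoulder, elbow) angles in radians, or None if the target is out of reach.
    """
    d = math.hypot(target_x, target_y)
    if d > l1 + l2 or d < abs(l1 - l2):
        return None                            # the arm can't reach that point
    # the law of cosines gives the elbow bend
    cos_elbow = (d * d - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    elbow = math.acos(cos_elbow)
    # shoulder angle: aim at the target, then correct for the elbow bend
    shoulder = math.atan2(target_y, target_x) - math.atan2(
        l2 * math.sin(elbow), l1 + l2 * math.cos(elbow))
    return shoulder, elbow

# Two unit-length bones reaching for a point one unit out, one unit up:
shoulder, elbow = two_link_ik(1.0, 1.0, 1.0, 1.0)  # elbow bends 90 degrees here
```

The animator specifies only where the hand should end up; the solver works out every joint angle in between, which is exactly the convenience described above.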
The Computer-Animation Process
Half of the process of creating a computer-animated feature film has nothing to do with computers. First, the filmmakers write a treatment, which is a rough sketch of the story. When they've settled on all of the story beats -- major plot points -- they're ready to create a storyboard of the film. The storyboard is a 2-D, comic-book-style rendering of each scene in the movie, along with some jokes and snippets of important dialogue [source: Pixar]. During the storyboarding process, the script is polished and the filmmakers can start to see how the scenes will work visually.

The next step is to have the voice actors come in and record all of their lines. Using the actors' recorded dialogue, the filmmakers assemble a video animated only with the storyboard drawings. After further editing, re-writing and re-recording of dialogue, the real animation is ready to begin.

The art department now designs all the characters, major set locations, props and color palettes for the film. The characters and props are modeled in 3-D or scanned into the computers from clay models. At Pixar, each character is equipped with hundreds of avars, little hinges that allow the animators to move specific parts of the character's body. Woody from "Toy Story," for example, had over 100 avars on his face alone [source: Pixar].

The next step is to create all of the 3-D sets, painstakingly dressed with all of the details that bring the virtual world to life. Then the characters are placed on the set in a process called blocking. The director and lead animators block the key character positions and camera angles for each and every shot of the movie.

Now teams of animators are each assigned short snippets of scenes. They take the blocking instructions and create their own more detailed key frames. Then they begin the tweening process. The computer handles a lot of the interpolation -- calculating the best way to tween two key frames -- but the artist often has to tweak the results so they look even more lifelike. It's common for an animator to re-do a single short animated sequence several times before the director or lead animator is satisfied [source: Pixar]. High-quality animated films are produced at a frame rate of 24 frames per second (fps). For a 90-minute film, that's nearly 130,000 frames of animation.
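That frame count is simple arithmetic, and the same arithmetic also lets us sanity-check the rendering figures quoted at the end of this article:

```python
FPS = 24                        # frames per second for high-quality animation
RUNTIME_MINUTES = 90

frames = RUNTIME_MINUTES * 60 * FPS
print(frames)                   # → 129600, i.e. "nearly 130,000"

# At the six hours per frame quoted later for Pixar's renders:
hours = frames * 6
years = hours / 24 / 365
print(round(years, 1))          # → 88.8 years on a single machine
```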
At Pixar, for example, an individual animator is expected to produce 100 frames of animation a week [source: Pixar].

Now the characters and props are given surface texture and color. They're dressed with clothing that wrinkles and flows naturally with body movements, hair and fur that wave in the virtual breeze, and skin that looks real enough to touch. Then it's time to light the scenes, using ambient, omnidirectional and spotlights to create depth, shadows and moods.

The final step of the process is called rendering. Using powerful computers, all of the digital information that the animators have created -- character models, key frames, tweens, textures, colors, sets, props, lighting, digital matte paintings, et cetera -- is assembled into a single frame of film. Even with the incredible computing power of a company like Pixar, it takes an average of six hours to render one frame of an animated film [source: Pixar]. That's over 88 years of rendering for a 90-minute film on a single machine. Good thing they can render more than one frame at a time.

We hope this has been a helpful introduction to the world of computer animation. For even more information on digital filmmaking, special effects and related topics, check out the links on the next page.
Computer software programs help animators give life to creatures such as this alien. Dave Hogan/Getty Images
Special effects staff at Industrial Light & Magic create many of the computer-generated images seen on film. Justin Sullivan/Getty Images
Using computer software programs, animators will paint backgrounds to give a realistic appearance to a scene. Carl DeSouza/AFP/Getty Images
Animators will build models to animate a character's movement, like this one created for "Toy Story." © Ted Thai/Time Life Pictures/
Motion-capture software was used to turn actor Andy Serkis into the creature Gollum in the "Lord of the Rings" trilogy. Scott Gries/Getty Images