3D animation is a process in which characters or objects are created as moving images. Rather than traditional flat or 2D characters, 3D animation gives the impression of being able to move around characters and observe them from all angles, just as in real life. 3D animation technology is relatively new, and if done by hand it would take thousands of hours to complete one short section of moving film. The use of computers and software has simplified and accelerated the 3D animation process. As a result, the number of 3D animators, as well as the use of 3D animation technology, has increased.
What will I learn in a 3D animation program?
Your course content depends largely upon the 3D animation program in which you enroll. Some programs allow you to choose the courses that interest you. Other 3D animation programs are more structured and are intended to train you for a specific career or role within the industry. You may learn about character creation, storyboarding, special effects, image manipulation, image capture, modeling, and various computer-aided 3D animation design packages. Some 3D animation courses cover different methods of capturing and recreating movement. You may learn about "light suit" technology, in which suits worn by actors pinpoint the articulation of joints with lights; this is filmed from several angles and used to replicate animated movement in a realistic way.
What skills will I need for a 3D animation program?
You'll need both creativity and attention to detail for a career in 3D animation. Many talented individuals are attracted to a 3D animation career, so you'll need to stand out from the rest. Plenty of jobs within 3D animation require teamwork, with each person contributing skills toward the final product. Clearly, you would benefit from having at least a passing familiarity with computers and graphics before starting your 3D animation program.
From Wikipedia, the free encyclopedia
[Image: a bouncing-ball animation consisting of 6 frames, displayed at 10 frames per second.]
Animation is the rapid display of a sequence of images of 2-D or 3-D artwork or model positions in order to create an illusion of movement. It is an optical illusion of motion due to the phenomenon of persistence of vision, and can be created and demonstrated in a number of ways. The most common method of presenting animation is as a motion picture or video program, although several other forms of presenting animation also exist.
 Early examples
Main article: History of animation
An Egyptian burial chamber mural, approximately 4,000 years old, shows wrestlers in action. Even though this may appear similar to a series of animation drawings, there was no way of viewing the images in motion; it does, however, indicate the artist's intention of depicting motion.

Early examples of attempts to capture the phenomenon of motion in drawing can be found in Paleolithic cave paintings, where animals are depicted with multiple legs in superimposed positions, clearly attempting to convey the perception of motion. A 5,200-year-old earthen bowl found at Shahr-i Sokhta in Iran has five images of a goat painted along the sides. This has been claimed to be an example of early animation. However, since no equipment existed to show the images in motion, such a series of images cannot be called animation in the true sense of the word.

The phenakistoscope, the praxinoscope, and the common flip book were popular animation devices invented during the 1800s, while a Chinese zoetrope-type device had been invented as early as 180 AD. These devices produced movement from sequential drawings using technological means, but animation did not develop much further until the advent of cinematography.

There is no single person who can be considered the "creator" of the art of film animation, as several people were working on several projects that could be considered various types of animation at around the same time. Georges Méliès was a creator of special-effect films and was generally one of the first people to use animation in his technique. He discovered by accident the technique of stopping the camera to change something in the scene and then continuing to roll the film, an idea later known as stop-motion animation. Méliès came upon it when his camera broke down while he was shooting a bus driving by.
When he had fixed the camera, a hearse happened to be passing by just as he restarted rolling the film, and the end result was that he had managed to make a bus transform into a hearse. Méliès was just one of the great contributors to animation in its early years. The earliest surviving stop-motion advertising film was an English short by Arthur Melbourne-Cooper called Matches: An Appeal (1899). Developed for the Bryant and May matchsticks company, it involved stop-motion animation of wired-together matches writing a patriotic call to action on a blackboard.

J. Stuart Blackton was possibly the first American filmmaker to use the techniques of stop-motion and hand-drawn animation. Introduced to filmmaking by Edison, he pioneered these concepts at the turn of the 20th century, with his first copyrighted work dated 1900. Several of his films, among them The Enchanted Drawing (1900) and Humorous Phases of Funny Faces (1906), were film versions of Blackton's "lightning artist" routine, and used modified versions of Méliès' early stop-motion techniques to make a series of blackboard drawings appear to move and reshape themselves. Humorous Phases of Funny Faces is regularly cited as the first true animated film, and Blackton is considered the first true animator.
Fantasmagorie by Émile Cohl, 1908. Another French artist, Émile Cohl, began drawing cartoon strips and in 1908 created a film called Fantasmagorie. The film largely consisted of a stick figure moving about and encountering all manner of morphing objects, such as a wine bottle that transforms into a flower. There were also sections of live action in which the animator's hands would enter the scene. The film was created by drawing each frame on paper and then shooting each frame onto negative film, which gave the picture a blackboard look. This makes Fantasmagorie the first animated film created using what came to be known as traditional (hand-drawn) animation.

Following the successes of Blackton and Cohl, many other artists began experimenting with animation. One such artist was Winsor McCay, a successful newspaper cartoonist, who created detailed animations that required a team of artists and painstaking attention to detail. Each frame was drawn on paper, which invariably required backgrounds and characters to be redrawn and animated. Among McCay's most noted films are Little Nemo (1911), Gertie the Dinosaur (1914) and The Sinking of the Lusitania (1918).

The production of animated short films, typically referred to as "cartoons", became an industry of its own during the 1910s, and cartoon shorts were produced for showing in movie theaters. The most successful early animation producer was John Randolph Bray, who, along with animator Earl Hurd, patented the cel animation process that dominated the animation industry for the rest of the decade.
 Traditional animation
Main article: Traditional animation
An example of traditional animation: a horse animated by rotoscoping from Eadweard Muybridge's 19th-century photos. Traditional animation (also called cel animation or hand-drawn animation) was the process used for most animated films of the 20th century. The individual frames of a traditionally animated film are photographs of drawings, which are first drawn on paper. To create the illusion of movement, each drawing differs slightly from the one before it. The animators' drawings are traced or photocopied onto transparent acetate sheets called cels, which are filled in with paints in assigned colors or tones on the side opposite the line drawings. The completed character cels are photographed one by one against a painted background onto motion picture film by a rostrum camera.

The traditional cel animation process became obsolete by the beginning of the 21st century. Today, animators' drawings and the backgrounds are either scanned into or drawn directly into a computer system. Various software programs are used to color the drawings and simulate camera movement and effects. The final animated piece is output to one of several delivery media, including traditional 35 mm film and newer media such as digital video. The "look" of traditional cel animation is still preserved, and the character animators' work has remained essentially the same over the past 70 years. Some animation producers have used the term "tradigital" to describe cel animation that makes extensive use of computer technology.

Examples of traditionally animated feature films include Pinocchio (United States, 1940), Animal Farm (United Kingdom, 1954) and Akira (Japan, 1988). Traditionally animated films produced with the aid of computer technology include The Lion King (US, 1994), Sen to Chihiro no Kamikakushi (Spirited Away) (Japan, 2001), Treasure Planet (USA, 2002) and Les Triplettes de Belleville (2003).
Full animation refers to the process of producing high-quality traditionally animated films, which regularly use detailed drawings and plausible movement. Fully animated films can be made in a variety of styles, from realistically designed works such as those produced by the Walt Disney studio (Beauty and the Beast, Aladdin, The Lion King) to the more "cartoony" styles of those produced by the Warner Bros. animation studio (The Iron Giant, Quest for Camelot, Cats Don't Dance). Many of the Disney animated features are examples of full animation, as are non-Disney works such as The Secret of NIMH (US, 1982), The Iron Giant (US, 1999) and Nocturna (Spain, 2007).

Limited animation involves the use of less detailed and/or more stylized drawings and methods of movement. Pioneered by the artists at the American studio United Productions of America, limited animation can be used as a method of stylized artistic expression, as in Gerald McBoing-Boing (US, 1951), Yellow Submarine (UK, 1968), and much of the anime produced in Japan. Its primary use, however, has been in producing cost-effective animated content for media such as television (the work of Hanna-Barbera, Filmation, and other TV animation studios) and later the Internet (web cartoons). Some examples are SpongeBob SquarePants (USA, 1999–present), The Fairly OddParents (USA, 2001–present) and Invader Zim (USA, 2001–2006).

Rotoscoping is a technique, patented by Max Fleischer in 1917, in which animators trace live-action movement frame by frame. The source film can be directly copied from actors' outlines into animated drawings, as in The Lord of the Rings (US, 1978), used as a basis and inspiration for character animation, as in most Disney films, or used in a stylized and expressive manner, as in Waking Life (US, 2001) and A Scanner Darkly (US, 2006). Other examples are Ralph Bakshi's The Lord of the Rings (USA, 1978), Fire and Ice (USA, 1983) and Heavy Metal (1981).

Live-action/animation is a technique that combines hand-drawn characters with live-action shots. One of its earlier uses was with Koko the Clown, when Koko was drawn over live-action footage. Other examples include Who Framed Roger Rabbit (USA, 1988), Space Jam (USA, 1996) and Osmosis Jones (USA, 2001).

Anime is a style of limited animation primarily produced in Japan, influenced by early American animation. It usually features detailed characters but stiffer movement: mouth movements typically use 2-3 frames, leg movements about 6-10, and so on. The eyes are often very detailed, so instead of redrawing them in every frame, an animator may draw the eyes at 5-6 angles and paste them onto each frame (in modern times a computer is used for this). Some examples of anime films are Spirited Away (Japan, 2001), Akira (Japan, 1988) and Princess Mononoke (Japan, 1997).
 Stop motion
Main article: Stop Motion
A clay animation scene from a TV commercial. Stop-motion animation describes animation created by physically manipulating real-world objects and photographing them one frame of film at a time to create the illusion of movement. There are many different types of stop-motion animation, usually named after the medium used to create the animation. Computer software is widely available for creating this type of animation.
Puppet animation typically involves stop-motion puppet figures interacting with each other in a constructed environment, in contrast to the real-world interaction in model animation. The puppets generally have an armature inside them to keep them still and steady as well as to constrain them to move at particular joints. Examples include The Tale of the Fox (France, 1937), The Nightmare Before Christmas (US, 1993), Corpse Bride (US, 2005), Coraline (US, 2009), the films of Jiří Trnka and the TV series Robot Chicken (US, 2005–present).

o Puppetoons, created using techniques developed by George Pál, are puppet-animated films which typically use a different version of a puppet for different frames, rather than simply manipulating one existing puppet.

Clay animation, or Plasticine animation (often abbreviated as claymation), uses figures made of clay or a similar malleable material to create stop-motion animation. The figures may have an armature or wire frame inside them, similar to the related puppet animation (above), that can be manipulated to pose the figures. Alternatively, the figures may be made entirely of clay, as in the films of Bruce Bickford, where clay creatures morph into a variety of different shapes. Examples of clay-animated works include The Gumby Show (US, 1957–1967), Morph shorts (UK, 1977–2000), Wallace and Gromit shorts (UK, from 1989), Jan Švankmajer's Dimensions of Dialogue (Czechoslovakia, 1982) and The Trap Door (UK, 1984). Feature films include Wallace & Gromit: The Curse of the Were-Rabbit, Chicken Run and The Adventures of Mark Twain.

Cutout animation is a type of stop-motion animation produced by moving two-dimensional pieces of material such as paper or cloth. Examples include Terry Gilliam's animated sequences from Monty Python's Flying Circus (UK, 1969-1974); Fantastic Planet (France/Czechoslovakia, 1973); Tale of Tales (Russia, 1979); and the pilot episode (and occasional later sequences) of the TV series South Park (US, 1997).
o Silhouette animation is a variant of cutout animation in which the characters are backlit and only visible as silhouettes. Examples include The Adventures of Prince Achmed (Weimar Republic, 1926) and Princes et princesses (France, 2000).

Model animation refers to stop-motion animation created to interact with, and exist as a part of, a live-action world. Intercutting, matte effects, and split screens are often employed to blend stop-motion characters or objects with live actors and settings. Examples include the work of Ray Harryhausen, as seen in films such as Jason and the Argonauts (1963), and the work of Willis O'Brien on films such as King Kong (1933).

o Go motion is a variant of model animation which uses various techniques to create motion blur between frames of film, which is not present in traditional stop motion. The technique was invented by Industrial Light & Magic and Phil Tippett to create special-effects scenes for the film The Empire Strikes Back (1980).
Object animation refers to the use of regular inanimate objects in stop-motion animation, as opposed to specially created items. One example of object animation is the brickfilm, which incorporates the use of plastic toy construction blocks such as Lego.

o Graphic animation uses non-drawn flat visual graphic material (photographs, newspaper clippings, magazines, etc.), which is sometimes manipulated frame by frame to create movement. At other times, the graphics remain stationary while the stop-motion camera is moved to create on-screen action.

Pixilation involves the use of live humans as stop-motion characters. This allows for a number of surreal effects, including disappearances and reappearances, people appearing to slide across the ground, and other such effects. Examples of pixilation include The Secret Adventures of Tom Thumb and the Angry Kid shorts.
 Computer animation
Main article: Computer animation
A short GIF animation. Computer animation encompasses a variety of techniques, the unifying factor being that the animation is created digitally on a computer.
 2D animation
2D animation figures are created and/or edited on the computer using 2D bitmap graphics or 2D vector graphics. This includes automated computerized versions of traditional animation techniques such as tweening, morphing, onion skinning and interpolated rotoscoping. Examples: Foster's Home for Imaginary Friends, El Tigre: The Adventures of Manny Rivera, Waltz with Bashir.
Analog computer animation
Flash animation
PowerPoint animation
 3D animation
In 3D animation, digital models are manipulated by an animator. In order to manipulate a mesh, it is given a digital skeletal structure that can be used to control the mesh; this process is called rigging. Various other techniques can be applied, such as mathematical functions (e.g. gravity, particle simulations), simulated fur or hair, effects such as fire and water, and motion capture, to name but a few; these techniques fall under the category of 3D dynamics. Many 3D animations are very believable and are commonly used as visual effects in recent movies.
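The idea behind rigging can be sketched in a few lines of code. The sketch below is hypothetical and not tied to any particular animation package: a "rig" is reduced to a chain of bones, each with a length and a joint angle, and forward kinematics computes where the end of each bone (and any mesh vertex attached to it) lands once the pose changes.

```python
import math

def forward_kinematics(bones):
    """bones: list of (length, joint_angle_radians) down a chain.
    Returns the 2D positions of each joint, starting at the root."""
    x = y = 0.0
    angle = 0.0
    positions = [(x, y)]
    for length, joint_angle in bones:
        angle += joint_angle          # joint angles accumulate down the chain
        x += length * math.cos(angle)
        y += length * math.sin(angle)
        positions.append((x, y))
    return positions

# An "arm" with two bones of length 2 and 1; bending the second joint
# by 90 degrees moves the tip, just as rotating an elbow moves a hand.
straight = forward_kinematics([(2.0, 0.0), (1.0, 0.0)])
bent = forward_kinematics([(2.0, 0.0), (1.0, math.pi / 2)])
print(straight[-1])  # tip at (3.0, 0.0)
print(bent[-1])      # tip at (2.0, 1.0)
```

A production rig adds 3D rotations, skin weights binding each mesh vertex to several bones, and controllers, but the principle is the same: the animator poses the skeleton, and the mesh follows.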
Photorealistic animation is used primarily for animation that attempts to resemble real life, using advanced rendering to mimic detailed skin, plants, water, fire, clouds, etc. Examples include Up (2009, USA), Kung Fu Panda (2008, USA) and Ice Age (2002, USA). Cel-shaded animation is used to mimic traditional animation using CG software; the shading looks sharp, with less blending of colors. Examples include Skyland (2007, France), Appleseed (2007, Japan) and The Legend of Zelda: The Wind Waker (2002, Japan). Motion capture is used when live-action actors wear special suits that allow computers to copy their movements into CG characters. Examples include The Polar Express (2004, USA), Beowulf (2007, USA) and Avatar (2009, USA).
2D animation techniques tend to focus on image manipulation while 3D techniques usually build virtual worlds in which characters and objects move and interact. 3D animation can create images that seem real to the viewer.
 Other animation techniques
Drawn-on-film animation: a technique where footage is produced by creating the images directly on film stock, used for example by Norman McLaren, Len Lye and Stan Brakhage.
Paint-on-glass animation: a technique for making animated films by manipulating slow-drying oil paints on sheets of glass.
Pinscreen animation: makes use of a screen filled with movable pins, which can be moved in or out by pressing an object onto the screen. The screen is lit from the side so that the pins cast shadows. The technique has been used to create animated films with a range of textural effects difficult to achieve with traditional cel animation.
Sand animation: sand is moved around on a backlighted or frontlighted piece of glass to create each frame of an animated film. This creates an interesting effect when animated because of the light contrast.
Flip book: a flip book (sometimes, especially in British English, flick book) is a book with a series of pictures that vary gradually from one page to the next, so that when the pages are turned rapidly, the pictures appear to animate by simulating motion or some other change. Flip books are often illustrated books for children, but may also be geared towards adults and employ a series of photographs rather than drawings. Flip books are not always separate books; they may appear as an added feature in ordinary books or magazines, often in the page corners. Software packages and websites are also available that convert digital video files into custom-made flip books.
 Other techniques and approaches
Character animation
Chuckimation
Multi-sketching
Special effects animation
Animatronics
Stop motion
Animatronics is the use of electronics and robotics in mechanised puppets to simulate life. Animatronics is mainly used in moviemaking, but also in theme parks and other forms of entertainment. Its main advantage over CGI and stop motion is that it is not a simulation of reality: animatronic figures are physical objects moving in real time in front of the camera. The technology behind animatronics has become more advanced and sophisticated over the years, making the puppets even more realistic and lifelike. Garner Holt Productions, Inc. of San Bernardino, California; UCFab International, LLC of Apopka, Florida; Sally Corporation in Jacksonville, Florida; and Lifeformations of Bowling Green, Ohio are among the leaders in manufacturing animatronics for the theme park industry as well as for museums, restaurants, retail establishments and many other themed environments.

Animatronics for film and television productions are used to perform action on camera in situations where the action involves creatures that do not exist, where the action is too risky or costly for real actors or animals, or where the action could never be obtained with a living person or animal. The application of animatronics today includes computer-controlled as well as radio- and manually controlled devices. The actuation of specific movements can be obtained with electric motors, pneumatic cylinders, hydraulic cylinders and cable-driven mechanisms. The type of mechanism employed is dictated by the character parameters, specific movement requirements and the project constraints. The technology has advanced to the point that animatronic puppets can be made indiscernible from their living counterparts.
We can begin by considering things in the most general terms. Imagine placing a camera in a room and taking a photograph. The view through the lens of the camera is captured as a 2D image on the surface of the film. Light reflecting off objects in the room will reach the surface of the film only if it is visible within the camera's field of view. The rendering process in 3D computer graphics may seem to be a virtual analogy to our photograph. This is true to an extent, but the process of "seeing" (which is what rendering amounts to) requires quite a bit of careful definition.

First of all, the objects in the 3D scene are necessarily defined by locations in a 3D measurement system that we call a coordinate space. The location of each light source and each vertex on each polygonal mesh must be specified in some coordinate system common to the whole scene. The camera itself has a position in this coordinate system, which we call world space, and can therefore be assigned a precise location in (x,y,z). In order to "see" through the virtual camera, we must treat the location of the camera as the center of the coordinate system to be used for rendering. That means transforming the location of every coordinate in world space to a value based on its position relative to the camera. The current position of the camera becomes (0,0,0) in this camera space used to perform the rendering.

With every vertex and light source properly transformed to camera space, we need to determine what objects are visible to the camera. This raises two distinct issues. A surface may not be visible because it is not within the camera's field of view. Imagine a pyramid-shaped volume extending forward from the location of the camera. If a surface of an object is not within this viewing volume, it is not seen by the camera. But a surface could be within the viewing volume and still not be seen by the camera because it is blocked by another surface closer to the camera. Thus any rendering process must be able to figure out what geometric objects are within the viewing volume, and must also be able to figure out whether they are obscured by other surfaces within that volume. Only then can we know precisely what surfaces are exposed to the rendering eye.
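The world-space to camera-space step and the viewing-volume test above can be sketched in code. This is a deliberately simplified illustration: the camera is assumed to be axis-aligned (looking down +z), so the transform is a pure translation; a real renderer would also apply the camera's rotation, and the field-of-view test here is a crude pyramid check rather than full frustum clipping.

```python
import math

def world_to_camera(point, camera_position):
    """Re-express a world-space (x, y, z) point relative to the camera,
    so the camera itself sits at (0, 0, 0) in camera space."""
    return tuple(p - c for p, c in zip(point, camera_position))

def in_view_volume(camera_point, near=1.0, far=100.0, half_angle=0.5):
    """Crude test of the pyramid-shaped viewing volume: the point must lie
    between the near and far planes and within the camera's field of view."""
    x, y, z = camera_point
    if not (near <= z <= far):
        return False
    limit = z * math.tan(half_angle)   # the pyramid widens with distance
    return abs(x) <= limit and abs(y) <= limit

camera = (10.0, 5.0, 0.0)
vertex = (12.0, 6.0, 4.0)
cam_space = world_to_camera(vertex, camera)   # (2.0, 1.0, 4.0)
print(cam_space, in_view_volume(cam_space))
```

The second visibility issue, occlusion by nearer surfaces, is usually handled separately (e.g. with a depth buffer) once points have been transformed into camera space like this.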
In 3D computer graphics, 3D modeling is the process of developing a mathematical representation of any three-dimensional object (either inanimate or living) via specialized software. The product is called a 3D model. It can be displayed as a two-dimensional image through a process called 3D rendering, or used in a computer simulation of physical phenomena. The model can also be physically created using 3D printing devices. Models may be created automatically or manually. The manual modeling process of preparing geometric data for 3D computer graphics is similar to plastic arts such as sculpting.
3D models represent a 3D object using a collection of points in 3D space, connected by various geometric entities such as triangles, lines, curved surfaces, etc. Being a collection of data (points and other information), 3D models can be created by hand, algorithmically (procedural modeling), or scanned.
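A toy illustration of this definition, a 3D model as a collection of points connected by triangles, fits in a few lines. The vertex and triangle data below are made up for the example: two triangles sharing vertex indices form the smallest interesting "surface", a unit square, and a small helper measures its area from the raw data.

```python
# Vertices: points in 3D space.
vertices = [
    (0.0, 0.0, 0.0),
    (1.0, 0.0, 0.0),
    (1.0, 1.0, 0.0),
    (0.0, 1.0, 0.0),
]
# Triangles: triples of indices into the vertex list. Sharing indices
# is what connects the isolated points into a continuous surface.
triangles = [(0, 1, 2), (0, 2, 3)]

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def sub(a, b):
    return (a[0]-b[0], a[1]-b[1], a[2]-b[2])

def mesh_area(vertices, triangles):
    """Sum the areas of all triangles (half the cross-product magnitude)."""
    total = 0.0
    for i, j, k in triangles:
        e1 = sub(vertices[j], vertices[i])
        e2 = sub(vertices[k], vertices[i])
        c = cross(e1, e2)
        total += 0.5 * (c[0]**2 + c[1]**2 + c[2]**2) ** 0.5
    return total

print(mesh_area(vertices, triangles))  # 1.0 - the unit square
```

Real model formats store the same two arrays (plus normals, UVs and materials), whether the data was modeled by hand, generated procedurally, or scanned.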
3D models are widely used anywhere in 3D graphics; in fact, their use predates the widespread use of 3D graphics on personal computers. Many computer games used pre-rendered images of 3D models as sprites before computers could render them in real time. Today, 3D models are used in a wide variety of fields. The medical industry uses detailed models of organs. The movie industry uses them as characters and objects for animated and real-life motion pictures. The video game industry uses them as assets for computer and video games. The science sector uses them as highly detailed models of chemical compounds. The architecture industry uses them to demonstrate proposed buildings and landscapes through software architectural models. The engineering community uses them as designs of new devices, vehicles and structures, as well as for a host of other uses. In recent decades the earth science community has started to construct 3D geological models as a standard practice.
A modern render of the iconic Utah teapot model developed by Martin Newell (1975). The Utah teapot is one of the most common models used in 3D graphics education. Almost all 3D models can be divided into two categories.
Solid - These models define the volume of the object they represent (like a rock). They are more realistic, but more difficult to build. Solid models are mostly used for nonvisual simulations such as medical and engineering simulations, and for CAD and specialized visual applications such as ray tracing and constructive solid geometry.
Shell/boundary - These models represent the surface, i.e. the boundary of the object, not its volume (like an infinitesimally thin eggshell). They are easier to work with than solid models. Almost all visual models used in games and film are shell models.
Because the appearance of an object depends largely on its exterior, boundary representations are common in computer graphics. Two-dimensional surfaces are a good analogy for the objects used in graphics, though quite often these objects are non-manifold. Since surfaces are not finite, a discrete digital approximation is required: polygonal meshes (and to a lesser extent subdivision surfaces) are by far the most common representation, although point-based representations have been gaining some popularity in recent years. Level sets are a useful representation for deforming surfaces which undergo many topological changes, such as fluids.

The process of transforming representations of objects, such as the center point coordinate of a sphere and a point on its circumference, into a polygon representation of a sphere is called tessellation. This step is used in polygon-based rendering, where objects are broken down from abstract representations ("primitives") such as spheres, cones, etc., to so-called meshes, which are nets of interconnected triangles. Meshes of triangles (instead of e.g. squares) are popular because they have proven to be easy to render using scanline rendering. Polygon representations are not used in all rendering techniques, and in these cases the tessellation step is not included in the transition from abstract representation to rendered scene.
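Tessellation can be made concrete with a small sketch: turning an abstract sphere (just a radius) into a mesh of interconnected triangles. This is one simple scheme, a latitude/longitude ("UV sphere") subdivision; the counts chosen below are arbitrary, and finer subdivision trades triangle count for a smoother approximation of the curved surface.

```python
import math

def tessellate_sphere(radius=1.0, stacks=8, slices=16):
    """Approximate a sphere with (stacks x slices) quads, each split
    into two triangles. Returns (vertices, triangles)."""
    vertices, triangles = [], []
    for i in range(stacks + 1):
        phi = math.pi * i / stacks              # latitude, pole to pole
        for j in range(slices):
            theta = 2 * math.pi * j / slices    # longitude, around the axis
            vertices.append((radius * math.sin(phi) * math.cos(theta),
                             radius * math.sin(phi) * math.sin(theta),
                             radius * math.cos(phi)))
    for i in range(stacks):
        for j in range(slices):
            a = i * slices + j                  # corners of one quad
            b = i * slices + (j + 1) % slices   # wrap around the seam
            c = a + slices
            d = b + slices
            triangles.append((a, c, b))         # two triangles per quad
            triangles.append((b, c, d))
    return vertices, triangles

verts, tris = tessellate_sphere()
print(len(verts), len(tris))  # 144 vertices, 256 triangles
```

The abstract primitive (center plus radius) is tiny; the tessellated mesh is what a scanline or GPU rasterizer actually consumes.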
 Modeling processes
There are five popular ways to represent a model:
Polygonal modeling - Points in 3D space, called vertices, are connected by line segments to form a polygonal mesh. Used, for example, by 3DS Max. The vast majority of 3D models today are built as textured polygonal models, because they are flexible and because computers can render them quickly. However, polygons are planar and can only approximate curved surfaces using many polygons.

NURBS modeling - NURBS surfaces are defined by spline curves, which are influenced by weighted control points. The curve follows (but does not necessarily interpolate) the points; increasing the weight of a point pulls the curve closer to that point. NURBS are truly smooth surfaces, not approximations built from small flat surfaces, and so are particularly suitable for organic modeling. Maya and Rhino 3D are the best-known commercial packages that use NURBS natively.

Splines & patches modeling - Like NURBS, splines and patches depend on curved lines to define the visible surface. Patches fall somewhere between NURBS and polygons in terms of flexibility and ease of use.

Primitives modeling - This procedure uses geometric primitives such as balls, cylinders, cones or cubes as building blocks for more complex models. The benefits are quick and easy construction and forms that are mathematically defined and thus absolutely precise; the definition language can also be much simpler. Primitives modeling is well suited to technical applications and less so to organic shapes. Some 3D software can render directly from primitives (like POV-Ray); others use primitives only for modeling and convert them to meshes for further operations and rendering.

Sculpt modeling - A still fairly new method of modeling, 3D sculpting has become very popular in the few short years it has been around. There are currently two types: displacement, which is the most widely used at the moment, and volumetric. Displacement uses a dense model (often generated by subdivision surfaces of a polygon control mesh) and stores new locations for the vertex positions through use of a 32-bit image map. Volumetric sculpting, based loosely on voxels, has capabilities similar to displacement but does not suffer from polygon stretching when there are not enough polygons in a region to achieve a deformation. Both of these methods allow for very artistic exploration, as a new topology is created over the model once its form and possibly its details have been sculpted. The new mesh usually has the original high-resolution mesh information transferred into displacement data or normal map data if it is intended for a game engine.
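The effect of the weighted control points described under NURBS modeling can be demonstrated with a rational quadratic Bézier curve, which is a special case of a NURBS curve. The control points and weights below are made up for the illustration: raising the middle point's weight visibly pulls the curve toward it, without the curve ever passing through the point.

```python
def rational_bezier(t, points, weights):
    """Evaluate a rational quadratic Bezier curve at parameter t in [0, 1].
    Each control point's influence is its Bernstein basis times its weight,
    normalized so the weights sum out."""
    basis = [(1 - t) ** 2, 2 * t * (1 - t), t ** 2]
    wsum = sum(b * w for b, w in zip(basis, weights))
    x = sum(b * w * p[0] for b, w, p in zip(basis, weights, points)) / wsum
    y = sum(b * w * p[1] for b, w, p in zip(basis, weights, points)) / wsum
    return (x, y)

control = [(0.0, 0.0), (1.0, 2.0), (2.0, 0.0)]
mid_w1 = rational_bezier(0.5, control, [1.0, 1.0, 1.0])  # uniform weights
mid_w4 = rational_bezier(0.5, control, [1.0, 4.0, 1.0])  # heavier middle point
print(mid_w1)  # (1.0, 1.0)
print(mid_w4)  # pulled toward (1.0, 2.0): (1.0, 1.6)
```

A NURBS surface extends this idea to a grid of weighted control points with spline basis functions in two parameter directions, but the weight behavior is exactly what this curve shows.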
The modeling stage consists of shaping individual objects that are later used in the scene. Other modeling techniques include constructive solid geometry and implicit surfaces.
Modeling can be performed by means of a dedicated program (e.g., form•Z, Maya, 3DS Max, Blender, Lightwave, Modo), an application component (Shaper, Lofter in 3DS Max), or some scene description language (as in POV-Ray). In some cases, there is no strict distinction between these phases; modeling is then just part of the scene creation process (this is the case, for example, with Caligari trueSpace and Realsoft 3D). Complex materials such as blowing sand, clouds, and liquid sprays are modeled with particle systems: masses of 3D coordinates which have points, polygons, texture splats, or sprites assigned to them.
 Scene setup
Scene setup involves arranging virtual objects, lights, cameras and other entities in a scene which will later be used to produce a still image or an animation.
Lighting is an important aspect of scene setup. As is the case in real-world scene arrangement, lighting is a significant contributing factor to the resulting aesthetic and visual quality of the finished work. As such, it can be a difficult art to master. Lighting effects can contribute greatly to the mood and emotional response evoked by a scene, a fact well known to photographers and theatrical lighting technicians.
It is usually desirable to add color to a model's surface in a user-controlled way prior to rendering. Most 3D modeling software allows the user to color the model's vertices; that color is then interpolated across the model's surface during rendering. This is often how models are colored by the modeling software while the model is being created. The most common method of adding color information to a 3D model is by applying a 2D texture image to the model's surface through a process called texture mapping. Texture images are no different from any other digital image, but during the texture mapping process, special pieces of information (called texture coordinates or UV coordinates) are added to the model that indicate which parts of the texture image map to which parts of the 3D model's surface. Textures allow 3D models to look significantly more detailed and realistic than they would otherwise.
Other effects, beyond texturing and lighting, can be applied to 3D models to add to their realism. For example, surface normals can be tweaked to affect how a surface is lit, bump mapping can be applied, and any number of other 3D rendering tricks can be used. 3D models are often animated for some uses. They can sometimes be animated from within the 3D modeler that created them or else exported to another program.
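The UV lookup at the heart of texture mapping can be sketched as a simple nearest-texel fetch. This is a minimal illustration (real renderers typically filter bilinearly); the texture is stood in for by nested lists, and all names here are invented for the example:

```python
def sample_texture(texture, u, v):
    """Nearest-texel lookup: map UV coordinates in [0, 1] to a texel.

    `texture` is a list of rows of colors; v selects the row, u the column.
    """
    height = len(texture)
    width = len(texture[0])
    # Clamp to [0, 1], then scale to integer texel indices.
    u = min(max(u, 0.0), 1.0)
    v = min(max(v, 0.0), 1.0)
    x = min(int(u * width), width - 1)
    y = min(int(v * height), height - 1)
    return texture[y][x]

# A 2x2 checker texture; each color is an (R, G, B) tuple.
checker = [[(255, 255, 255), (0, 0, 0)],
           [(0, 0, 0), (255, 255, 255)]]
print(sample_texture(checker, 0.9, 0.1))
```

During rendering, each point on a triangle gets a (u, v) pair interpolated from its vertices, and a lookup like this decides its surface color.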
If used for animation, this phase usually makes use of a technique called "keyframing", which facilitates the creation of complicated movement in the scene. With the aid of keyframing, one needs only to choose where an object stops or changes its direction of movement, rotation, or scale; the states in the frames between these moments are then interpolated. The moments of change are known as keyframes. Often extra data is added to the model to make it easier to animate. For example, some 3D models of humans and animals have entire bone systems so they will look realistic when they move and can be manipulated via joints and bones, in a process known as skeletal animation.
 Compared to 2D methods
A fully textured and lit rendering of a 3D model.
Photorealistic effects are often achieved without wireframe modeling and are sometimes indistinguishable in the final form. Some graphic art software includes filters that can be applied to 2D vector graphics or 2D raster graphics on transparent layers. Advantages of wireframe 3D modeling over exclusively 2D methods include:
Flexibility: the ability to change angles or animate images with quicker rendering of the changes. Ease of rendering: automatic calculation and rendering of photorealistic effects, rather than mentally visualizing or estimating them. Accurate photorealism: less chance of human error in misplacing, overdoing, or forgetting to include a visual effect.
Disadvantages compared with 2D photorealistic rendering may include a software learning curve and difficulty achieving certain hyperrealistic effects. Some hyperrealistic effects may be achieved with special rendering filters included in the 3D modeling software. For the best of both worlds, some artists use a combination of 3D modeling followed by editing of the 2D computer-rendered images from the 3D model.
 3D model market
3CT (3D Catalog Technology) has changed the 3D model market by offering quality 3D model libraries free of charge for professionals using various CAD programs. This emerging technology is gradually eroding the traditional "buy and sell" or "object for object exchange" markets. A large market for 3D models (as well as 3D-related content, such as textures, scripts, etc.) still exists, either for individual models or large collections. Online marketplaces for 3D content allow individual artists to sell content that they have created. Often, the artists' goal is to get additional value out of assets they have previously created for projects. By doing so, artists can earn more money from their old content, and companies can save money by buying pre-made models instead of paying an employee to create one from scratch. These
marketplaces typically split the sale between themselves and the artist who created the asset, often in a roughly 50-50 split. In most cases, the artist retains ownership of the 3D model; the customer only buys the right to use and present the model.
 3D scanning
A 3D scanner is a device that analyzes a real-world object or environment to collect data on its shape and possibly its appearance (i.e. color). The collected data can then be used to construct digital three-dimensional models useful for a wide variety of applications. These devices are used extensively by the entertainment industry in the production of movies and video games. Other common applications of this technology include industrial design, orthotics and prosthetics, reverse engineering and prototyping, quality control/inspection and documentation of cultural artifacts. Many different technologies can be used to build 3D scanning devices; each technology comes with its own limitations, advantages and costs. Many limitations remain in the kinds of objects that can be digitized: for example, optical technologies encounter difficulties with shiny, mirrored or transparent objects. There are, however, methods for scanning shiny objects, such as covering them with a thin layer of white powder that helps more light photons reflect back to the scanner. Laser scanners can send trillions of light photons toward an object and only receive a small percentage of those photons back via the optics that they use. The reflectivity of an object is based upon the object's color, or terrestrial albedo. A white surface will reflect lots of light and a black surface will reflect only a small amount of light. Transparent objects such as glass will only refract the light and give false three-dimensional information.
The purpose of a 3D scanner is usually to create a point cloud of geometric samples on the surface of the subject. These points can then be used to extrapolate the shape of the subject (a process called reconstruction). If color information is collected at each point, then the colors on the surface of the subject can also be determined. 3D scanners are very analogous to cameras. Like cameras, they have a cone-like field of view, and like cameras, they can only collect information about surfaces that are not
obscured. While a camera collects color information about surfaces within its field of view, a 3D scanner collects distance information about surfaces within its field of view. The “picture” produced by a 3D scanner describes the distance to a surface at each point in the picture. If a spherical coordinate system is defined in which the scanner is the origin and the vector out from the front of the scanner is φ=0 and θ=0, then each point in the picture is associated with a φ and θ. Together with distance, which corresponds to the r component, these spherical coordinates fully describe the three-dimensional position of each point in the picture, in a local coordinate system relative to the scanner.
For most situations, a single scan will not produce a complete model of the subject. Multiple scans, even hundreds, from many different directions are usually required to obtain information about all sides of the subject. These scans have to be brought into a common reference system, a process usually called alignment or registration, and then merged to create a complete model. This whole process, going from the single range map to the whole model, is usually known as the 3D scanning pipeline.
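The (r, φ, θ) samples described above can be converted to Cartesian coordinates in the scanner's local frame. A minimal sketch, assuming φ is the azimuth and θ the elevation with φ=θ=0 pointing straight out of the scanner along +x (a common but not universal convention):

```python
import math

def scan_point_to_cartesian(r, phi, theta):
    """Convert a range sample (r, azimuth phi, elevation theta) to x, y, z.

    phi = theta = 0 points straight out of the scanner (along +x here).
    """
    x = r * math.cos(theta) * math.cos(phi)
    y = r * math.cos(theta) * math.sin(phi)
    z = r * math.sin(theta)
    return x, y, z

# A point 5 m straight ahead of the scanner:
print(scan_point_to_cartesian(5.0, 0.0, 0.0))
```

Applying this to every sample in a range map yields the scan's point cloud, still in the scanner's own coordinate system; registration then maps each scan's cloud into the shared frame.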
The two types of 3D scanners are contact and non-contact. Non-contact 3D scanners can be further divided into two main categories, active scanners and passive scanners. There are a variety of technologies that fall under each of these categories.
Contact 3D scanners probe the subject through physical touch. A CMM (coordinate measuring machine) is an example of a contact 3D scanner. It is used mostly in manufacturing and can be very precise. The disadvantage of CMMs, though, is that they require contact with the object being scanned. Thus, the act of scanning the object might modify or damage it. This is very significant when scanning delicate or valuable objects such as historical artifacts. The other disadvantage of CMMs is that they are relatively slow compared to the other scanning methods. Physically moving the arm that the probe is mounted on can be very slow, and the fastest CMMs can only operate at a few hundred hertz. In contrast, an optical system like a laser scanner can operate from 10 to 500 kHz. Other examples are the hand-driven touch probes used to digitize clay models in the computer animation industry.
 Non-contact active
Active scanners emit some kind of radiation or light and detect its reflection in order to probe an object or environment. Possible types of emissions used include light, ultrasound and X-rays.
 Time-of-flight
This lidar scanner may be used to scan buildings, rock formations, etc., to produce a 3D model. The lidar can aim its laser beam in a wide range: its head rotates horizontally, and a mirror flips vertically. The laser beam is used to measure the distance to the first object on its path.
The time-of-flight 3D laser scanner is an active scanner that uses laser light to probe the subject. At the heart of this type of scanner is a time-of-flight laser rangefinder. The laser rangefinder finds the distance of a surface by timing the round-trip time of a pulse of light. A laser is used to emit a pulse of light, and the amount of time before the reflected light is seen by a detector is measured. Since the speed of light c is known, the round-trip time determines the travel distance of the light, which is twice the distance between the scanner and the surface. If t is the round-trip time, then the distance is equal to c·t/2. The accuracy of a time-of-flight 3D laser scanner depends on how precisely we can measure t: approximately 3.3 picoseconds is the time taken for light to travel 1 millimetre. The laser rangefinder only detects the distance of one point in its direction of view. Thus, the scanner scans its entire field of view one point at a time by changing the rangefinder's direction of view to scan different points. The view direction of the laser rangefinder can be changed either by rotating the rangefinder itself, or by using a system of rotating mirrors. The latter method is commonly used because mirrors are much lighter and can thus be rotated much faster and with greater accuracy. Typical time-of-flight 3D laser scanners can measure the distance of 10,000 to 100,000 points every second. Time-of-flight devices are also available in a 2D configuration; this is referred to as a time-of-flight camera.
 Triangulation
Principle of a laser triangulation sensor. Two object positions are shown.
Point cloud generation using triangulation with a laser stripe.
The triangulation 3D laser scanner is also an active scanner that uses laser light to probe the environment. In contrast to the time-of-flight 3D laser scanner, the triangulation laser scanner shines a laser on the subject and uses a camera to look for the location of the laser dot. Depending on how far away the laser strikes a surface, the laser dot appears at different places in the camera's field of view. This technique is called triangulation because the laser dot, the camera and the laser emitter form a triangle. The length of one side of the triangle, the distance between the camera and the laser emitter, is known. The angle of the laser emitter corner is also known. The angle of the camera corner can be determined by looking at the location of the laser dot in the camera's field of view. These three pieces of information fully determine the shape and size of the triangle and give the location of the laser dot corner of the triangle. In most cases a laser stripe, instead of a single laser dot, is swept across the object to speed up the acquisition process. The National Research Council of Canada was among the first institutes to develop triangulation-based laser scanning technology, in 1978.
 Notes on time-of-flight and triangulation scanners
Time-of-flight and triangulation range finders each have strengths and weaknesses that make them suitable for different situations. The advantage of time-of-flight range finders is that they are capable of operating over very long distances, on the order of kilometers. These scanners are thus suitable for scanning large structures like buildings or geographic features. Their disadvantage is accuracy: due to the high speed of light, timing the round-trip time is difficult, and the accuracy of the distance measurement is relatively low, on the order of millimeters. Triangulation range finders are exactly the opposite.
They have a limited range of a few meters, but their accuracy is relatively high, on the order of tens of micrometers.
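The triangle geometry behind triangulation scanning can be sketched numerically: knowing the baseline between emitter and camera and the two corner angles, the law of sines gives the distance to the laser dot. This is a simplified 2D sketch with illustrative names:

```python
import math

def triangulate(baseline, laser_angle, camera_angle):
    """Distance from the camera to the laser dot via the law of sines.

    Angles are measured from the baseline, in radians.
    """
    # The angles of a triangle sum to pi; the third angle sits at the dot.
    dot_angle = math.pi - laser_angle - camera_angle
    # Law of sines: camera-to-dot side / sin(laser_angle)
    #             = baseline / sin(dot_angle)
    return baseline * math.sin(laser_angle) / math.sin(dot_angle)

# Laser and camera 0.1 m apart, both corners at 60 degrees
# (an equilateral triangle, so the dot is 0.1 m away):
d = triangulate(0.1, math.radians(60), math.radians(60))
print(round(d, 4))
```

In a real scanner the camera angle is not set by hand but recovered from where the dot lands on the image sensor; everything else follows from this same triangle.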
A time-of-flight scanner's accuracy can be lost when the laser hits the edge of an object, because the information sent back to the scanner comes from two different locations for one laser pulse. The coordinate, relative to the scanner's position, for a point that has hit the edge of an object will be calculated based on an average and will therefore put the point in the wrong place. When using a high-resolution scan on an object, the chances of the beam hitting an edge are increased, and the resulting data will show noise just behind the edges of the object. Scanners with a smaller beam width help to solve this problem but are limited in range, as the beam width increases over distance. Software can also help, by determining that the first object hit by the laser beam should cancel out the second.
At a rate of 10,000 sample points per second, low-resolution scans can take less than a second, but high-resolution scans, requiring millions of samples, can take minutes for some time-of-flight scanners. The problem this creates is distortion from motion. Since each point is sampled at a different time, any motion in the subject or the scanner will distort the collected data. Thus, it is usually necessary to mount both the subject and the scanner on stable platforms and to minimize vibration. Using these scanners to scan objects in motion is very difficult. Recently, there has been research on compensating for distortion from small amounts of vibration.
When scanning in one position for any length of time, slight movement can occur in the scanner position due to changes in temperature. If the scanner is set on a tripod and there is strong sunlight on one side of the scanner, that side of the tripod will expand and slowly distort the scan data from one side to the other. Some laser scanners have a level compensator built into them to counteract any movement of the scanner during the scan process.
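The time-of-flight relation discussed above (distance = c·t/2) can be sketched numerically, which also makes the picosecond timing requirement concrete:

```python
# Time-of-flight: distance from the round-trip time of a light pulse.
C = 299_792_458.0  # speed of light in m/s

def tof_distance(round_trip_seconds):
    """The pulse travels to the surface and back, so halve the path length."""
    return C * round_trip_seconds / 2.0

# Light takes about 3.34 picoseconds per millimetre of one-way travel,
# so a 6.67 ps round trip corresponds to roughly 1 mm of distance:
print(tof_distance(6.67e-12) * 1000)  # distance in millimetres
```

This is why millimetre accuracy demands timing electronics resolving a few picoseconds, and why time-of-flight scanners are less precise than triangulation at close range.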
 Conoscopic holography
In a conoscopic system, a laser beam is projected onto the surface, and the immediate reflection along the same ray path is put through a conoscopic crystal and projected onto a CCD. The result is a diffraction pattern that can be frequency-analyzed to determine the distance to the measured surface. The main advantage of conoscopic holography is that only a single ray path is needed for measuring, giving an opportunity to measure, for instance, the depth of a finely drilled hole.
 Handheld laser
Handheld laser scanners create a 3D image through the triangulation mechanism described above: a laser dot or line is projected onto an object from a handheld device, and a sensor (typically a charge-coupled device or position-sensitive device) measures the distance to the surface. Data is collected in relation to an internal coordinate system, so to collect data while the scanner is in motion, the position of the scanner must be determined. The position can be determined by the scanner using reference features on the surface being scanned (typically adhesive reflective tabs) or by using an external tracking method. External tracking often takes the form of a laser tracker (to provide the sensor position) with an integrated camera (to determine the orientation of the scanner), or a photogrammetric solution using three or more cameras providing the complete six degrees of freedom of the scanner. Both techniques tend to use infrared light-emitting diodes attached to the scanner, which are seen by the camera(s) through filters providing resilience to ambient lighting.
Data is collected by a computer and recorded as data points within three-dimensional space; with processing, this can be converted into a triangulated mesh and then a computer-aided design model, often as non-uniform rational B-spline (NURBS) surfaces. Handheld laser scanners can combine this data with passive, visible-light sensors, which capture surface textures and colors, to build (or "reverse engineer") a full 3D model.
 Structured light
Structured-light 3D scanners project a pattern of light on the subject and look at the deformation of the pattern on the subject. The pattern may be one-dimensional or two-dimensional. An example of a one-dimensional pattern is a line. The line is projected onto the subject using either an LCD projector or a sweeping laser. A camera, offset slightly from the pattern projector, looks at the shape of the line and uses a technique similar to triangulation to calculate the distance of every point on the line. In the case of a single-line pattern, the line is swept across the field of view to gather distance information one strip at a time. An example of a two-dimensional pattern is a grid or a line stripe pattern. A camera is used to look at the deformation of the pattern, and an algorithm is used to calculate the distance at each point in the pattern. Consider an array of parallel vertical laser stripes sweeping horizontally across a target. In the simplest case, one could analyze an image and assume that the left-to-right sequence of stripes reflects the sequence of the lasers in the array, so that the leftmost image stripe is the first laser, the next one is the second laser, and so on. In nontrivial targets having holes, occlusions, and rapid depth changes, however, this sequencing breaks down, as stripes are often hidden and may even appear to change order, resulting in laser stripe ambiguity. This problem can be solved using algorithms for multistripe laser triangulation.
Structured-light scanning is still a very active area of research, with many papers published each year. The advantage of structured-light 3D scanners is speed. Instead of scanning one point at a time, structured-light scanners scan multiple points or the entire field of view at once. This reduces or eliminates the problem of distortion from motion. Some existing systems are capable of scanning moving objects in real time. A real-time scanner using digital fringe projection and a phase-shifting technique (a variant of structured-light methods) was developed to capture, reconstruct, and render high-density details of dynamically deformable objects (such as facial expressions) at 40 frames per second. More recently, another scanner was developed to which different patterns can be applied, with a frame rate for capturing and data processing of 120 frames per second. It can also scan isolated surfaces, for example two moving hands.
 Modulated light
Modulated-light 3D scanners shine a continually changing light at the subject. Usually the light source simply cycles its amplitude in a sinusoidal pattern. A camera detects the reflected light, and the amount by which the pattern is shifted determines the distance the light traveled. Modulated light also allows the scanner to ignore light from sources other than a laser, so there is no interference.
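The modulated-light principle can be sketched as follows: for a sinusoidally modulated source, the phase shift of the returned signal encodes the round-trip distance. This is a simplified sketch that ignores the phase-wrapping ambiguity real systems must resolve; the names are illustrative:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def phase_to_distance(phase_shift, modulation_hz):
    """Distance from the phase shift of a sinusoidally modulated signal.

    One full cycle (2*pi) of phase corresponds to one modulation
    wavelength of round-trip travel; halve it for the one-way distance.
    """
    wavelength = C / modulation_hz
    return (phase_shift / (2 * math.pi)) * wavelength / 2.0

# A quarter-cycle phase shift at 10 MHz modulation (about 30 m wavelength):
print(phase_to_distance(math.pi / 2, 10e6))
```

Because phase repeats every cycle, a single modulation frequency only measures distance unambiguously within half a wavelength; practical scanners combine frequencies to extend the range.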
 Computed tomography
Computed tomography (CT) is a medical imaging method which generates a three-dimensional image of the inside of an object from a large series of two-dimensional X-ray images. CT produces a volume of data which can be manipulated, reformatted in various planes, or even rendered as volumetric (3D) representations of structures. Although most common in medicine, CT is also used in other fields, such as nondestructive materials testing, reverse engineering, and the study of biological and paleontological specimens.
 Microtomography
Microtomography scanners, like computed tomography scanners, use X-rays to create cross-sections of a 3D object that can later be used to recreate a virtual model without destroying the original. The term "micro" indicates that the pixel sizes of the cross-sections are in the micrometre range. These pixel sizes have also given rise to the terms microcomputed tomography, micro-CT, X-ray tomographic microscopy, XMT, etc. All of these names generally represent the same class of instruments. These scanners are typically used for small animals, biomedical samples, foams, composites, foods, microfossils, and other studies for which minute detail is desired. In recent years the concept of nanotomography (nano-CT) has been introduced, which scans in the nanometre range.
 Magnetic resonance imaging
Magnetic resonance imaging (MRI) is primarily a medical imaging technique most commonly used in radiology to visualize the internal structure and function of the body. MRI provides much greater contrast between the different soft tissues of the body than computed tomography (CT) does, making it especially useful in neurological (brain), musculoskeletal, cardiovascular, and oncological (cancer) imaging. Unlike CT, it uses no ionizing radiation; instead it uses a powerful magnetic field to align the nuclear magnetization of (usually) hydrogen atoms in water in the body.
 Non-contact passive
Passive scanners do not emit any kind of radiation themselves; instead they rely on detecting reflected ambient radiation. Most scanners of this type detect visible light because it is readily available ambient radiation. Other types of radiation, such as infrared, could also be used. Passive methods can be very cheap, because in most cases they do not need particular hardware.
 Stereoscopic
Stereoscopic systems usually employ two video cameras, slightly apart, looking at the same scene. By analyzing the slight differences between the images seen by each camera, it is possible to determine the distance at each point in the images. This method is based on human stereoscopic vision.
 Photometric
Photometric systems usually use a single camera, but take multiple images under varying lighting conditions. These techniques attempt to invert the image formation model in order to recover the surface orientation at each pixel.
 Silhouette
These 3D scanners use outlines created from a sequence of photographs taken around a three-dimensional object against a well-contrasted background. The silhouettes are extruded and intersected to form the visual hull approximation of the object. With these kinds of techniques some concavities of an object (like the interior of a bowl) cannot be detected.
 User assisted (image-based modeling)
Other methods, based on user-assisted detection and identification of features and shapes on a set of different pictures of an object, are able to build an approximation of the object itself. These kinds of techniques are useful for building fast approximations of simply shaped objects like buildings. Various commercial packages are available, such as iModeller, D-Sculptor and RealViz ImageModeler. This sort of 3D scanning is based on the principles of photogrammetry. It is also somewhat similar in methodology to panoramic photography, except that the photos are taken of one object in three-dimensional space in order to replicate it, rather than taken from one point in order to replicate the surrounding environment.
 Reconstruction, or Modeling
 From point clouds
The point clouds produced by 3D scanners are usually not used directly, although for simple visualization and measurement in the architecture and construction world, points may suffice. Most applications instead use polygonal 3D models, NURBS surface models, or editable feature-based CAD models (also known as solid models). The process of converting a point cloud into a usable 3D model in any of these forms is called reconstruction, or modeling.
Polygon mesh models
In a polygonal representation of a shape, a curved surface is modeled as many small faceted flat surfaces (think of a sphere modeled as a disco ball). Polygon models, also called mesh models, are useful for visualization and for some CAM (i.e., machining), but are generally "heavy" (i.e., very large data sets) and relatively un-editable in this form. Reconstruction to a polygonal model involves finding and connecting adjacent points with straight lines in order to create a continuous surface. Many applications, both free and non-free, are available for this purpose (e.g., MeshLab, kubit PointCloud for AutoCAD, PhotoModeler, ImageModel, PolyWorks, Rapidform, Geomagic, Imageware, Rhino, etc.).
Surface models
The next level of sophistication in modeling involves using a quilt of curved surface patches to model the shape. These might be NURBS, T-Splines or other curved representations of curved topology. Using NURBS, the sphere is a true mathematical sphere. Some applications offer patch layout by hand, but the best in class offer both automated and manual patch layout. These patches have the advantage of being lighter and more manipulable when exported to CAD. Surface models are somewhat editable, but only in a sculptural sense of pushing and pulling to deform the surface. This representation lends itself well to modeling organic and artistic shapes. Providers of surface modelers include Rapidform, Geomagic, Rhino, Maya, T-Splines, etc.
Solid CAD models
From an engineering/manufacturing perspective, the ultimate representation of a digitized shape is the editable, parametric CAD model. After all, CAD is the common "language" of industry for describing, editing and maintaining the shape of the enterprise's assets. In CAD, the sphere is described by parametric features which are easily edited by changing a value (e.g., center point and radius). These CAD models describe not simply the envelope or shape of the object; they also embody the "design intent" (i.e., critical features and their relationships to other features). An example of design intent not evident in the shape alone might be a brake drum's lug bolts, which must be concentric with the hole in the center of the drum. This knowledge would drive the sequence and method of creating the CAD model: a designer aware of this relationship would not design the lug bolts referenced to the outside diameter, but instead to the center. A modeler creating a CAD model will want to include both shape and design intent in the complete CAD model. Vendors offer different approaches to arriving at the parametric CAD model.
Some export the NURBS surfaces and leave it to the CAD designer to complete the model in CAD (e.g., Geomagic, Imageware, Rhino). Others use the scan data to create an editable and verifiable feature-based model that is imported into CAD with its full feature tree intact, yielding a complete, native CAD model capturing both shape and design intent (e.g., Rapidform). Still other CAD applications are robust enough to manipulate limited point or polygon models within the CAD environment (e.g., CATIA).
 From a group of 2D slices
CT, MRI, or micro-CT scanners do not produce point clouds but a set of 2D slices which are then "stacked together" to produce a 3D representation. There are several ways to do this, depending on the output required:
Volume rendering
Different parts of an object usually have different threshold values or greyscale densities. From this, a three-dimensional model can be constructed and displayed on screen. Multiple models can be constructed from various thresholds, allowing different colors to represent each component of the object. Volume rendering is usually only used for visualisation of the scanned object.
Image segmentation
Where different structures have similar threshold/greyscale values, it can become impossible to separate them simply by adjusting volume rendering parameters. The solution is called segmentation, a manual or automatic procedure that can remove the unwanted structures from the image. Image segmentation software usually allows export of the segmented structures in CAD or STL format for further manipulation.
Image-based meshing
When using 3D image data for computational analysis (e.g. CFD and FEA), simply segmenting the data and meshing from CAD can become time consuming, and virtually intractable for the complex topologies typical of image data. The solution is called image-based meshing, an automated process of generating an accurate and realistic geometrical description of the scan data.
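The simplest form of segmentation, thresholding a stack of greyscale slices into a binary volume, can be sketched as follows. This is a toy illustration with nested lists standing in for image data; real tools work on large voxel arrays:

```python
def threshold_volume(slices, threshold):
    """Binarize a stack of 2D greyscale slices into a 3D volume.

    Voxels at or above `threshold` (e.g. dense material such as bone
    in a CT scan) become 1, everything else 0.
    """
    return [[[1 if voxel >= threshold else 0 for voxel in row]
             for row in slice_2d]
            for slice_2d in slices]

# Two tiny 2x2 slices; values are greyscale intensities:
slices = [[[10, 200], [30, 180]],
          [[220, 40], [190, 20]]]
print(threshold_volume(slices, 100))
```

The resulting binary volume is what surface-extraction or meshing steps then turn into geometry; structures with overlapping greyscale ranges are exactly the cases where this simple approach fails and manual or smarter segmentation is needed.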
 Material processing and production
Laser scanning describes a method where a surface is sampled or scanned using laser technology. Several areas of application exist that differ mainly in the power of the lasers used and in the results of the scanning process. Low-power lasers are used when the scanned surface must not be affected, e.g. when it only has to be digitized. Confocal or 3D laser scanning are methods of gathering information about the scanned surface.
Depending on its power, the laser's effect on a workpiece differs: lower powers are used for laser engraving, where material is partially removed. At higher powers the material becomes fluid and laser welding can be performed; if the power is high enough to remove the material completely, laser cutting is possible. Laser scanning is also used in rapid prototyping, for example when a prototype is built up by laser sintering.
The principle is the same for all these applications: software running on a PC or an embedded system controls the complete process and is connected to a scanner card. That card converts the received vector data into movement information, which is sent to the scanhead. The scanhead consists of two mirrors that deflect the laser beam in one plane (the X and Y coordinates). The third dimension is realized, if necessary, by a specific optic able to move the laser's focal point in the depth direction (Z axis). The third dimension is needed for special applications such as rapid prototyping, where an object is built up layer by layer, or in-glass marking, where the laser must affect the material at specific positions within it. For these cases it is important that the laser has as small a focal point as possible. For enhanced laser scanning applications and/or high material throughput during production, scanning systems with more than one scanhead are used.
Here the software has to coordinate exactly what is done within such a multihead application: all available heads may mark the same job to finish processing faster, or the heads may mark a single job in parallel, with each scanhead performing part of the job in the case of large working areas. Structured light projection systems are also used for solar cell flatness metrology, enabling stress calculation with throughput in excess of 2000 wafers per hour.
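The scanhead geometry described above, two mirrors steering the beam in X and Y, amounts to a small coordinate conversion. The sketch below is a deliberately simplified model and an assumption of this text, not a vendor algorithm: it ignores the physical offset between the two mirrors and any lens distortion correction that real scanner cards apply.

```python
import math

def mirror_angles(x, y, working_distance):
    """Convert a target (x, y) on the working plane into the two mirror
    rotation angles of a galvo scanhead.

    Simplified flat-field model: the beam deflects by 2*theta when a
    mirror rotates by theta, so each mirror turns half the beam angle.
    """
    beam_x = math.atan2(x, working_distance)  # beam angle steered by one mirror
    beam_y = math.atan2(y, working_distance)  # beam angle steered by the other
    return beam_x / 2.0, beam_y / 2.0

# 50 mm off-axis at a 100 mm working distance:
ax, ay = mirror_angles(50.0, 0.0, 100.0)
print(round(math.degrees(ax), 2))  # ~13.28 degrees of mirror rotation
```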
 Construction industry and civil engineering
As-built drawings of bridges, industrial plants and monuments
Documentation of historical sites
Site modeling and layout
Quality control
Quantity surveys
Freeway redesign
Establishing a benchmark of pre-existing shape/state in order to detect structural changes resulting from exposure to extreme loadings such as earthquake, vessel/truck impact or fire
Creating GIS (geographic information system) maps and geomatics
 Benefits of 3D scanning
3D model scanning can benefit the design process by:
Increasing effectiveness when working with complex parts and shapes
Helping with the design of products to fit someone else's part
Providing an up-to-date model when existing CAD models are outdated
Replacing missing or older parts
 Entertainment
3D scanners are used by the entertainment industry to create digital 3D models for both movies and video games. In cases where a real-world equivalent of a model exists, it is much faster to scan the real-world object than to manually create a model using 3D modeling software. Frequently, artists sculpt physical models of what they want and scan them into digital form rather than directly creating digital models on a computer.
 Reverse engineering
Reverse engineering of a mechanical component requires a precise digital model of the object to be reproduced. Rather than a set of points, a precise digital model can be represented by a polygon mesh, a set of flat or curved NURBS surfaces, or, ideally for mechanical components, a CAD solid model. A 3D scanner can be used to digitize free-form or gradually changing shapes as well as prismatic geometries, whereas a coordinate measuring machine is usually used only to determine simple dimensions of a highly prismatic model. These data points are then processed to create a usable digital model, usually using specialized reverse engineering software (see Solid CAD models, above).
 Cultural Heritage
An example of real object replication by means of 3D scanning and 3D printing
Many research projects have undertaken the scanning of historical sites and artifacts, both for documentation and for analysis. The combined use of 3D scanning and 3D printing technologies allows the replication of real objects without the use of traditional plaster casting techniques, which in many cases can be too invasive to perform on precious or delicate cultural heritage artifacts. In the side figure, the gargoyle model on the left was digitally acquired using a 3D scanner and the resulting 3D data was processed using MeshLab. The digital 3D model, shown on the laptop's screen, was then used by a rapid prototyping machine to create a real resin replica of the original object.
 Michelangelo
In 1999, two different research groups started scanning Michelangelo's statues. Stanford University, with a group led by Marc Levoy, used a custom laser triangulation scanner built by Cyberware to scan Michelangelo's statues in Florence, notably the David, the Prigioni and the four statues in the Medici Chapel. The scans produced a data point density of one sample per 0.25 mm, detailed enough to see Michelangelo's chisel marks. These detailed scans produced a huge amount of data (up to 32 gigabytes), and processing the data took 5 months. In approximately the same period, a research group from IBM, led by H. Rushmeier and F. Bernardini, scanned the Pietà of Florence, acquiring both geometric and color details.
 Monticello
In 2002, David Luebke et al. scanned Thomas Jefferson's Monticello. A commercial time-of-flight laser scanner, the DeltaSphere 3000, was used. The scanner data was later combined with color data from digital photographs to create the Virtual Monticello and Jefferson's Cabinet exhibits at the New Orleans Museum of Art in 2003. The Virtual Monticello exhibit simulated a window looking into Jefferson's Library.
The exhibit consisted of a rear projection display on a wall and a pair of stereo glasses for the viewer. The glasses, combined with polarized projectors, provided a 3D effect. Position tracking hardware on the glasses allowed the display to adapt as the viewer moved around, creating the illusion that the display was actually a hole in the wall looking into Jefferson's Library. The Jefferson's Cabinet exhibit was a barrier stereogram (essentially a non-active hologram that appears different from different angles) of Jefferson's Cabinet.
 Cuneiform tablets
In 2003, Subodh Kumar et al. undertook the 3D scanning of ancient cuneiform tablets. Again, a laser triangulation scanner was used. The tablets were scanned on a regular grid pattern at a resolution of 0.025 mm.
 "Plastico di Roma antica"
In 2005, Gabriele Guidi et al. scanned the "Plastico di Roma antica", a model of Rome created in the last century. Neither the triangulation method nor the time-of-flight method satisfied the requirements of this project, because the item to be scanned was both large and contained small details. They found, though, that a modulated light scanner was able to provide both the ability to scan an object the size of the model and the accuracy that was needed. The modulated light scanner was supplemented by a triangulation scanner, which was used to scan some parts of the model.
 Dental CAD/CAM
Many chairside dental CAD/CAM systems and dental laboratory CAD/CAM systems use 3D scanner technologies to capture the 3D surface of a dental preparation (either in vivo or in vitro), in order to design a restoration digitally using CAD software and ultimately produce the final restoration using a CAM technology (such as a CNC milling machine or 3D printer). The chairside systems are designed to facilitate the 3D scanning of a preparation in vivo and produce the restoration (such as a crown, onlay, inlay or veneer).
 Orthotics CAD/CAM
Many orthotists also use 3D scanners to capture the 3D shape of a patient, a practice that is gradually supplanting tedious plaster casting. CAD/CAM software is then used to design and manufacture the orthosis or prosthesis.
 Quality Assurance / Industrial Metrology
The digitization of real-world objects is of vital importance in various application domains. This method is especially applied in industrial quality assurance to measure geometric dimensional accuracy. Industrial processes such as assembly are complex, highly automated and typically based on CAD (computer-aided design) data. The problem is that the same degree of automation is also required for quality assurance. It is, for example, a very complex task to assemble a modern car, since it consists of many parts that must fit together at the very end of the production line. The optimal performance of this process is guaranteed by quality assurance systems. In particular, the geometry of the metal parts must be checked in order to ensure that they have the correct dimensions, fit together and finally work reliably.
Within highly automated processes, the resulting geometric measures are transferred to machines that manufacture the desired objects. Due to mechanical uncertainties and abrasion, the result may differ from its digital nominal. In order to automatically capture and evaluate these deviations, the manufactured part must be digitized as well. For this purpose, 3D scanners are applied to generate point samples from the object's surface, which are finally compared against the nominal data. The process of comparing 3D data against a CAD model is referred to as CAD-Compare, and can be a useful technique for applications such as determining wear patterns on molds and tooling, determining accuracy of final build, analyzing gap and flush, or analyzing highly complex sculpted surfaces. At present, laser triangulation scanners, structured light and contact scanning are the predominant technologies employed for industrial purposes; contact scanning remains the slowest, but overall most accurate, option.
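The CAD-Compare idea can be illustrated with a toy nominal geometry. In this sketch the nominal CAD model is reduced to a sphere (a centerpoint and radius, echoing the earlier modeling example); the function name, the point data and the tolerance are invented for the example, and real systems compare against full CAD surfaces rather than a single analytic shape.

```python
import math

def cad_compare_sphere(points, center, radius, tolerance):
    """Compare scanned points against a nominal sphere and report each
    point's signed deviation plus the count outside tolerance."""
    deviations = []
    out_of_tolerance = 0
    for p in points:
        dist = math.dist(p, center)   # distance from point to sphere center
        dev = dist - radius           # signed deviation from the nominal wall
        deviations.append(dev)
        if abs(dev) > tolerance:
            out_of_tolerance += 1
    return deviations, out_of_tolerance

# Three scan samples of a nominal 10 mm sphere, tolerance 0.1 mm:
points = [(10.0, 0.0, 0.0), (0.0, 10.05, 0.0), (0.0, 0.0, 9.7)]
devs, bad = cad_compare_sphere(points, (0.0, 0.0, 0.0), 10.0, 0.1)
print(bad)  # 1 point deviates by more than the tolerance
```

In production software the per-point deviations would typically be color-mapped onto the scan to visualize wear or build error.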
From Wikipedia, the free encyclopedia
3D rendering is the 3D computer graphics process of automatically converting 3D wire frame models into 2D images with 3D photorealistic effects on a computer.
 Rendering methods
Main article: Rendering (computer graphics)
Rendering is the final process of creating the actual 2D image or animation from the prepared scene. This can be compared to taking a photo or filming the scene after the setup is finished in real life. Several different, and often specialized, rendering methods have been developed. These range from the distinctly non-realistic wireframe rendering through polygon-based rendering to more advanced techniques such as scanline rendering, ray tracing, or radiosity. Rendering may take from fractions of a second to days for a single image/frame. In general, different methods are better suited for either photo-realistic rendering or real-time rendering.
 Real-time
An example of a ray-traced image, which typically takes seconds or minutes to render
Rendering for interactive media, such as games and simulations, is calculated and displayed in real time, at rates of approximately 20 to 120 frames per second. In real-time rendering, the goal is to show as much information as the eye can process in a 30th of a second (or one frame, in the case of 30-frame-per-second animation). The goal here is primarily speed, not photo-realism. In fact, exploitations are made in the way the eye 'perceives' the world, so the final image presented is not necessarily that of the real world, but one close enough for the human eye to tolerate. Rendering software may simulate such visual effects as lens flares, depth of field or motion blur. These are attempts to simulate visual phenomena resulting from the optical characteristics of cameras and of the human eye. These effects can lend an element of realism to a scene, even if the effect is merely a simulated artifact of a camera. This is the basic method employed in games, interactive worlds and VRML. The rapid increase in computer processing power has allowed a progressively higher degree of realism even for real-time rendering, including techniques such as HDR rendering. Real-time rendering is often polygonal and aided by the computer's GPU.
 Non real-time
Computer-generated image created by Gilles Tran
Animations for non-interactive media, such as feature films and video, are rendered much more slowly. Non-real-time rendering enables the leveraging of limited processing power to obtain higher image quality. Rendering times for individual frames may vary from a few seconds to several days for complex scenes. Rendered frames are stored on a hard disk and can then be transferred to other media such as motion picture film or optical disk. These frames are then displayed sequentially at high frame rates, typically 24, 25, or 30 frames per second, to achieve the illusion of movement.
When the goal is photo-realism, techniques such as ray tracing or radiosity are employed. This is the basic method employed in digital media and artistic works. Techniques have been developed for the purpose of simulating other naturally occurring effects, such as the interaction of light with various forms of matter. Examples of such techniques include particle systems (which can simulate rain, smoke, or fire), volumetric sampling (to simulate fog, dust and other spatial atmospheric effects), caustics (to simulate light focusing by uneven light-refracting surfaces, such as the light ripples seen on the bottom of a swimming pool), and subsurface scattering (to simulate light reflecting inside the volumes of solid objects such as human skin).
The rendering process is computationally expensive, given the complex variety of physical processes being simulated. Computer processing power has increased rapidly over the years, allowing for a progressively higher degree of realistic rendering. Film studios that produce computer-generated animations typically make use of a render farm to generate images in a timely manner. However, falling hardware costs mean that it is entirely possible to create small amounts of 3D animation on a home computer system.
The output of the renderer is often used as only one small part of a completed motion-picture scene. Many layers of material may be rendered separately and integrated into the final shot using compositing software.
 Reflection and shading models
Models of reflection/scattering and shading are used to describe the appearance of a surface. Although these issues may seem like problems all on their own, they are studied almost exclusively within the context of rendering. Modern 3D computer graphics rely heavily on a simplified reflection model called the Phong reflection model (not to be confused with Phong shading). In refraction of light, an important concept is the refractive index. In most 3D programming implementations, the term for this value is "index of refraction," usually abbreviated "IOR." Shading can be broken down into two orthogonal issues, which are often studied independently:
Reflection/scattering - how light interacts with the surface at a given point
Shading - how material properties vary across the surface
The Utah teapot
Reflection or scattering is the relationship between incoming and outgoing illumination at a given point. Descriptions of scattering are usually given in terms of a bidirectional scattering distribution function, or BSDF. Popular reflection rendering techniques in 3D computer graphics include:
Flat shading: a technique that shades each polygon of an object based on the polygon's "normal" and the position and intensity of a light source.
Gouraud shading: invented by H. Gouraud in 1971, a fast and resource-conscious vertex shading technique used to simulate smoothly shaded surfaces.
Texture mapping: a technique for simulating a large amount of surface detail by mapping images (textures) onto polygons.
Phong shading: invented by Bui Tuong Phong, used to simulate specular highlights and smooth shaded surfaces.
Bump mapping: invented by Jim Blinn, a normal-perturbation technique used to simulate wrinkled surfaces.
Cel shading: a technique used to imitate the look of hand-drawn animation.
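Flat shading, the simplest entry in the list above, reduces to a single dot product per polygon. Below is a minimal Lambertian sketch; the function name and the example vectors are illustrative, and both vectors are assumed to be normalized.

```python
def flat_shade(normal, light_dir, light_intensity=1.0):
    """Flat (Lambertian) shading for one polygon: brightness is
    proportional to the cosine of the angle between the polygon normal
    and the light direction, clamped at zero for faces turned away
    from the light. Both vectors must be unit length.
    """
    n_dot_l = sum(n * l for n, l in zip(normal, light_dir))
    return max(0.0, n_dot_l) * light_intensity

# Face pointing straight at the light: full intensity.
print(flat_shade((0.0, 0.0, 1.0), (0.0, 0.0, 1.0)))   # 1.0
# Face turned away from the light: black.
print(flat_shade((0.0, 0.0, -1.0), (0.0, 0.0, 1.0)))  # 0.0
```

Gouraud and Phong shading apply essentially the same computation per vertex or per pixel rather than per polygon.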
 Shading
Shading addresses how different types of scattering are distributed across the surface (i.e., which scattering function applies where). Descriptions of this kind are typically expressed with a program called a shader. (Note that there is some confusion, since the word "shader" is sometimes used for programs that describe local geometric variation.) A simple example of shading is texture mapping, which uses an image to specify the diffuse color at each point on a surface, giving it more apparent detail.
 Transport
Transport describes how illumination in a scene gets from one place to another. Visibility is a major component of light transport.
 Projection
Perspective projection
The shaded three-dimensional objects must be flattened so that the display device - namely a monitor - can display them in only two dimensions; this process is called 3D projection. This is done using projection and, for most applications, perspective projection. The basic idea behind perspective projection is that objects that are further away are made smaller in relation to those that are closer to the eye. Programs produce perspective by multiplying a dilation constant raised to the power of the negative of the distance from the observer. A dilation constant of one means that there is no perspective. High dilation constants can cause a "fisheye" effect in which image distortion begins to occur. Orthographic projection is used mainly in CAD or CAM applications where scientific modeling requires precise measurements and preservation of the third dimension.
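The dilation-constant formulation just described can be written out directly. This is a sketch of that specific formulation, not of the perspective-divide used by most real renderers; the function name and the 1.1 constant are arbitrary choices for the example.

```python
def project_perspective(x, y, distance, dilation=1.1):
    """Scale a point's screen coordinates by the dilation constant raised
    to the negative of its distance from the observer.

    dilation == 1.0 gives no perspective (orthographic behaviour);
    large dilation constants exaggerate it toward a "fisheye" look.
    """
    scale = dilation ** (-distance)
    return x * scale, y * scale

print(project_perspective(100.0, 50.0, 0.0))   # at the eye: unchanged
print(project_perspective(100.0, 50.0, 10.0))  # farther away: smaller
```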
3D printing is a form of additive manufacturing technology in which a three-dimensional object is created by successive layers of material. 3D printers are generally faster, more affordable and easier to use than other additive manufacturing technologies. 3D printers offer product developers the ability to print parts and assemblies made of several materials with different mechanical and physical properties in a single build process. Advanced 3D printing technologies yield models that closely emulate the look, feel and functionality of product prototypes. In recent years 3D printers have become financially accessible to small and medium-sized businesses, thereby taking prototyping out of heavy industry and into the office environment. It is now also possible to simultaneously deposit different types of materials. While rapid prototyping dominates current uses, 3D printers offer tremendous potential for production applications as well. The technology also finds use in the jewellery, footwear, industrial design, architecture, automotive, aerospace, dental and medical industries.
 Technologies
Previous means of producing a prototype typically took person-hours, many tools, and skilled labor. For example, after a new street light luminaire was digitally designed, drawings were sent to skilled craftspeople, who painstakingly followed the design on paper and produced a three-dimensional prototype in wood using an entire shop full of expensive woodworking machinery and tools. This was typically not a speedy process, and the skilled labor was not cheap. Hence the need for a faster and cheaper process to produce prototypes; as an answer to this need, rapid prototyping was born.
One variation of 3D printing consists of an inkjet printing system. Layers of a fine powder (plaster, corn starch, or resins) are selectively bonded by "printing" an adhesive from the inkjet printhead in the shape of each cross-section as determined by a CAD file. This technology is the only one that allows the printing of full-colour prototypes. It is also recognized as the fastest method. Alternately, these machines feed liquids, such as photopolymer, through an inkjet-type printhead to form each layer of the model. These photopolymer-phase machines use an ultraviolet (UV) flood lamp mounted in the print head to cure each layer as it is deposited.
Fused deposition modeling (FDM), a technology also used in traditional rapid prototyping, uses a nozzle to deposit molten polymer onto a support structure, layer by layer. Another approach is selective fusing of print media in a granular bed. In this variation, the unfused media serves to support overhangs and thin walls in the part being produced, reducing the need for auxiliary temporary supports for the workpiece. Typically a laser is used to sinter the media and form the solid. Examples of this are SLS (selective laser sintering) and DMLS (direct metal laser sintering), using metals. Finally, ultra-small features may be made by the 3D microfabrication technique of two-photon photopolymerization.
In this approach, the desired 3D object is traced out in a block of gel by a focused laser. The gel is cured to a solid only in the places where the laser was focused, due to the nonlinear nature of photoexcitation, and then the remaining gel is washed away. Feature sizes of under 100 nm are easily produced, as well as complex structures such as moving and interlocked parts.
Each technology has its advantages and drawbacks, and consequently some companies offer a choice between powder and polymer as the material from which the object emerges. Generally, the main considerations are speed, cost of the printed prototype, cost of the 3D printer, choice of materials, and colour capabilities. Unlike "traditional" additive systems such as stereolithography, 3D printing is optimized for speed, low cost, and ease of use, making it suitable for visualization during the conceptual stages of engineering design, when dimensional accuracy and mechanical strength of prototypes are less important. No toxic chemicals like those used in stereolithography are required, and minimal post-printing finish work is needed; one need only brush off surrounding powder after the printing process. Bonded powder prints can be further strengthened by wax or thermoset polymer impregnation. FDM parts can be strengthened by wicking another metal into the part.
 Resolution
Resolution is given as layer thickness and X-Y resolution in dpi. Typical layer thickness is around 100 micrometres (0.1 mm), while X-Y resolution is comparable to that of laser printers. The particles (3D dots) are around 50 to 100 micrometres (0.05-0.1 mm) in diameter.
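These figures translate directly into build parameters. The small sketch below shows the arithmetic; the 25 mm part height and the 300 dpi figure are arbitrary example inputs, not values from the text.

```python
import math

def build_layers(part_height_mm, layer_thickness_mm=0.1):
    """Number of layers needed to print a part of the given height
    at the typical 0.1 mm layer thickness."""
    return math.ceil(part_height_mm / layer_thickness_mm)

def dot_size_mm(dpi):
    """In-plane dot spacing implied by an X-Y resolution given in dpi
    (25.4 mm per inch)."""
    return 25.4 / dpi

print(build_layers(25.0))          # 250 layers for a 25 mm tall part
print(round(dot_size_mm(300), 3))  # ~0.085 mm, within the 0.05-0.1 mm range
```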
 Applications
An example of real object replication by means of 3D scanning and 3D printing: the gargoyle model on the left was digitally acquired using a 3D scanner, and the resulting 3D data was processed using MeshLab. The digital 3D model, shown on the laptop's screen, was used by a rapid prototyping machine to create a real resin replica of the original object
Standard applications include design visualization, prototyping/CAD, metal casting, architecture, education, geospatial, healthcare, and entertainment/retail. Other applications include reconstructing fossils in paleontology, replicating ancient and priceless artifacts in archaeology, reconstructing bones and body parts in forensic pathology, and reconstructing heavily damaged evidence acquired from crime scene investigations.
More recently, the use of 3D printing technology for artistic expression has been suggested, and artists have been using 3D printers in various ways. 3D printing technology is currently being studied by biotechnology firms and academia for possible use in tissue engineering applications, where organs and body parts are built using inkjet techniques. Layers of living cells are deposited onto a gel medium and slowly built up to form three-dimensional structures. Several terms have been used to refer to this field of research: organ printing, bio-printing, and computer-aided tissue engineering, among others. 3D printing makes it possible to manufacture a personalised hip replacement in one pass, with the ball permanently inside the socket; even at current printing resolutions the unit will not require polishing. The use of 3D scanning technologies allows the replication of real objects without the use of molding techniques, which in many cases can be more expensive, more difficult, or too invasive to perform, particularly with precious or delicate cultural heritage artifacts. Future applications may allow many of the familiar pieces of furniture in a contemporary home to be replaced by the combination of a 3D printer and a recycling unit. Clothing, crockery, cutlery and books can already all be printed on demand and recycled after use, meaning that wardrobes, washing machines, dishwashers, cupboards and bookshelves may eventually become redundant.
 RepRap
Main article: RepRap Project
RepRap is a project released under the GNU General Public License which aims to produce an open-source self-replicating rapid prototyper; that is, a 3D printer which can print a copy of itself. It can currently only print plastic parts; research is underway to let it print circuit boards as well as parts in metal. The project's creator has said of the printer: "We want to make sure that everything is open, not just the design and the software you control it with, but the entire tool-chain, from the ground up."
 Advantages
On-the-fly modeling enables the creation of prototypes that closely emulate the mechanical properties of the target design.
Some technologies allow the combination of black and white rigid materials to create a range of grayscales suitable for consumer electronics and other applications.
Time and cost are saved by removing the need to design, print and 'glue together' separate model parts made with different materials in order to create a complete model.
A large number of competing technologies are available in the marketplace. As all are additive technologies, their main differences are found in the way layers are built to create parts. Some methods melt or soften material to produce the layers (SLS, FDM), while others deposit liquid materials that are cured with different technologies. In the case of lamination systems, thin layers are cut to shape and joined together.
 Prototyping technologies and their base materials
1. Selective laser sintering (SLS): thermoplastics, metals, sand
2. Fused deposition modeling (FDM): thermoplastics
3. Stereolithography (SL): photopolymer
4. Lamination systems: paper and plastic
5. Electron beam melting (EBM): titanium alloys
6. 3D printing (3DP): various materials
 See also
3D microfabrication
Contour Crafting
Desktop manufacturing
Digital fabricator
Direct digital manufacturing
List of emerging technologies
Polyjet matrix
Objet Geometries
Rapid prototyping
Self-replicating machine
Solid freeform fabrication
Stereolithography
See also: Computer-generated imagery
An example of computer animation produced using the "motion capture" technique
Computer animation (or CGI animation) is the art of creating moving images with the use of computers. It is a subfield of computer graphics and animation. Increasingly it is created by means of 3D computer graphics, though 2D computer graphics are still widely used for stylistic, low-bandwidth, and faster real-time rendering needs. Sometimes the target of the animation is the computer itself, but sometimes the target is another medium, such as film. It is also referred to as CGI (computer-generated imagery or computer-generated imaging), especially when used in films.
To create the illusion of movement, an image is displayed on the computer screen and repeatedly replaced by a new image that is similar to the previous image but advanced slightly in the time domain (usually at a rate of 24 or 30 frames per second). This technique is identical to how the illusion of movement is achieved with television and motion pictures. Computer animation is essentially a digital successor to the art of stop motion animation of 3D models and frame-by-frame animation of 2D illustrations.
For 3D animations, objects (models) are built on the computer monitor (modeled) and 3D figures are rigged with a virtual skeleton. For 2D figure animations, separate objects (illustrations) and separate transparent layers are used, with or without a virtual skeleton. Then the limbs, eyes, mouth, clothes, etc. of the figure are moved by the animator on key frames. The differences in appearance between key frames are automatically calculated by the computer in a process known as tweening or morphing. Finally, the animation is rendered. For 3D animations, all frames must be rendered after modeling is complete. For 2D vector animations, the rendering process is the key frame illustration process, while tweened frames are rendered as needed.
For pre-recorded presentations, the rendered frames are transferred to a different format or medium, such as film or digital video. The frames may also be rendered in real time as they are presented to the end-user audience. Low-bandwidth animations transmitted via the internet (e.g. 2D Flash, X3D) often use software on the end-user's computer to render in real time, as an alternative to streaming or pre-loaded high-bandwidth animations.
 A simple example
Computer animation example
The screen is blanked to a background color, such as black. Then a goat is drawn on the right of the screen. Next the screen is blanked, but the goat is re-drawn or duplicated slightly to the left of its original position. This process is repeated, each time moving the goat a bit to the left. If this process is repeated fast enough, the goat will appear to move smoothly to the left. This basic procedure is used for all moving pictures in films and television. The moving goat is an example of shifting the location of an object. More complex transformations of object properties such as size, shape, lighting effects and color often require calculations and computer rendering instead of simple re-drawing or duplication.
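The blank-redraw-shift loop above maps almost line for line onto code. Below is a minimal sketch in which the "screen" is a row of text characters standing in for a real frame buffer; the function name and dimensions are invented for the example.

```python
import time

def animate(symbol="G", width=20, frames=10):
    """Generate the frames of the example above: each frame blanks the
    'screen' to the background and re-draws the figure one cell further
    left than in the previous frame."""
    rendered = []
    for frame in range(frames):
        x = width - 1 - frame          # one cell further left each frame
        screen = [" "] * width         # blank to the background color
        screen[x] = symbol             # re-draw the figure at its new spot
        rendered.append("".join(screen))
    return rendered

# Play back at 12 frames per second, roughly the threshold for
# smooth apparent motion.
for screen in animate(frames=5):
    print(screen)
    time.sleep(1 / 12)
```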
 Explanation
To trick the eye and brain into thinking they are seeing a smoothly moving object, the pictures should be drawn at around 12 frames per second (frame/s) or faster (a frame is one complete image). At rates above 70 frame/s, no improvement in realism or smoothness is perceivable due to the way the eye and brain process images. At rates below 12 frame/s, most people can detect the jerkiness of each new image being drawn, which detracts from the illusion of realistic movement. Conventional hand-drawn cartoon animation often uses 15 frame/s in order to save on the number of drawings needed, which is usually acceptable given the stylized nature of cartoons. Because computer animation produces more realistic imagery, it demands higher frame rates to reinforce that realism.
No jerkiness is seen at higher speeds because of "persistence of vision": from moment to moment, the eye and brain working together store whatever one looks at for a fraction of a second and automatically "smooth out" minor jumps. Movie film seen in theaters in the United States runs at 24 frames per second, which is sufficient to create this illusion of continuous movement.
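The trade-off between frame rate and drawing effort can be made concrete by computing the time budget per frame at the rates mentioned above. This is a small illustrative sketch, not part of the article:

```python
# Time available to draw each frame at the frame rates discussed above.
def frame_budget_ms(fps):
    """Milliseconds available per frame at the given frame rate."""
    return 1000.0 / fps

for fps in (12, 15, 24, 70):
    print(f"{fps:>2} frame/s -> {frame_budget_ms(fps):.1f} ms per frame")
```

Halving the frame rate doubles the time available per frame, which is why hand-drawn animation at 15 frame/s needs far fewer drawings than smooth 24 frame/s film.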
 Methods of animating virtual characters
In this .gif of a 2D Flash animation, each 'stick' of the figure is keyframed over time to create motion.
In most 3D computer animation systems, an animator creates a simplified representation of a character's anatomy, analogous to a skeleton or stick figure. The position of each segment of the skeletal model is defined by animation variables, or Avars. In human and animal characters, many parts of the skeletal model correspond to actual bones, but skeletal animation is also used to animate other things, such as facial features (though other methods for facial animation exist). The character "Woody" in Toy Story, for example, uses 700 Avars, including 100 Avars in the face. The computer does not usually render the skeletal model directly (it is invisible), but uses it to compute the exact position and orientation of the character, which is eventually rendered into an image. Thus, by changing the values of Avars over time, the animator creates motion, making the character move from frame to frame.
There are several methods for generating the Avar values to obtain realistic motion. Traditionally, animators manipulate the Avars directly. Rather than set Avars for every frame, they usually set Avars at strategic points (frames) in time and let the computer interpolate, or 'tween', between them, a process called keyframing. Keyframing puts control in the hands of the animator and has roots in hand-drawn traditional animation.
In contrast, a newer method called motion capture makes use of live action. When computer animation is driven by motion capture, a real performer acts out the scene as if they were the character to be animated. His or her motion is recorded to a computer using video cameras and markers, and that performance is then applied to the animated character. Each method has its advantages, and as of 2007, games and films use either or both in production.
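The keyframing process described above, setting Avar values at strategic frames and letting the computer 'tween' the rest, can be sketched in a few lines. The linear interpolation below is an illustrative assumption; production systems typically use spline curves:

```python
def tween(keyframes, frame):
    """Interpolate an Avar value at `frame` from sparse keyframes.

    `keyframes` is a sorted list of (frame_number, value) pairs set by
    the animator; the computer fills in every frame in between.
    """
    # Clamp outside the keyframed range.
    if frame <= keyframes[0][0]:
        return keyframes[0][1]
    if frame >= keyframes[-1][0]:
        return keyframes[-1][1]
    # Find the surrounding pair of keyframes and interpolate linearly.
    for (f0, v0), (f1, v1) in zip(keyframes, keyframes[1:]):
        if f0 <= frame <= f1:
            t = (frame - f0) / (f1 - f0)
            return v0 + t * (v1 - v0)

# A hypothetical elbow-joint angle Avar keyed at frames 0, 12 and 24:
keys = [(0, 0.0), (12, 90.0), (24, 45.0)]
print(tween(keys, 6))  # halfway between the first two keys -> 45.0
```

The animator sets only three values here; the remaining 22 in-between frames are generated automatically.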
Keyframe animation can produce motions that would be difficult or impossible to act out, while motion capture can reproduce the subtleties of a particular actor. For example, in the 2006 film Pirates of the Caribbean: Dead Man's Chest, actor Bill Nighy provided the performance for the character Davy Jones. Even though Nighy himself does not appear in the film, the movie benefited from his performance through the recorded nuances of his body language, posture, and facial expressions. Thus motion capture is appropriate in situations where believable, realistic behavior and action are required, but the characters involved exceed what can be achieved through conventional costuming.
 Computer animation development equipment
Computer animation can be created with a computer and animation software. Some impressive animation can be achieved even with basic programs, though rendering can take a long time on an ordinary home computer. Because of this, video game animators tend to use low-resolution, low-polygon-count renders so that the graphics can be rendered in real time on a home computer; photorealistic animation would be impractical in this context.
Professional animators of movies, television, and video sequences in computer games produce photorealistic animation with high detail. This level of quality would take tens to hundreds of years to create on a home computer, so many powerful workstation computers are used instead. Graphics workstations use two to four processors, making them far more powerful than a home computer, and are specialized for rendering. A large number of workstations (known as a render farm) are networked together to act, in effect, as one giant computer. The result is a computer-animated movie that can be completed in about one to five years (this process does not consist solely of rendering, however). A workstation typically costs $2,000 to $16,000, with the more expensive stations able to render much faster due to their more advanced hardware.
Pixar's RenderMan is rendering software widely used as the movie animation industry standard, in competition with Mental Ray. It can be bought at the official Pixar website for about $3,500, and it runs on Linux, Mac OS X, and Microsoft Windows-based graphics workstations along with an animation program such as Maya or Softimage XSI. Professionals also use digital movie cameras, motion capture or performance capture, bluescreens, film editing software, props, and other tools for movie animation.
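A back-of-the-envelope calculation shows why render farms are necessary. The numbers below (film length, frame rate, hours per frame, farm size) are illustrative assumptions, not figures from the article:

```python
# Rough render-time estimate: a 90-minute film at 24 frame/s, with each
# frame assumed to take 6 hours to render on a single machine.
def farm_days(film_minutes, fps, hours_per_frame, workstations):
    """Wall-clock days to render the film, assuming perfect parallelism."""
    frames = film_minutes * 60 * fps
    total_hours = frames * hours_per_frame
    return total_hours / workstations / 24.0

print(round(farm_days(90, 24, 6.0, 1), 1))     # one machine: ~32,400 days (~89 years)
print(round(farm_days(90, 24, 6.0, 1000), 1))  # a 1,000-machine farm: ~32.4 days
```

With these assumed numbers, a single computer would need roughly 89 years, consistent with the "tens to hundreds of years" figure above, while a thousand-machine farm finishes the rendering in about a month.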
 The future
One open challenge in computer animation is photorealistic animation of humans. Currently, most computer-animated movies show animal characters (A Bug's Life, Finding Nemo, Ratatouille, Newt, Ice Age, Over the Hedge), fantasy characters (Monsters Inc., Shrek, Monsters vs. Aliens), anthropomorphic machines (Cars, WALL-E, Robots), or cartoon-like humans (The Incredibles, Despicable Me, Up, Jimmy Neutron: Boy Genius, Meet the Robinsons, Cloudy with a Chance of Meatballs). The movie Final Fantasy: The Spirits Within is often cited as the first computer-generated movie to attempt realistic-looking humans. However, due to the enormous complexity of the human body, human motion, and human biomechanics, realistic simulation of humans remains largely an open problem. Another problem is the distasteful psychological response to viewing nearly perfect animation of humans, known as "the uncanny valley." Photorealistic human animation is one of the "holy grails" of computer animation.
Eventually, the goal is to create software with which an animator can generate a movie sequence showing a photorealistic human character undergoing physically plausible motion, together with clothes, photorealistic hair, a complicated natural background, and possibly interacting with other simulated human characters, such that the viewer can no longer tell whether a particular movie sequence is computer-generated or was created with real actors in front of movie cameras. Complete human realism is not likely to happen very soon, but such concepts bear certain philosophical implications for the future of the film industry.
For the moment, three-dimensional computer animation can be divided into two main directions: photorealistic and non-photorealistic rendering. Photorealistic computer animation can itself be divided into two subcategories: real photorealism (where performance capture is used in the creation of the virtual human characters) and stylized photorealism. Real photorealism is what Final Fantasy tried to achieve, and it may in the future make possible live-action fantasy features such as The Dark Crystal without advanced puppetry and animatronics, while Antz is an example of stylized photorealism (in the future, stylized photorealism may be able to replace traditional stop-motion animation, as in Corpse Bride). Neither has been perfected yet, but progress continues.
The non-photorealistic, cartoonish direction is more like an extension of traditional animation: an attempt to make the animation look like a three-dimensional version of a cartoon, still using and perfecting the main principles of animation articulated by the Nine Old Men, such as squash and stretch. While a single frame from a photorealistic computer-animated feature will look like a photo if done right, a single frame from a cartoonish computer-animated feature will look like a painting (not to be confused with cel shading, which produces an even simpler look).
 Detailed examples and pseudocode
In 2D computer animation, moving objects are often referred to as "sprites." A sprite is an image with a location associated with it. The location of the sprite is changed slightly between each displayed frame to make the sprite appear to move. The following pseudocode makes a sprite move from left to right:
var int x := 0, y := screenHeight / 2
while x < screenWidth
    drawBackground()
    drawSpriteAtXY(x, y)  // draw on top of the background
    x := x + 5            // move to the right
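A runnable equivalent of the pseudocode above can be sketched in Python. Since drawBackground and drawSpriteAtXY are hypothetical drawing routines, the sketch instead records the (x, y) position the sprite would occupy in each frame:

```python
def sprite_positions(screen_width, screen_height, step=5):
    """Positions a sprite visits while moving left to right.

    Each (x, y) pair stands for one frame in which the background
    would be redrawn and the sprite drawn on top at that location.
    """
    x = 0
    y = screen_height // 2
    positions = []
    while x < screen_width:
        positions.append((x, y))  # drawBackground(); drawSpriteAtXY(x, y)
        x += step                 # move to the right
    return positions

frames = sprite_positions(screen_width=320, screen_height=240)
print(len(frames), frames[0], frames[-1])  # 64 frames, from (0, 120) to (315, 120)
```

Displaying these 64 frames in quick succession produces the same illusion of movement as the goat example earlier in the article.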
Modern (2001) computer animation uses different techniques to produce animations. Most frequently, sophisticated mathematics is used to manipulate complex three-dimensional polygons, apply "textures", lighting, and other effects to the polygons, and finally render the complete image. A sophisticated graphical user interface may be used to create the animation and arrange its choreography. Another technique, called constructive solid geometry, defines objects by conducting boolean operations on regular shapes; it has the advantage that animations may be accurately produced at any resolution.
Let's step through the rendering of a simple image of a room with flat wood walls and a grey pyramid in the center of the room. The pyramid will have a spotlight shining on it. Each wall, the floor, and the ceiling is a simple polygon, in this case a rectangle. Each corner of the rectangles is defined by three values referred to as X, Y, and Z. X is how far left and right the point is, Y is how far up and down the point is, and Z is how far in and out of the screen the point is. The wall nearest us would be defined by four points (in the order x, y, z). Below is a representation of how the wall is defined:
(0, 10, 0)   (10, 10, 0)
(0, 0, 0)    (10, 0, 0)
The far wall would be:
(0, 10, 20)   (10, 10, 20)
(0, 0, 20)    (10, 0, 20)
The pyramid is made up of five polygons: the rectangular base and four triangular sides. To draw this image, the computer uses math to calculate how to project the image, defined by three-dimensional data, onto the two-dimensional computer screen.
First we must define where our view point is, that is, from what vantage point the scene will be drawn. Our view point is inside the room, a bit above the floor, directly in front of the pyramid. The computer first calculates which polygons are visible: the near wall will not be displayed at all, as it is behind our view point, and the far side of the pyramid will not be drawn, as it is hidden by the front of the pyramid. Next, each point is perspective-projected onto the screen. The portions of the walls furthest from the view point appear shorter than the nearer areas due to perspective.
To make the walls look like wood, a wood pattern, called a texture, is drawn on them. To accomplish this, a technique called "texture mapping" is often used: a small drawing of wood that can be repeatedly drawn in a matching tiled pattern (like wallpaper) is stretched and drawn onto the walls' final shape. The pyramid is solid grey, so its surfaces can simply be rendered as grey. But we also have a spotlight: where its light falls we lighten colors, and where objects block the light we darken colors. Finally, we render the complete scene on the computer screen. If the numbers describing the position of the pyramid were changed and this process repeated, the pyramid would appear to move.
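The perspective-projection step, in which points farther from the view point land closer together on the screen, can be sketched as below. The viewer position and focal length are illustrative assumptions; real renderers use full camera and projection matrices:

```python
def project(point, viewer_z=-10.0, focal=10.0):
    """Project a 3D point onto the 2D screen with simple perspective.

    The view point sits on the z axis at viewer_z looking toward +z;
    screen x and y shrink in proportion to distance from the viewer.
    """
    x, y, z = point
    depth = z - viewer_z   # distance from the viewer along z
    scale = focal / depth  # farther points -> smaller on screen
    return (x * scale, y * scale)

# Matching corners of the near wall (z = 0) and far wall (z = 20):
print(project((10, 10, 0)))   # near corner
print(project((10, 10, 20)))  # far corner lands nearer the screen center
```

The far wall's corner projects to a point one third as far from the screen center as the near wall's matching corner, which is exactly why the room's walls appear to taper toward the back.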
 Movies
CGI short films have been produced as independent animation since 1976, though the popularity of computer animation (especially in the field of special effects) skyrocketed during the modern era of U.S. animation. The first completely computer-generated television series was ReBoot, and the first completely computer-generated animated movie was Toy Story (1995). See the List of computer-animated films for more.
 Amateur animation
The popularity of sites such as Newgrounds, which allows members to upload their own movies for others to view, has created a growing number of what are often considered amateur computer animators. With many free utilities available, and with programs such as Windows Movie Maker or iMovie included in the operating system, anyone with the tools and a creative mind can have their animation viewed by thousands. Many high-end animation software options are also available on a trial basis, allowing educational and noncommercial development with certain restrictions. Several free and open-source animation applications exist as well, such as Blender. One way to create amateur animation is the GIF format, which can easily be uploaded to and viewed on the web.
 Architectural animation
Architects use services from animation companies to create 3-dimensional models for both customers and builders, which can be more accurate than traditional drawings. Architectural animation can also be used to see the possible relationship a building will have to its environment and surrounding buildings.
 See also
Animation
Computer-generated imagery (CGI)
Ray tracing
Computer Graphics Lab
DreamWorks Animation SKG
National Centre for Computer Animation (UK)
Wire frame model
Virtual artifact
Computer representation of surfaces
Motion capture
Avar (animation variable)
Pixar Animation Studios
Computer animation training
Rhythm and Hues Studios
Skeletal animation
Morph target animation
Timeline of CGI in film and television
List of computer-animated films
Blue Sky Studios
Hand Over
Walsh Family Media
 Animated images in Wikipedia
Computer animation example
An animated pentakisdodecahedron
Animation of an MRI brain scan, starting at the top of the head and moving towards the base