This was a piece done purely for a sound clip we received...
http://www.youtube.com/watch?v=upkeAa7Lq6o
Sunday, 15 May 2011
Facial Expressions in 3D Animation
In storytelling, a certain emotion must be raised, within the character and/or within the audience. This emotion can be portrayed through body language and expression. Within the realm of animation, one must recognise that these expressions have to be exaggerated to further emphasize the character’s feelings and to bring that character to life; this is part of acting (Roberts, 160). Most emotions are involuntary, but some can be voluntary and used to hide something, such as a fake smile concealing disdain or sadness. This is even more difficult to portray on an animated character, but if done well it can make the character deep and rounded, and it may ask more interpretation of the audience (Roberts, 187).
Although characters convey emotion through their expressions, and the number of possible emotions and expressions seems endless, there are six expressions that form the basis of emotion: happiness, sadness, anger, surprise, fear and disgust/contempt (Maestri, 196). These six can be mixed and altered to produce most other expressions. They are imperative to storytelling, as character analysis and understanding come through the emotions, and they help the audience connect with the character’s feelings. Humans are also able to understand each other through emotions and expressions, as most expressions are universal.
In 3D software, one must create an easy access point for a character’s expressions, because expressions change continuously with each action; the changes are not drastic, but expressions are never static, just as emotions are not static. This easy access point not only allows for quick animation but also for a range of expression, as each expression is driven by a button or slider.
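As a rough sketch of this idea (my own illustration, not taken from any of the cited texts), the snippet below treats a few basic expressions as stored vertex offsets and mixes them with slider values between 0 and 1, the way an expression control panel would; all of the vertex data and slider names are made up.

```python
# Minimal sketch: blending basic expression shapes with slider weights.
# The vertex offsets and slider names here are hypothetical placeholders.

# Each basic expression stores an offset per vertex away from the neutral face.
EXPRESSION_SHAPES = {
    "happy":    [(0.0, 0.2, 0.0), (0.1, 0.3, 0.0)],
    "sad":      [(0.0, -0.2, 0.0), (-0.1, -0.3, 0.0)],
    "surprise": [(0.0, 0.4, 0.1), (0.0, 0.5, 0.1)],
}

NEUTRAL = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]  # two placeholder vertices

def blend_expressions(neutral, shapes, sliders):
    """Add each expression's offsets to the neutral face, scaled by its slider (0-1)."""
    result = [list(v) for v in neutral]
    for name, weight in sliders.items():
        weight = max(0.0, min(1.0, weight))        # clamp the slider to 0-1
        for i, offset in enumerate(shapes[name]):
            for axis in range(3):
                result[i][axis] += weight * offset[axis]
    return [tuple(v) for v in result]

# A "happy but slightly surprised" mix, as a slider panel might produce it.
print(blend_expressions(NEUTRAL, EXPRESSION_SHAPES, {"happy": 0.8, "surprise": 0.3}))
```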
Body language can tell us much about a character and his mood, but the eyes are the most expressive feature of a character. The eyes lead the face and the emotion, and perhaps the unexpressed emotion. A clear example of this is Modern Family (2009), where Claire, the most active character, does not find her husband’s jokes funny, and in the interview insert she reiterates her view: “…I laugh at all of his jokes, w-with my mouth. Not with my eyes.” (Lloyd; Levitan, ep 4). The cut that follows, to her laughing while her eyes remain untouched by humour, is a clear illustration of what she has just said.
I have decided to review Despicable Me (2010) and the main character Gru as he seems quite similar to my current character with regards to his features (long nose, his eyes). I decided to follow the broad categories that most emotions follow, outlined by George Maestri.
Happy:
The first expression I decided to find was happiness. It comes at the beginning of the film, as Gru walks down the street and looks around. He is not only happy but content, a prolonged happiness. His eyebrows are lifted and relaxed; no tension rests there. His eyes are wide, but the lids are relaxed and the bottom lids are raised, further emphasizing the happiness. His smile is wide and sits more to the right of his face, giving his character an asymmetrical look and therefore portraying realism and character. Just underneath this picture is one of a fake smile. Here the eyebrows reveal a more curious and scared look, the nose is pulled up a little to reveal apprehension, and the mouth sits awkwardly to the side, open at the end. The fake smile also differs from his natural smile in that the slant has changed from right to left, and his face is more asymmetrical, showing the awkwardness of the expression.
Sad:
Here sadness is portrayed after he misses the girls’ dance recital. The sadness is in full force: the eyebrows lift up in the middle of his forehead and drop at the sides, squashing the skin in the middle and leaving a sad frown. The eyes are wide but the bottom lids are raised, usually an indication of sadness or that he is about to tear up. They are also fixed on something, indicating a real problem has arisen. The mouth is small but angled down at the corners, indicating unhappiness of some sort.
Anger:
The next emotion is anger. Here Gru’s eyebrows dip down towards the middle of his face by his nose, making a prominent crease in the frown, whilst the ends of his eyebrows curve up and then down. This is a common indication of anger. His eyes are wide and fixed. His nostrils are flared, indicating a rush of air and therefore anger building up. His mouth is tightly sealed, revealing intense anger and unhappiness.
Surprise:
Surprise is the next emotion, conveyed by wide, almost rounded eyes that show shock. The eyebrows are lifted from the middle while the sides drop a little. The mouth is open, usually rounded at the top, and the teeth are apart. These are all indicators of shock and surprise; most features are open and left exposed (mouth, teeth, eyes, eyebrows). There can also be fear present, as surprise shares some of fear’s attributes and vice versa.
Fear:
Fear is represented in a less open way. The eyes are wide, the eyebrows are up (a little less than in surprise), the nose is retracted, and the mouth is open but more teeth are revealed (a teeth-chattering image may follow). The mouth also distorts in shape a little, revealing the loss of control and the fear.
Disgust:
Disgust and contempt are two expressions and emotions that are very closely associated. Disgust is represented by eyebrows pulled close together and down the forehead (similar to anger’s, but not as prominent). The eyes are skewed towards the middle, giving the face a confused yet disbelieving look. The nostrils are pulled up, similar to the stereotype of ‘snobbishness’, and the mouth is open slightly. All of these contribute to the overall impression of disgust, a mixture of attributes drawn from anger, surprise and contempt.
Since Gru is an angry villain character, it was quite interesting to investigate a range of his emotions. My finding, when sifting through various scenes and taking screenshots, was that three or more expressions were displayed during speech, interaction and reaction, all mixed together, yet only one mood or emotion was conveyed. In other words, the character had a range of expressions but only one emotion at a time. This was interesting as it further emphasized the complexity of emotions and of their facial portrayal. It also emphasized the importance of exaggeration, and that emotion must be portrayed in various ways, especially in animation, in order to create an authentic representation and convey the message clearly to the audience.
3D characters need set expressions for a range of emotions, and these emotions are a great storytelling device. It is imperative that one sets up a character’s facial expressions as a quick access point for changing the character’s mood and expression, because, as I witnessed in Despicable Me, these changes occur often.
Works Cited:
- Coffin, P. and C. Renaud. Despicable Me. USA: Universal Pictures, 2010. Film.
- Lloyd, C. and S. Levitan. Modern Family. 20th Century Fox, 2010. Television series.
- Maestri, G. Digital Character Animation 3. USA: New Riders/Peachpit, 2006. Print.
- Roberts, S. Character Animation in 3D. Great Britain: Focal Press, 2004. Print.
Sunday, 08 May 2011
Character Modelling and Edge Loops
Why is it so important to construct sound models? Edge loops are particularly important under a humanoid character’s arms and on their face. Why is this?
The most essential lesson to learn in modelling a character is to make sure not only that the model and its geometry are neat, but also that there are as few triangles and five-sided polygons as possible, as these tend to cause problems when the mesh is manipulated for animation. Triangles (although all polygons are ultimately made of triangles) are the worst for deforming a character: they act like spearheads, points jut out during deformation, and so their usefulness is limited (Ratner, 40). The best solution is to build quads instead of other polygon types, as they allow for maximum control and deformation. Low geometry is also good for a character, as it allows for high-definition smoothing, whereas a high polygon count leaves little smoothing capacity and slows down the animation and render process (Unknown, 1).
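To make this check concrete, here is a small sketch of my own (not from the cited sources) that audits a mesh’s face list and reports how many faces are triangles, quads or n-gons, the kind of inspection one might run before smoothing or rigging; the sample faces are invented.

```python
# Minimal sketch: counting triangles, quads and n-gons in a face list.
# Faces are tuples of vertex indices; the sample mesh below is hypothetical.

def audit_faces(faces):
    """Return a count of triangles, quads and faces with five or more sides."""
    counts = {"triangles": 0, "quads": 0, "ngons": 0}
    for face in faces:
        if len(face) == 3:
            counts["triangles"] += 1
        elif len(face) == 4:
            counts["quads"] += 1
        else:
            counts["ngons"] += 1
    return counts

sample_faces = [(0, 1, 2, 3), (3, 2, 4, 5), (5, 4, 6), (6, 4, 7, 8, 9)]
print(audit_faces(sample_faces))
# {'triangles': 1, 'quads': 2, 'ngons': 1} -> the triangle and the five-sided
# face are the ones worth cleaning up before deformation.
```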
Character modelling is quite a complex task, consisting of a variety of methods that will produce the required results. There are two types of modelling technique, namely polygon modelling and patch modelling. Polygon modelling refers to a subdivision-surface type of modelling, whereby one can choose an existing shape or polygon and begin modelling from there. Patch modelling is a more detailed and intricate process that deals with NURBS and spline-based surfaces. It is seen as building from scratch and is used for organic modelling, which can produce amazing results in facial modelling (Murdoch; Allen, 37). With these techniques come sub-techniques and methods that can be used for modelling, but here we shall pull focus to edge loops.
Edge loops are defined as a series of polygons connected by edges in a loop, end-to-end. Where the loop begins is where it ends, so an edge loop is a recurring flow of polygons (Murdoch; Allen, 44). Edge loops act, and are laid out, much like the muscles of the human body. When creating a character, the edge loops must run in the same fashion as the muscles of the face and body, as this will create natural, realistic deformation for animation.
“Edge loops will make deformation, motion and even texturing quite a bit easier” (Osipa, 80)
Edge loops are essential for expressions, as there is a natural flow of motion in the loop, giving a natural effect of movement and deformation. Murdoch and Allen describe several advantages of edge loops: they allow for easy movement, and changing the position of an edge loop is quick and easy. They segregate the major features of the body and allow for even greater definition, such as wrinkles. Heavily deformed or animated areas need edge loops to stop the mesh from distorting and pulling unnaturally. The polygon count is quite low in edge-looped characters, as the edge loops themselves provide maximum definition (Murdoch; Allen, 45-46).
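To ground the ‘end-to-end’ definition above, the following sketch (my own illustration, not Murdoch and Allen’s) walks an edge loop across an all-quad mesh by stepping to the opposite edge of each quad in turn; the little ring of quads at the bottom is a made-up example.

```python
# Minimal sketch: walking an edge loop across an all-quad mesh.
# Quads are tuples of four vertex indices in order; the ring below is made up.

def opposite_edge(quad, edge):
    """In a quad (v0, v1, v2, v3) the edge opposite (v0, v1) is (v2, v3)."""
    for k in range(4):
        if {quad[k], quad[(k + 1) % 4]} == set(edge):
            return (quad[(k + 2) % 4], quad[(k + 3) % 4])
    return None

def walk_edge_loop(quads, start_edge):
    """Follow the loop from start_edge until it closes or reaches a boundary."""
    edge_to_quads = {}
    for q in quads:
        for k in range(4):
            key = frozenset((q[k], q[(k + 1) % 4]))
            edge_to_quads.setdefault(key, []).append(q)

    loop, edge, prev_quad = [start_edge], start_edge, None
    while True:
        sharing = [q for q in edge_to_quads[frozenset(edge)] if q != prev_quad]
        if not sharing:
            break                                  # open boundary: the loop ends here
        prev_quad = sharing[0]
        edge = opposite_edge(prev_quad, edge)
        if frozenset(edge) == frozenset(start_edge):
            break                                  # the loop closed on itself
        loop.append(edge)
    return loop

# A ring of four quads around a cylinder: the loop of "vertical" edges.
ring = [(0, 1, 5, 4), (1, 2, 6, 5), (2, 3, 7, 6), (3, 0, 4, 7)]
print(walk_edge_loop(ring, (0, 4)))   # -> [(0, 4), (1, 5), (2, 6), (3, 7)]
```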
Here are some examples of edge loops and the key areas in which they should be created:
Facial topology:
The eyes and the mouth are the main areas, and then the nose, cheeks and forehead should be connected to the adjacent areas as seems natural. (Cryrid, 1)
Limb and connection topology:
The shoulder’s movement must be diverse and fluid, and it must be able to handle any stress the arm deals it. It must therefore allow for strain in the chest, shoulder and arm areas, which in turn means edge loops connecting these areas. This is one method; another is to run many edge loops into the shoulder, allowing plenty of slack for deformation and animation. (Chadwick, 1)
The knee and elbow areas of the character must have edge loops like those of the mouth, dedicated to handling the strain of a bend. Weight painting may become redundant if the loops in these areas are limited. (Chadwick, 1)
Edge loops are a clean and safe way to model, texture, rig and animate. They have many advantages, and whether one uses the patch or the polygon method, the edge loops should be there, making 3D character animation an easy and quick process. By using edge loops, one not only prevents problems but also achieves a better result.
Works Cited:
- Chadwick; Cryrid. Polycount Wiki. 2010-09-17. Web. 06 May 2011.
- Murdoch, K. L. and E. M. Allen. Edgeloop Character Modeling: For 3D Professionals Only. Indiana: Wiley Publishing, Inc., 2006. Print.
- Osipa, J. Stop Staring: Facial Modeling and Animation Done Right. 3rd Ed. Indiana: Wiley Publishing, Inc., 2010.
- Ratner, P. Mastering 3D Animation. 2nd Ed. New York: Allworth Press, 2006. Print.
- Unknown. Modeling With Edge Loops. 2008-04-02. Web. 01 May 2011.
Sunday, 10 April 2011
Custom Toolbars and Controls
“To achieve the complex and natural motion of human characters, the rigger can use scripting to automate some of the movements and help speed up the animation process” (Murtack, 3).
In order to access certain controls and tools quickly, further speed up production and ensure an efficient workflow, the user must customize their toolbars and controls.
In Softimage 2011, a custom toolbar is a floating bar that contains commands and access to menus in the form of one-click buttons (Unknown, 1). The function of custom toolbars is to provide quick access to commonly used controls and menus that have no shortcuts in the program itself. The program is limited in its access to certain functions and controls, so when one sets up a scene, the availability of these controls can be configured to the user’s preference through toolbars. An example of such a control is a custom ‘select all’ button. Instead of selecting all of the components one at a time, one can perform the selection once, capture the script, and drag it onto the toolbar; a button is created, and clicking it reproduces the full selection. This helps with animation and makes the process less time-consuming.
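As a rough idea of the kind of script such a button might hold, here is a sketch assuming Softimage’s embedded Python host, where the global Application object exposes the scripting commands; the SelectAllUsingFilter command name and its "object" filter argument are assumptions that should be checked against the XSI command reference.

```python
# A rough sketch of a script that could be dragged onto a custom toolbar to
# become a one-click "select all" button. It assumes Softimage's embedded
# Python host, where the global Application object exposes scripting commands;
# the command name and filter string below are assumptions to verify against
# the XSI command reference.

app = Application  # predefined global inside the XSI script editor

# Select every object in the scene in one step instead of picking them one at a time.
app.SelectAllUsingFilter("object")

# Report how many objects ended up selected, as a quick sanity check.
app.LogMessage("Selected %d objects." % app.Selection.Count)
```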
Certain areas of the rig need to be controlled differently: some areas only need to rotate, some to translate and some to scale. Some users therefore assign specific transform controls to specific parts of the body. This is done in XSI by selecting the control point, opening its properties and accessing the transform setup. There, one can choose the specific transform tool needed for that control point. This is a time-saving method, as it prevents one from having to keep switching to the tool one wants.
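The principle of binding each control to a single transform tool can be illustrated outside of XSI as well; the sketch below (my own, not the actual Transform Setup property) simply stores a preferred tool per control, which is what removes the constant tool switching.

```python
# Minimal sketch: each rig control remembers the one transform tool it should use.
# Control names and tool choices are hypothetical.

PREFERRED_TOOL = {
    "hip_ctrl":   "translate",   # the hip moves the character through space
    "knee_ctrl":  "rotate",      # joints like knees and elbows only rotate
    "spine_ctrl": "rotate",
    "root_ctrl":  "scale",       # global scale lives on the root only
}

def tool_for(control_name):
    """Return the transform tool to activate when this control is picked."""
    try:
        return PREFERRED_TOOL[control_name]
    except KeyError:
        return "translate"       # a sensible default for unlisted controls

print(tool_for("knee_ctrl"))     # -> rotate
print(tool_for("head_ctrl"))     # -> translate (falls back to the default)
```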
An example of reconfiguring controls is the ‘sliders’, control points usually associated with fingers. One can attach new sliders to the fingers and alter the control over them. This is done by first deleting the default sliders, creating a set of parameters, and specifying a driving force (a control null, for example) and a driven target (the finger or fingers). The two are then linked and relative values are set; this is done through the parameter setup, which uses a range of 0 to 1.
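The 0-to-1 parameter setup described here is essentially a linear remap from the slider’s range to each finger joint’s rotation range; the sketch below shows that mapping with invented joint names and curl angles, not XSI’s actual linking mechanism.

```python
# Minimal sketch: a 0-1 "slider" parameter driving finger joint rotations.
# The joint names and curl angles are hypothetical.

# Rotation (in degrees) of each finger joint when the slider is fully on (1.0).
FULL_CURL = {"index_base": 70.0, "index_mid": 90.0, "index_tip": 45.0}

def drive_finger(slider_value, full_curl=FULL_CURL):
    """Map a driving slider in [0, 1] to driven joint rotations by linear interpolation."""
    t = max(0.0, min(1.0, slider_value))          # clamp, as the parameter set does
    return {joint: t * angle for joint, angle in full_curl.items()}

print(drive_finger(0.0))   # relaxed hand: all joints at 0 degrees
print(drive_finger(0.5))   # half curl: {'index_base': 35.0, 'index_mid': 45.0, 'index_tip': 22.5}
```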
Adding to this, one can also make sets of toolbars for transform points. In XSI one can create a tool, use the script and drag it onto the toolbar to create a button. In my own project (I used the default rig and set certain transform settings for specific control points), I used the translate tool for moving the feet as a default. Then I made a toolbar consisting of buttons that, when clicked, automatically selected the rotate tool for the appropriate body part. This saved me time when making the man move through space and animating him, as I did not have to take extra steps to reach the same result. The great thing about toolbars in XSI is that you can import pictures as your buttons, which makes them eye-catching and easier to find at a glance.
I decided to use my own setup and project because I wanted to test out the toolbars and prove their efficiency. The toolbars proved to be an effective way of accessing control tools and accessing areas of the body simultaneously and quickly.
I animated my project, moving the man along the z-axis, and with the toolbars and control defaults I had created, the animation was quicker than it would have been without those custom settings. For future projects this type of customization is essential, whether the project is small or big, because time is costly and workflow slows down when things are not organized.
Works Cited
- Murtack, J. Softimage XSI: Tutorial: Scripted Operated Shoulder. Avid Technology Inc., 2003.
- Unknown. The Softimage XSI Interface. http://www.kxcad.net/Softimage_XSI/Softimage_XSI_Documentation/interface_TheSOFTIMAGEXSIInterface.htm. Web.
Sunday, 03 April 2011
Rigging: Difference between the default rig and an internet rig. Discuss the importance of a good rig.
Rigging is a process whereby the character one has created through modelling and texturing is provided with a skeletal structure. This skeleton allows and aids movement for animation. As in anatomy, the bones in the software provide a sturdy structure, and the weights of the bones are distributed along the character’s mesh evenly enough to prevent unwanted deformation (Softimage’s User Guide). Softimage and other 3D software programs have a default rig one can use. These programs also allow the user to create their own rig from chains, joints and control points. One can also acquire a rig from the internet, where expert ‘riggers’ post their rigs for download. There are many pros and cons to each method of acquiring a rig, and one must find the method that suits one’s own needs.
Before jumping into rigging and enveloping the skeletal rig to the mesh, one should use a guide to lead the rigging process. Most 3D software programs come with biped guides for this purpose (Softimage User Guide). The guide can then be converted to a rig with the click of a button. This seems simple enough, but the weights of the rig may become somewhat distorted: when the leg is bent or the foot is rotated, more distortion occurs than with a rig made from scratch. This becomes problematic, as the process of painting the weights to fit the bones, character and movement is long and in some respects almost redundant; the allocation of weight is very specific, and once one changes the weight of one point, it will affect and deform another part of the mesh.
This default rig & character are slightly adjusted in their pose and positioning.
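The reason one weight adjustment disturbs another area, as noted above, is that a vertex’s envelope weights are normally kept summing to 100 percent, so raising one bone’s influence takes weight away from the others; the sketch below (a generic illustration, not XSI’s weight editor) shows that renormalisation with made-up bone names.

```python
# Minimal sketch: setting one bone's weight on a vertex and rescaling the
# rest so the weights still sum to 1.0 (100%). Bone names are hypothetical.

def set_weight(weights, bone, new_value):
    """Give `bone` the weight `new_value` and rescale the other bones to fill the rest."""
    new_value = max(0.0, min(1.0, new_value))
    others = {b: w for b, w in weights.items() if b != bone}
    remaining = 1.0 - new_value
    total_others = sum(others.values())
    result = {bone: new_value}
    for b, w in others.items():
        if total_others > 0:
            # Each other bone keeps its proportional share of the leftover weight.
            result[b] = remaining * (w / total_others)
        else:
            result[b] = remaining / len(others)
    return result

vertex_weights = {"thigh": 0.6, "shin": 0.3, "hip": 0.1}
print(set_weight(vertex_weights, "shin", 0.7))
# -> {'shin': 0.7, 'thigh': 0.257..., 'hip': 0.042...}: pushing one bone's
#    influence up pulls weight away from the neighbouring bones, which is the
#    deformation side effect described above.
```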
From recent personal experience, I would prefer to create my own rig, as it is built for a specific character and allows freedom and a certain amount of creativity. The amount of work is quite large and one may run into problems, but it does not limit what one needs from a rig, and the weight painting, in theory, should cause less trouble with deformation than the default XSI rig does.
Using a rig from the internet is a much easier and more practical solution, as it allows freedom of choice; one can examine the specifics of a rig and compare it to one’s character. For comparative purposes, I chose a rig designed around the XSI default character, whose creator built his own rig over the course of two days. This rig is from the internet and is claimed to have been made from scratch. Judging from the length of time it took, one can appreciate the laborious nature of rigging your own character. However, the controls are easy enough to use; some may be a bit confusing at first, but one can figure them out by fiddling about. Although similar to the default rig, this rig has more advantages, as it allows more control over the neck, the chest and the twisting of the legs and arms, giving the character a more diverse range and means of movement. The default Softimage rig is limited in its ability to move certain areas of the body and twist them the way the character needs to move. The internet rigs also seem more user-friendly and diverse in their rigging, as the controls, bones and joints are easy to figure out and the rigs seem easily adaptable.
Figure 1.2: the internet rig.
However, when it comes to weight distribution, both the default and the internet rig are a bit of a struggle to get right.
Rigging is an important aspect of 3D animation, and if done correctly one can get amazing results from a good rig setup. Rigging is about understanding how the human, animal or character will move. The bones must be drawn at the correct angles to get the right inverse and forward kinematics (Reallusion, 1). Rigging is a very complex process: each bone must sit in a certain hierarchy, which determines the way it moves and which related bones move with it or move it. This is essential, as one hierarchical mistake may lead to intense distortion of not only the bones but also the mesh and its movement.
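The effect of the hierarchy on movement can be shown with a tiny forward-kinematics chain: each joint’s final position depends on every parent rotation above it, which is why one misplaced parent distorts everything below. The sketch below is a generic 2D illustration with made-up bone lengths and angles, not a Softimage rig.

```python
# Minimal sketch: 2D forward kinematics down a parent -> child bone chain.
# Bone lengths and angles are hypothetical; angles are in degrees.
import math

def fk_positions(bones):
    """Accumulate parent rotations down the chain and return each joint's world position."""
    x, y, total_angle = 0.0, 0.0, 0.0
    positions = [(x, y)]                       # the root joint sits at the origin
    for length, angle in bones:
        total_angle += math.radians(angle)     # a child inherits every parent's rotation
        x += length * math.cos(total_angle)
        y += length * math.sin(total_angle)
        positions.append((x, y))
    return positions

# An "upper arm, forearm, hand" chain: rotate the shoulder and everything follows.
arm = [(3.0, 45.0), (2.0, -30.0), (1.0, 0.0)]
for joint, pos in zip(["shoulder", "elbow", "wrist", "hand_tip"], fk_positions(arm)):
    print(joint, tuple(round(c, 2) for c in pos))
```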
Another important factor in rigging is naming. Although naming does not seem important at the beginning of the rigging process, one will find it difficult later on to identify which bone is which and where it must be placed within the hierarchy. A naming process makes parenting, moving and weight painting easier and quicker. One can get lost, and ultimately stuck, when trying to rig a character with no names attached to the bones. Other users will also find a rig easier to use if it is named (Kundert-Gibbs & Derakhshani, 128).
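One simple way to keep a naming habit is to settle on a pattern that every bone name must match; the sketch below uses an invented side_part_index_type convention purely as an illustration.

```python
# Minimal sketch: generating and validating bone names against one convention.
# The "side_part_index_type" pattern here is a hypothetical example convention.
import re

NAME_PATTERN = re.compile(r"^(L|R|C)_[a-z]+_\d{2}_(bone|ctrl|null)$")

def bone_name(side, part, index, kind="bone"):
    """Build a name like 'L_arm_01_bone' so the hierarchy stays readable."""
    return "%s_%s_%02d_%s" % (side, part, index, kind)

def check_names(names):
    """Return the names that break the convention, so they can be fixed early."""
    return [n for n in names if not NAME_PATTERN.match(n)]

rig = [bone_name("L", "arm", 1), bone_name("L", "arm", 2), "Bone37"]
print(rig)                  # ['L_arm_01_bone', 'L_arm_02_bone', 'Bone37']
print(check_names(rig))     # ['Bone37'] -> the unnamed default that will cause confusion
```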
Sometimes it may be important to use a guide. Guides help form the rig for you, and even if you do not use the default rig, a guide may come in handy when the rig you have is not working the way it should. It helps you pick up mistakes in the rig and align the rig to the character. (Valve Developer Community, 1)
One must not use too many bones in the rigging process unless the character or object requires them. There should be just enough bones, and just enough control points. A good rig seems to give the character its ‘life’ and sense of realism. The more control one has, and the easier that control is to use, the better the animation.
Rigging is a process that requires specificity and discipline. In order to create a great rig, one must have a great amount of knowledge of the process and understand what a good rig can do. In choosing a rig, whether the 3D program’s default or one off the internet, one must make the decision from personal experience, ‘test driving’ both types to find the perfect fit for the character.
Works Cited
- Kundert-Gibbs, J. L. and D. Derakhshani. Maya: Secrets of the Pros. USA: Inbit Incorporated, 1999. Web. Accessed 2011/04/02.
- Unknown (Wiki). Softimage’s User Guide 2011. Wiki. Accessed 2011/04/02.
- Unknown. What is IK/FK. Web. Accessed 2011/04/02.
- Valve Developer Community. Rigging in XSI. 2009. Web. Accessed 2011/04/02.
- Internet rig acquired from https://www.creativecrash.com/maya/downloads/character-rigs/c/human-rig.