In art, as in physical exercise and other activities, warming up is often recommended. Whether in a formal art class or in text references, there is almost always some attempt to get students in the habit of skating the page with scribbles. This helps build a tactile sense of the pressure, resistance, and speed of the substrate as marks are laid down. Mike Mattesi, in his books and video lectures (links to which you can find in my reference tab above), has a variation on this, where the purpose isn’t just to get a feel for the surface, but also to remind the brain how curvature works with speed: the sharper the curve, the slower the speed. He uses the analogy of a car driving around a track, speeding up in the straight sections and slowing down to make the bends.
The result of that kind of exercise usually looks something like this:
All of this is extremely helpful, but I’ve been adding to my ‘warm-up’ routine as of late, because seriously, who ever warms up with just one stretch? There has to be a purpose behind the warm-up, though, and in this case it’s about putting the mind in a place where it sees form and structure in addition to line. Taking Mr. Mattesi’s lead a bit, we draw two curved and forceful arcs arranged, per his rule, so that one force leads directionally into the apex of the next, forming a rhythm. The key here is that I’m not looking to draw anything real. I’m just placing lines at this point.
Just two lines, right? Well, because of what we know of force from the references mentioned, the next step is to see form, and one of the keys to generating form is overlap, so let’s throw an example of that into the mix.
You can almost start to see the shape of something dimensional forming in the mind. Keep in mind that though I’m not specifically planning any of these marks, I am still trying to be aware of space. If I put the marks too close together, seeing form and volume develop might be more difficult.
In any case, let’s go ahead and add a couple of straight lines to add some structure and contrast to the curves.
Mind you, the order I’m doing this in is not gospel. Straights followed by overlap works just as well, but in the very beginning it helps to use the two curved lines in some kind of forceful arrangement. We can go further by adding some marks indicating volume:
Not my best endeavor, but that’s what warm-ups are for: to work out the kinks and set your mind working right. I went ahead and did a little more to this one, adding tone for effect. Not that you have to, unless you want to practice using tone to indicate depth at the same time, which is great, but as I’ve been told and instructed myself, it’s probably best to learn depth through line first, and tone later.
Anything you come up with here may end up looking rather surreal, but at the same time you may find yourself inadvertently drawing part of a figure. It’s whatever comes into your head on the spur of the moment. These are quick exercises, and doing them has helped me look for and see these same qualities in the representational work I go on to do after the warm-up.
That’s pretty much it. Sorry for the delay in updates the last couple of weeks, but I hoped to make it up a bit with a double post this week.
Until next time.
Posted in Uncategorized on March 24, 2012
Ever since I first started tackling Maya, and the subject of blend shape deformers, I wondered if there was a way around the most common way of using them; namely, duplicating the original shape and push-pull tweaking the duplicate. I’m not going to complain that said method is ineffective or inefficient, because if you’re using it to do what it’s most commonly tasked with, creating facial expressions on characters, then it’s certainly the best way to go about using them.
However, what if you wanted one shape to change completely into another specific shape? Well, if you can visualize which parts of the first shape will become which parts of the new one, then what I’ve learned is that indeed, you can do this.
The trick is how to build two sets of objects side by side, not only with the same number of vertices, but with the vertices ordered identically.
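Why the ordering matters (and not just the count) is easy to demonstrate outside Maya. Here's a minimal Python sketch with made-up point data; a blend shape pairs vertices strictly by index, never by position:

```python
def morph(base, target, w):
    """Linearly morph each base vertex toward the target vertex
    that shares its index: result = base + w * (target - base)."""
    return [tuple(b + w * (t - b) for b, t in zip(bv, tv))
            for bv, tv in zip(base, target)]

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
# The exact same four points, listed starting one step later:
reordered = [(1, 0), (1, 1), (0, 1), (0, 0)]

# Geometrically identical shapes, yet halfway through the morph the
# square collapses into a diamond, because pairing is by index only.
print(morph(square, reordered, 0.5))
# [(0.5, 0.0), (1.0, 0.5), (0.5, 1.0), (0.0, 0.5)]
```

Same vertex count, same shape, wrong order: the morph scrambles. That's the failure mode the technique below avoids.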
Fortunately, I covered a way to do this several weeks ago.
What follows is how to take that knowledge and apply it across several parts intended to be merged into one object, without losing that ordering of vertices or the ability to control either blend shape independently.
To that end, I’ve created for this demonstration what will end up being a single object with two blend shapes attached. For now, though, that object takes the form of two separate objects, one on top of the other, plus their two corresponding blend-shape targets, all created independently of one another (not duplicated and modified) but tested to ensure they have the same number and order of verts as the originals, using the methods already outlined previously.
From here, we simply create two blend shape nodes, one for the two top halves and one for the two bottom halves.
The rest of the process is relatively straightforward. After merging the objects, the surface can be made seamless again without re-ordering the verts by selecting and deleting one row, and one row only, of polygons from around the circumference of the morphing object. This works because eliminating one row of faces along the seam line removes only the faces while leaving the verts and their ordering in place, and it’s the vertices that matter most here. All that’s left is to use either the Bridge tool or the Append to Polygon tool to make the surface continuous again. The final result is a morphing object with two independently created targets applied to it, each with independent control, as you can see by the varied rate of morphing in the following clip:
Now, this method is not without its drawbacks. Primary among them is the fact that we are still left with multiple blend shape nodes, which bogs things down the more of them you have. I haven’t yet figured out whether the steps shown here can be modified so that both blend shape controls occupy the same node, but if I do, I’ll be sure to post an update about it.
I have several ideas on various burners at any given moment. While it’s great to have several ideas you find worthy of your attention, organizing all of them is a task that makes herding cats look like a promising enterprise.
To digress, though: I have been envisioning the kernel of a story that has just recently begun to take shape. What has really developed most recently is the location and the ‘look’ of the setting. It came about relatively quickly, with the elements and how they’d be expressed falling into place.
It’s a sprawling, wide-open setting, inspired by my own visits to the desert southwestern states, but amped up on steroids. Since I have the software, I can conceptualize my ideas in virtual, three-dimensional space. The scale of the setting, however, is the largest I’ve ever attempted. A setting that spans literally tens of miles is a far cry from a small room, and while I’m well aware that I could simply scale the model down, I would always wonder in the back of my head whether my virtual camera was capturing the image as it would look if the place actually existed in the real world. Philosophically, the difference may be moot to some artists. It’s a fantasy setting, so why not design it however you want, scale included? The CGI world is entirely of your creation, so control all the aspects you can, right?
But instead, I’ve chosen to use literal scaling as a willingly accepted artistic restraint. It will impose certain restrictions upon me, but it will simultaneously remove a variable that my mind might otherwise continually revisit, wondering what something should look like vs. what it will look like, because I’ve set a principle as a constant from the outset. Something that sounds restrictive is liberating. I can still use real-world points of reference without being absurdly literal. Google and Wikipedia are ready sources of proportion and scale for real-life features, so I can ballpark things without fussing over how “right” they should or shouldn’t look. If things need to be adjusted and fine-tuned later, they can be, like scaling individual elements for compositional purposes.
The main feature of the scene is a massive desert mesa, 23 miles long and 17 miles wide, with a secondary, smaller but higher plateau that will play host to a large metropolis, while the larger, lower plateau is more agricultural in nature. All of this is surrounded by dry desert landscape, like an island in a dry ocean. I can’t help but notice that real-life geological features like Uluru (formerly Ayers Rock) in Australia come to mind as well. That kind of feature, known as an inselberg, combined with a traditional mesa at a scale closer to that of features found on the ocean bottom (which can be much more massive), probably describes it best. Having that in mind will help later on, as real pictures of Uluru and its surroundings, as well as some others of, say, Monument Valley, will help inform artistic choices made later on.
I started with the mesa itself. Using a reference from Google Maps satellite images, I came up with a general shape, modified it slightly for my own purposes, and modeled it roughly using some of the techniques outlined in previous entries. I added two more basic elements to the setting: to help start giving a sense of scale, I designed one building as a centerpiece for the metropolis (about 1.5x the height of the current world-record-holding Burj Khalifa, the one featured in last year’s Mission Impossible film), and then I added a ground surface cut down from a sphere with the mean radius of the Earth itself. At these scales, the curvature of the Earth might actually play a role, and it took little or no extra time to find out how to specify that detail.
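For the curious, the size of that curvature effect is easy to ballpark. A quick sketch (the 3,959-mile mean radius is the standard figure; the 23-mile span is the mesa's length from above):

```python
import math

R_MILES = 3959.0  # mean radius of the Earth, in miles

def curvature_drop(span_miles):
    """How far the spherical surface falls below a flat tangent
    plane over a given ground distance (exact circle geometry)."""
    return R_MILES - math.sqrt(R_MILES**2 - span_miles**2)

# Over the mesa's 23-mile length, the ground falls away from a
# flat plane by roughly 350 feet. Not negligible at this scale.
drop_feet = curvature_drop(23.0) * 5280
print(round(drop_feet))
```

Roughly a 350-foot drop end to end, which is why the curved ground surface is worth the few extra minutes of setup.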
Here’s the result, visually:
That tiny gray speck on the upper mesa? That’s the Burj Khalifa-sized structure. The rendering camera (the equivalent of a standard 35mm) had to be placed a staggering 30 miles away from the subject to fit the scene in the frame. That’s the scale.
Rendering at this scale posed a few issues. I chose raytraced shadows, but kept the simplicity of spotlight lighting by placing the light a great distance away, simulating a directional light. Beyond that, I changed the camera’s clipping planes, putting the near clipping plane at 5 units and extending (by necessity) the far clipping plane to an absurdly large number, to make sure that features at great distances still show up in the render. Finally, and most important for quality, under ray-tracing in the render settings I bumped the Trace Bias up to 250 to eliminate rendering artifacts that showed up on the mesa object. Of particular note, for this project I am using RenderMan for Maya as my renderer of choice; settings for this feature may vary when raytracing in another renderer, such as Mental Ray, for example.
I’ll add some more features in the next entry to help develop scale and depth, but for now, I’ll simply change the tint of the primary light to better reflect a sunset, and lower the angle a little more as well:
And that’s where I’ll leave it for this week. In future weeks, we’ll hopefully continue to see how the decisions I’ve made here impact the ones down the line.
A while back, I started occasionally rendering out images in Maya with a z-depth version intended for adding a depth-of-field effect in post-production with Photoshop. There are a couple of different options in Photoshop that utilize the black/white/gray information in the alpha channel to calculate how far from the ‘camera plane’ the depth of field is most in focus. One is the filter native to Photoshop, under Filter>Blur>Lens Blur, and the other is a proprietary add-on I’ve discovered, made by Frischluft, called Lenscare. They each have their strengths and weaknesses, but for the purpose of this entry, I’ll stick with the native Photoshop version.
I started wondering to myself what it would look like to apply the filter to achieve a similar effect in hand-drawn art, or photographs, as an additional method of drawing the eye to where you want it to be.
Here’s a drawing I did last year. It’s oriented in a turned position, so it seemed appropriate for a test of this idea. I selected the white space around the horse with the Magic Wand tool, refined the edges of the selection, inverted it so that it highlighted the positive rather than the negative space, and moved over to the Channels palette, where I created a new alpha channel.
So, knowing that the Lens Blur filter utilizes gradations of black and white to delineate depth, with white being near and black being far, let’s create a quick black/white gradient from left to right, so that the horse’s head and forelegs are brought ‘nearer’ to the viewer.
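Numerically, that alpha channel is nothing more than a ramp of pixel values. A small sketch of what the gradient amounts to (the dimensions here are arbitrary; values are standard 0-255 grayscale):

```python
def depth_gradient(width, height, near=255, far=0):
    """Build a left-to-right grayscale ramp for a depth map:
    white (255) reads as near, black (0) as far."""
    row = [round(near + (far - near) * x / (width - 1))
           for x in range(width)]
    return [list(row) for _ in range(height)]

ramp = depth_gradient(5, 2)
print(ramp[0])  # [255, 191, 128, 64, 0]
```

Hand-painting lights and darks into the channel is just a freeform version of the same thing: assigning each region a depth value for the filter to read.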
Now, going back to the RGB Layers palette, load the newly created alpha channel as your selection and run the Lens Blur filter. The following window pops up, allowing you to adjust the focal distance and the amount of blur.
And that’s pretty much it. With a little more effort on applying the lights and darks in the alpha channel, an even better approximation could be reached. Or, a diorama look could be achieved by assigning only solid shades of gray to the different objects in the work.
Oh joy, it’s time to rig faces in Maya.
It’s not something I look forward to, and that probably has something to do with the fact that the first time I went through these motions, I took a very literal path from point A to point B, using wire tools and cluster deformers to achieve facial movement.
But as with most subjects on this blog, I’m not satisfied with that method; it’s tedious and time consuming. So this time I’m going to use blend shapes, which can be equally torturous, except that I’m introducing an intermediary step that links wire deformers and blend shapes, using lattice deformers and cluster deformers to get there.
Here’s my assumed start position:
We’re going to start with the mouth and create a wire deformer for it, so I select the appropriate poly edges around the mouth and use Modify>Convert>Polygon Edges to Curve to create the controlling wire.
Go ahead and create a new wire deformer from the Create Deformers menu, using the original curve and the object with the face you intend to animate. Now, I know I said we’re using other deformers as an intermediary to get to the blend shape deformers, but we’re actually going to create the blend shape node now, using the two existing curves as the basis.
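For context, the blend shape node we just made is doing nothing more exotic than per-point linear interpolation, and multiple targets simply add their offsets together. A rough Python sketch (the curve point data and the "smile"/"frown" names are invented for illustration, not anything Maya generates):

```python
def apply_blends(base, targets, weights):
    """Blend shape targets combine additively, point by point:
    result_i = base_i + sum(w * (target_i - base_i) for each target)."""
    out = []
    for i, bv in enumerate(base):
        point = list(bv)
        for tgt, w in zip(targets, weights):
            for axis in range(len(point)):
                point[axis] += w * (tgt[i][axis] - bv[axis])
        out.append(tuple(point))
    return out

base = [(0.0, 0.0), (1.0, 0.0)]
smile = [(0.0, 1.0), (1.0, 1.0)]    # hypothetical deformed curves
frown = [(0.0, -1.0), (1.0, -1.0)]

# Each slider contributes its own fraction, independently:
print(apply_blends(base, [smile, frown], [0.5, 0.25]))
# [(0.0, 0.25), (1.0, 0.25)]
```

This is also why each target slider can be animated independently: the weights never interact, they just sum.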
This is good, but the curve has so many control points that adjusting them for each shape would be almost as much of a pain as adjusting verts by hand on the model. We could weed them out, but the fit would be sacrificed, not to mention we’d have to delete the same CVs on both curves in the same order to keep the blend shape working as intended.
Rather than get bogged down in all that, create a lattice deformer around the top and bottom CVs respectively. For demonstration purposes, I’ve done just the top set here for clarity.
The advantage of lattice deformers is that they take high-density curves and polys and smooth the deformation over a larger area. We can now control those by manipulating clusters attached to the lattice points, as seen in the second of the two images above.
Let’s say this is the expression we want for our next blend shape. Duplicate the secondary curve in its deformed shape and add that as your next blend shape target. A new target with a slider shows up in the Blend Shape window, and sliding it reproduces the exact same deformation you got a moment ago by manipulating the cluster. Another benefit of the cluster deformer is the ability to reset its position back to ‘zero’ over and over again, repeatedly creating new blend shape targets, and so on.
Lastly, it should be noted that blend shape deformer nodes, as long as they are defined as “Local” when created, are free to be attached directly under the character’s or skeleton’s node without having to account for double transformations, a tedious step involved in most other types of animation control rigs. Just remember to parent both the wire and baseWire shapes to the appropriate part of the anatomy and/or rig, and the blend shape should still work as intended.
Alton Brown, of Food Network fame, has made the point repeatedly (and not without reason, I might add) that he loves multipurpose tools and hates single-purpose ones. And this is one of the reasons I love what Alton Brown does. He’s a practical man. Why expend energy and waste your resources on something that has only one obscure purpose? I mean, if the purpose is something you’ll find yourself doing really, really often anyway, that’s one thing, but who really needs the Popeil Inside-The-Egg-Scrambler? That’s what I have a whisk for. And the whisk helps me when I make pancake and waffle batter as well.
That birail tool I keep bringing up, as it turns out, is good for more than four-sided surfaces. What if you have a surface that has no clear ‘corners’ to it? What if you want more than just the outermost edge loop in a shape to be concentric upon itself? What if you have a shape that comes to a point? Muscle groups often do this, and if you’re modeling a more detailed figure, you might really need that muscle group to be accurately defined as coming to a triangular point. I’ll address these situations in today’s entry.
Here’s the basic setup: create a NURBS circle, duplicate it, and scale it up or down to fit inside or outside the first, your choice. Selecting a curve point slightly offset from the curve origin, detach and eliminate a small arc from the same area on both circles. Connect the two pairs of endpoints with new EP curves, and snap a few more EP curves spanning the two circles at whatever points you find will be key in defining shapes later. You should have something that looks like a broken wheel with a few spokes. I’ve made mine so that the two circles are offset in height as well, so it looks more like a cone.
Next, let’s move to the Birail 3 tool. As we’ve done before, select your U and V polygon resolution (in this case, the U direction is the one that runs around the shape, while the V direction runs along the radius), enter the tool, select the spokes, hit Enter, select the rails, hit Enter again, and you get your basic shape.
Note: leave history on here if you want to use the curves to continue smoothly shaping the polygons before your final edits. Just remember to leave the CVs that touch other curves alone, or the mesh will disappear; because you’re using history, the surface can’t survive once the curves that define it no longer intersect.
Now there are numerous ways to fill the gap and the hole in the middle. In my case, I used the Bridge tool in the Edit Mesh menu, plus a combination of the Append to Polygon tool and the Split Polygon tool from the same menu, to create a radial edge flow in one direction and the desired continuous flow in the circumferential direction.
The result here looks remarkably like a half-dome that you could just as easily produce by taking a stock sphere and cutting it in half. True, but again, here you can conform the final boundary that your shape will fill before you create the mesh. I can’t emphasize that enough. The push-pull tweak is all but eliminated. Stock shapes are good for making stock shapes, and little else, I’ve discovered.
Now on to the triangular shape. We’ll define our boundary with only three curves this time. I’ve arranged them thus:
Almost like the shape of a deltoid, right? What we’re going to attempt here is to create a situation in which the polys converge to the point on the left side of the shape in the image above and round out in the other two corners. To control the direction, make sure the two curves meeting at the vertex where you want the polys to converge are aimed at each other: either both curves end at that point, or both curves start there (head to head or tail to tail, never head to tail).
Using the Birail 1 tool, go about your business as usual, defining the curve that’s ‘away’ from the convergence point first, then selecting the two rails that form its path.
There are any number of ways you can now conduct the flow of polys around this kind of object. I ended up with this configuration after only a few adjustments:
Big lesson this week: Get to love versatility, and keep an eye out for things that let you achieve it.
Posted in Uncategorized on January 19, 2012
Usually the CG rendering process begins in a modeling program and ends with some post-production work in programs like Photoshop or After Effects, but today I would like to ask: “Why stop there?”
The computer loves to render smoothly graded, precise images, images actually so clean that they can call attention to themselves for that very reason. So, what if we rendered an image for the purpose of processing it, then re-used it as a texture image for another render?
Here’s the model you last saw me make two weeks ago:
Now we could just begin the process of re-using this image right away, but I figured out a neat little post-process design method in Illustrator that I liked the look of, so let’s do that first.
In Illustrator, live tracing the image in color gives something that looks like this:

After a brief pit stop in Photoshop to make it truly grayscale and save it out into an image format more easily read by Maya, we can now apply the image as a texture to a new material in several different ways. Such as…
Color, via a ramp texture.
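What the ramp texture is doing with that grayscale is straightforward to sketch: each pixel's value looks up a color by interpolating between a set of stops. A rough Python illustration (the stop positions and colors here are my own invention, not Maya's ramp node API):

```python
def ramp_lookup(v, stops):
    """Map a grayscale value v in [0, 1] to an (r, g, b) color by
    linear interpolation between (position, color) stops."""
    stops = sorted(stops)
    if v <= stops[0][0]:
        return stops[0][1]
    for (p0, c0), (p1, c1) in zip(stops, stops[1:]):
        if v <= p1:
            t = (v - p0) / (p1 - p0)
            return tuple(round(a + t * (b - a)) for a, b in zip(c0, c1))
    return stops[-1][1]

# Hypothetical shadow -> midtone -> highlight stops:
stops = [(0.0, (20, 10, 60)), (0.5, (200, 80, 40)), (1.0, (255, 240, 200))]
print(ramp_lookup(0.25, stops))  # halfway between the first two stops
```

This is why getting clean value separation in the original render matters so much: the grayscale values are effectively indices into your palette.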
Of course, a lot depends on how well you light your object in the original render, and on getting your shadows to render just to the point where form is clearly outlined. As in photography, squinting helps preview what you’re going to get on the other end of this tactic.
It reminds me of a project from an early design class, where the goal was to build up texture: you had to, well, build marks up over old marks, over and over again, often using different media, creating something with real interest and depth at the end of the run, well far away from the first layer.
Digitally speaking, you can keep migrating back and forth between different image-editing and rendering programs, expanding the versatility of your end results much the same way that continually using and reusing different media on a canvas or board results in greater depth. The post-production process isn’t necessarily the end of the line, unless you choose it to be.
I’m going to build on the last two weeks’ worth of discoveries with a bit of a slide show indicating some of the variations of this new-found technique as I went through and finished the model’s head. Combined with what we’ve gone through already, it’s starting to form a picture of a comprehensive skillset that can efficiently model entire figures. If I can compare it to anything, I’d compare it to custom plate fabrication: you break down your character into a set of plates or shapes, and those plates have to be fabricated so the number of polys along each edge fits its neighbors. However you arrive at the parameters of each plate, you can then apply the techniques already discussed.
For example, as some of the illustrations below demonstrate, if you can landmark key points of a surface in three-dimensional space, say, using locators, you can string curves through them to form the boundaries you need to proceed.
But what if, for example, the rest of the body you’re going to attach this head to has only 16 polys around the neck, while your head is being modeled at a higher poly count? This isn’t all that unexpected a situation, as the face is often modeled at higher resolution to accommodate its smaller features and to provide smoother animation for the subtleties of facial movement. The rest of the body isn’t as nuanced, except perhaps the extremities of the hands and feet. In any case, the gap must be bridged, unless the parts behind clothing aren’t going to be modeled at all (binding multiple objects representing only the emergent, visible parts of the model). I ran into that situation here. I wanted to limit the back left quarter of the neck to a width of four polys. But this is what I had to work with:
If you look carefully, you can see my thought process already. I intend to take the last four polys and spread them out wider, but there isn’t a place for those remaining five polys to go, right?
Now the extra polys have been routed around the base of the neck rather than down the neck into the rest of the body. I did partition out the neck area to receive them too, but that’s a function of the principle above about sub-dividing areas.
I’ve modeled a couple more heads and a hand in the span of time between last week’s entry and today’s using these procedures, and I have yet to run into a situation that they, in conjunction with a little creative thinking, haven’t been able to resolve. The hand I did already had the rest of the model waiting for it, and when I imported it, I was confident it would blend seamlessly with the rest, and it did. That’s not something I can say for all the character modeling work I’ve done thus far, but alas, I didn’t know then what I know now.
But what if you want a surface with a continuous, smooth, single edge loop of polys running around it, rather than a shape bounded by four separate edge loop flows?
That’s it for this week. If there’s more to be mined from this method that I can dig up, you’ll see it shared here. Until then, the subject may wander to other venues again…
In furtherance of last week’s entry, I decided to continue my pursuit of the birail tool in Maya, and it has been something of a revelatory experience. With the discovery of a couple of additional tools used in combination with the birail, I now see how to avoid much of the absolute mess of building a model only to find that the polygons you’re trying so hard to sew together just do not line up, or have a huge discrepancy in poly count from one side to the other, forcing you to spend (oftentimes) hours figuring out how to route the edge loops so the model still moves and deforms correctly. With all the work at stake, you really don’t want to have to revisit the model’s construction. Here’s a workflow that might help you out if you find yourself in my position.
Before we start, here’s a list of the tools we’ll be using in this entry:
Modify –> Convert –> Polygon Edges to Curve
Create –> Bezier Curve Tool
Edit Curves –> Open/Close Curves
Edit Curves –> Detach Curves
Edit Curves –> Attach Curves
Edit Curves –> Cut Curves
Surfaces –> Birail 2 Tool
As the basis for this week’s example, I’m pulling one of my old characters out for a re-design. It will also be a test of how effective the technique I outlined with a throwaway model in my last entry is on one that may actually have use for me in the future. Matt Wulf was the first comic character I created. As my drawing experience at that time was almost non-existent, he wasn’t that well designed, and even in his last incarnation he, as well as the rest of the cast, tended to lack visual structure, something I’ve come to appreciate more in the years between. Worst of all, he seemed generic. So here I’ve opened a Maya window and put a new profile and front view of the re-designed head behind their respective cameras. Of special note is the emphasis on lines that delineate structure in both views.
Last time, with the leg example, I used the basic side and front camera views to model profiles on the axis, but this meant I still had to reposition the curves for alignment afterward. To skip some of that this time, we’re going to start with a few construction planes. It should also be noted that doing this essentially takes the lesson from my earlier entry and adds a little planning and some different modeling tools.
So I’ll start with the construction planes. We’ll do the muzzle shape first.
One for the side, one for the front, and one for the top. Making the side plane ‘live’, we can switch to the side view and draw on the canted angle using your preferred method of curve creation (in this case, a Bezier curve), thus:
Now, thinking about how high a resolution I’d like to maintain for the model, in this case about two polys per inch, I see that the coverage on the projected plane surface is about 4×4 inches, so I’ll aim for a poly grid that’s 8×8. But we can’t use the birail tool the way we want to until we have at least four curves whose ends all touch. Fortunately, Maya’s Edit Curves menu includes Detach Curves, which will break the single curve into multiples wherever we wish, while leaving the ends of each tangent with the others, as the birail tool requires. Choosing the points at which to break apart the curve becomes easier with practice, but the general idea is to imagine where the edge flow will merge with your next application of this cycle on an adjoining part of the anatomy. In this case, the shape almost suggests four sides, with corners at the bridge of the muzzle, the corner of the mouth, the tip below the nose, and… somewhere in the sweeping curve on the front of the muzzle? That’s going to take some judgment, but one clue is generally to look in the area of the curve’s apex. Choose these points by right-clicking on the curve and selecting “Curve Point”. If you hold down Shift while clicking, you can select all four points at once, and then click Detach Curves, giving us the following:

Now, to keep track of direction, I came up with my own convention for using the birail tool: I always select top-bottom before left-right. This isn’t a law or even a rule; the point is that you should have a convention that works for you and stick with it. There may still be times when you want to switch, because the quality of the mesh can differ substantially using the opposite convention, but if you’re satisfied with the results of your regular convention, you’re good to go.
The first pairing of curves you pick will be divided into the number in the “U” field, less one, while the second pairing will be divided into the number in the “V” field, less one. The settings used for the birail tool are covered in the previous entry, for reference if you need it. Applying the birail tool using my convention yields:
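In other words, if the "less one" behavior holds, the field value is just the desired poly count plus one. A tiny sketch using the muzzle numbers from above (about two polys per inch over a roughly four-inch span):

```python
def birail_field_value(polys_per_inch, span_inches):
    """Polys produced along a direction = field value - 1, so ask
    for one more division than the number of polys you want."""
    polys = round(polys_per_inch * span_inches)
    return polys + 1

print(birail_field_value(2, 4))  # 9, for an 8-poly span
```

Trivial arithmetic, but it's the kind of off-by-one that's easy to get backwards when you're eyeballing resolution in the options box.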
And with a little application of the Sculpt Geometry tool set to “relax”…
I think now is as good a time as any to bring up a few troubleshooting points I came across while refining this method. First, when creating your curves, you can use the Open/Close Curve tool to make sure you have a closed loop, which saves you the trouble of snapping CVs manually. However, remember that even after doing this, the curve still has a discontinuity where its beginning and end meet. When you begin detaching curves into components for the birail tool, this may result in a ‘fifth’ curve you didn’t plan on having, and it may be small enough in the viewport to miss easily and derail your birail, so to speak. It’s easily fixed by using the Attach Curves tool to re-merge that extra strand with the nearest larger curve, and we’re back to four curves, as we should be. Also, turn two-sided lighting off in all your viewports. It’ll help keep you aware of the direction your surface normals are aiming, letting you reverse normals as necessary to keep everything consistent.
Also of note, remember to ‘flush’ your history often. Some curve modification tools demand a clean object history to function properly (while others won’t work unless History is “on”, so you can’t necessarily turn it off completely either).
In any case, we’re ready to merge these two surfaces, so assuming you’re already familiar with how to do that, we’ll proceed from that point. Let’s select a row of faces on the first shell we created, since it’s the larger of the two (in poly count), and delete them. All that remains for a more natural blending is to use either the Bridge tool or the Append Polygon tool. The former is quicker, but often glitchy compared with the latter, so take that as you will.
Okay. Not too bad, but we’re not seeing much in the way of form yet. Let’s do the top of the muzzle next. I was going to use a construction plane here as with the other two surfaces, but on reflection, Matt’s nose is far too much like a ski-jump to use such a flat modeling surface effectively, so let’s look into using a makeshift NURBS surface as a canvas that follows the contours more closely. I can still use the construction plane, but only as a place to create the dummy NURBS surface. Note that I don’t have to do anything in creating this NURBS surface other than make sure the shape and size are accurate to my eye. I’m not thinking about matching polys at all here, because I don’t have to any more, at least not until after we’ve drawn curves on this NURBS surface. That in itself is liberating.
Now I’ll take that little ski-jump there and make it our live object. Then I’ll move to the front view, where I’ll draw on it, following the contours of the bridge of the nose as closely as possible, and then bring the curve back up the mid-line. We can’t close the loop yet, however, because drawing a curve on a NURBS surface the way we’ve done here makes the curve part of the NURBS object, and we want the curve to be its own object. To fix this, we go back to the Edit Curves menu and select the first item on the list, the Duplicate Surface Curves tool. Now that we’ve got the curve we need, we can delete the NURBS object and close the curve to make it a loop. Then we proceed as we ordinarily would, detaching and using the birail tool.
And as before, combining objects, deleting the appropriate number of edge polys, and bridging between the shells.
I get the feeling that if I were to just make a single birail surface out of this loop, it might end up being too ‘soft’ and button-like. I want to add more structure, so I think I’ll do it in two passes, breaking down the single loop into two loops. The tactic I’m going to employ in choosing the locations to break the loop is twofold: one, align those points as closely as possible with the perceived “turning edge” of the model, where front becomes side or top, and two, line them up with the nearest vertex on the existing model. I’ll continue to re-attach and detach curves as I deem necessary to create birails that sufficiently fill in the gaps and provide structure. I’ve numbered the edges as I reconfigure them to accommodate each successive birail in the following images…
In the last image in that string, you can see I pulled out the Insert Edge tool to accommodate the large polys that developed along the bridge of the nose.
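As an aside, the “nearest vertex” part of the break-point tactic above is really just a closest-point search. Here it is sketched in plain Python, purely as an illustration of the principle; these names are mine, not Maya commands, and in Maya you’d do this by eye with snapping.

```python
import math

# Illustrative sketch: pick the existing-model vertex nearest to a
# candidate break point on the curve, so the new birail's edge flow
# lines up with the mesh we already have.

def nearest_vertex(point, vertices):
    """Return the vertex in `vertices` closest to `point` (3D tuples)."""
    return min(vertices, key=lambda v: math.dist(v, point))

# A candidate break point and a few vertices from the existing shell:
verts = [(0.0, 0.0, 0.0), (1.0, 0.5, 0.2), (2.0, 1.0, 0.0)]
print(nearest_vertex((1.1, 0.4, 0.0), verts))  # -> (1.0, 0.5, 0.2)
```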
Here’s a comparison shot to see where we’re at compared to the drawings.
The profile view is pretty much spot-on, though we can still see that in the front view, in the transition area between muzzle and cheekbone, there is some divergence. How I fix that will be evident when I pick this up in next week’s entry, where I investigate using these methods with solid stand-in shapes and intersections.
As a bit of a post-script to this entry, I have to wonder whether anyone else has come across this method of model-building yet, with so much potential to avoid the agonizing prospect of hammering away at vertices or individually building edge loop after edge loop through extrusion. That’s just not a very intuitive artistic process. One of my fairly regular habits is searching for techniques and methods others have found for approaching certain problems, and seeing if I can adapt them for what I’m doing. But on this particular subject, what I have found out there rarely ventures away from only the most rudimentary techniques. If anyone has any further ideas for refining or combining this method with others they’ve seen, I welcome all comments to that end.
As mentioned on my resources page, one of my most significant go-to books for drawing is the Force series, which brilliantly illustrates how, throughout living organisms, there is a rhythm, a path where metaphorical ‘force’ ricochets from one part of the body to the next. Keeping this in mind, I decided to see if there was a way to more directly translate or ‘sculpt’ this concept in CGI modeling software (Maya, for the purposes of this entry), starting with drawings produced using this method and then arriving at a model by mimicking those methods with standard modeling tools in Maya.
Here’s where I started:
Just a very quickly sketched out leg. In retrospect, I could have pushed the forces a lot more, but you can still clearly see them sweep from the front thigh through the knee, into the back of the calf, into the heel.
Most modeling tutorials I’ve seen will instruct the student either to start with stock cylindrical objects and push and pull vertices to match the geometry until you’re ready to peel back your eyelids from the sheer inanity, or to create one edge loop at a time, bridging each new one with the previous. The latter is an improvement, but a slight one.
I think, however, that I have found a way to more easily construct limbs and bodies for characters relying almost entirely on your sketches in the creation of the geometry rather than hammering stock geometry into your desired form. At the very least, it should minimize the amount of that kind of irritation.
Anyway, here we go…
I’ve imported the two views of the leg into Maya’s Front and Side camera views respectively. In the Perspective view, that looks like this:
Now, in the Side and Front views, I use my choice of curve tools (in this case, a combination of the EP curve and Bezier curve tools) to trace the front and back of the leg in the Side view, and the inside and outside profiles of the leg in the Front view. You’ll notice that they probably won’t line up the way they should, as the centerline of the limb is not necessarily plumb vertical the way the planes you’ve been drawing your curves on are. With a few rotations and translations, you’ll get something that looks like this:
Also shown above are two loops that connect the tops and bottoms of the curves. These are important for the tool I ended up using to create the surfaces. Starting with the line towards the back (the heel side) and with curve-snap on, I used the EP curve tool to make the loops, moving in a counterclockwise (from above) direction for both. Consistency in creating the top and bottom loops is critical.
Next, I select the Birail 3+ tool from the Edit Curves menu and set it to the following settings:
Using the general tessellation setting here allows you to build the leg in four surface wedges that have the same number of polys along each edge, which lets them line up closely enough to be merged easily in the following steps. Take note here! The number of polygons in the U and V directions will be one less than the numbers entered in the initial tessellation controls. So if you want your limb to be 30 polys long, enter 31 into the U field. You might also want to take note at this point of how many polygons around you want your limb to be, as this will come in handy when modeling the torso, where the opening your limb connects into needs a matching number of polys. We’ll be doing this four times to bring the limb surface to a full 360 degrees before stitching all four wedges together, so take the number in your V field, subtract one, and multiply by four. That will be the number to remember for attaching these limbs more easily later.
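Since there are two “off by one” rules in play here, a quick sketch of the arithmetic may help. Again, this is just my own illustration in plain Python of the counting described above, not anything from Maya, and the helper name is made up.

```python
# Illustrative only: each birail field produces one fewer poly span than
# the number entered, and the limb is built from four wedges, so the
# finished circumference is 4 * (V - 1) polys.

def limb_tessellation(length_polys, v_field):
    u_field = length_polys + 1          # enter this in the U field
    polys_around = 4 * (v_field - 1)    # polys around the finished limb
    return u_field, polys_around

# A limb 30 polys long built from wedges with 4 in the V field:
# enter 31 in U, and plan a 12-poly opening on the torso.
print(limb_tessellation(30, 4))  # -> (31, 12)
```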
Starting again from the place where you started your loops, activate the birail 3+ tool and follow its instructions, moving counterclockwise as before, one segment at a time, until you’re at the last segment. Caution! Have two-sided lighting turned off here, so you can easily detect whether the normals of the surfaces you’re creating are facing outward the way they should be! Also, in between each use of the birail 3+ tool, make absolutely sure you remember to delete history on the surfaces.
Before you do the last quarter of the leg, Maya needs the direction of the loops to be reversed (otherwise, it will span the paths opposite to the direction we need). I can’t say it enough: make sure you’ve deleted the history on the three surfaces you’ve already created, or when you reverse the direction of the paths, it will affect your work, and when I say ‘affect’, I mean ‘make it look screwy’.
All right, once you’ve reversed the direction of the paths (Edit Curves > Reverse Curve Direction), you can finish off the last quarter by repeating the above process, going clockwise this time.
We’re in the home stretch now. If you’re familiar with the basics of Maya, then you know how to combine objects, merge vertices, etc. It also can’t hurt to make sure once more that all the surface normals are facing outward before doing that.
Once that’s done, and with less than 30 seconds worth of smoothing and sculpting with the surface sculpting tools, here’s the final result, lit and rendered.
Now, you can tell this is still somewhat unrefined, but your starting shape is much closer to the end result than it would be using more traditional techniques. All the polys are quads, so they’ll deform quite well when animated, and using this method you can plan out your other limbs and the torso ahead of time more easily than freewheeling lets you do (or so personal experience has taught me).