Ever since I first started tackling Maya and the subject of blend shape deformers, I’ve wondered if there was a way around the most common way of using them; namely, duplicating the original shape and push-pull tweaking the duplicate. I’m not going to complain that said method is ineffective or inefficient, because if you’re using it for what it’s most commonly tasked with, creating facial expressions on characters, then it’s certainly the best way to go.
However, what if you wanted to have one shape completely change into another specific shape? Well, if you can visualize comparatively which parts of one shape will become which parts of the new shape, then what I’ve learned is that indeed, you can do this.
The trick is how to build two sets of objects side by side, not only with the same number of vertices, but with the vertices ordered identically.
Fortunately, I covered a way to do this several weeks ago.
What follows is how to take that knowledge and apply it across several parts intended to be merged into one object, without losing either the ordering of the vertices or the ability to control each blend shape independently.
To that end, I’ve created for this demonstration what will end up being a single object with two blend shapes attached. For now, though, that object takes the form of two separate objects, one on top of the other, plus their corresponding blend-shape targets, all created independently of one another (not duplicated and modified) but tested, using the methods already outlined, to ensure that each has the same number and order of verts as its counterpart.
From here, we simply create two blend shape nodes, one for the two top halves and one for the two bottom halves.
The rest of the process is relatively straightforward. After merging the objects, the surface can be made seamless again without re-ordering the verts by selecting and deleting one row, and one row only, of polygons from around the circumference of the morphing object. This works because eliminating a single row of faces along the seam removes only the faces while leaving the verts and their ordering in place, and it’s the vertices that matter most here. All that’s left is to use either the Bridge tool or the Append to Polygon tool to make the surface continuous again. The final result is a morphing object with two independently created targets applied to it, each under independent control, as you can see from the varied rate of morphing in the following clip:
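It helps to remember what each blend shape node is actually computing: a per-vertex linear interpolation from the base shape toward its target, driven by the slider weight. Here’s a minimal sketch of that math in plain Python (the function name is mine, and this is the textbook formula rather than Maya’s actual implementation):

```python
def blend(base, target, weight):
    """Per-vertex linear blend: base + weight * (target - base).
    Both meshes are index-matched lists of (x, y, z) tuples, which
    is exactly why vertex count and ordering must agree."""
    return [
        tuple(b + weight * (t - b) for b, t in zip(bv, tv))
        for bv, tv in zip(base, target)
    ]

# A weight of 0 returns the base, 1 returns the target,
# and 0.5 lands each vertex at the midpoint.
base = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
target = [(0.0, 2.0, 0.0), (1.0, 2.0, 0.0)]
halfway = blend(base, target, 0.5)
```

With two nodes, as above, each half of the merged object gets its own weight, which is why the two halves can morph at different rates.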
Now, this method is not without its drawbacks. Primary among them is the fact that we are still left with multiple blend shape nodes, which bog things down the more of them you have. I haven’t yet figured out whether both blend shape controls can ever be made to occupy the same node by modifying the steps shown here, but if I do, I’ll be sure to post an update about it.
I have several ideas on various burners at any given moment. While it’s great to have several ideas you find worthy of your attention, organizing all of them is a task next to which herding cats looks like a promising enterprise.
To digress, though: I have been envisioning the kernel of a story that has just recently begun to take shape. What has really developed most recently is the location and the ‘look’ of the setting where I intend to place the story. It came about relatively quickly, with the elements and how they’d be expressed falling into place.
It’s a sprawling, wide-open setting, inspired by my own visits to the desert Southwest, but amped up on steroids. Since I have the software, I can conceptualize my ideas in virtual, three-dimensional space. The scale of the setting, however, is the largest I’ve ever attempted. A setting that spans literally tens of miles is a far cry from a small room, and while I’m well aware that I could simply scale the model down, I would always wonder in the back of my head whether my virtual camera was capturing the image as it would look if the place actually existed in the real world. Philosophically, the difference may be moot to some artists. It’s a fantasy setting, so why not design it however you want, scale included? The CGI world is entirely of your creation, so control all the aspects you can, right?
But instead, I’ve chosen literal scaling as a willingly accepted artistic restraint. It will impose certain restrictions on me, but it will simultaneously remove a variable my mind might otherwise continually revisit, wondering what something should look like vs. what it will look like, because I’ve set a principle as a constant from the outset. Something that sounds restrictive is liberating. I can still use real-world points of reference without having to be absurdly literal. Google and Wikipedia are ready sources of proportion and scale for real-life features, so I can ballpark things without fussing over how “right” they should or shouldn’t look. If things need to be adjusted and fine-tuned later, they can be, like scaling individual elements for compositional purposes.
The main feature of the scene is a massive desert mesa, 23 miles long and 17 miles wide. A secondary, smaller but higher plateau will play host to a large metropolis, while the larger, lower plateau is more agricultural in nature. All of this is surrounded by dry desert landscape, like an island in a dry ocean. I can’t help but notice that real-life geological features like Uluru (formerly known as Ayers Rock) in Australia come to mind as well. That kind of feature, known as an inselberg, combined with a traditional mesa, at a scale closer to that of features found on the ocean bottom (which can be much more massive), probably describes it best. Having that in mind will help later on, as real pictures of Uluru and its surroundings, as well as some of, say, Monument Valley, will help inform artistic choices made down the road.
I started with the mesa itself: using references from Google Maps and satellite images, I came up with a general shape, modified it slightly for my own purposes, and modeled it roughly using some of the techniques outlined in previous entries. I then added two more basic elements to the setting. To help establish a sense of scale, I designed one building as a centerpiece for the metropolis (about 1.5x the height of the current record holder, the Burj Khalifa, the one featured in last year’s Mission Impossible film), and I added a ground surface cut down from a sphere with the mean radius of the Earth. At these scales, the curvature of the Earth might actually play a role, and it took little or no extra time to find out how to specify that detail.
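If you’re curious how much the curvature actually matters at this scale, the standard back-of-the-envelope formula says a sphere’s surface drops below its tangent plane by roughly d²/2R over a distance d. A quick sketch in plain Python (the function name is mine; 3959 miles is the commonly quoted mean radius of the Earth):

```python
def curvature_drop_feet(distance_miles, radius_miles=3959.0):
    """Approximate drop of a sphere's surface below a flat tangent
    plane over the given distance: d^2 / (2R), converted to feet."""
    drop_miles = distance_miles ** 2 / (2.0 * radius_miles)
    return drop_miles * 5280.0

# Over the 23-mile length of the mesa, the ground falls away by
# roughly 350 feet relative to a flat plane, which is not negligible.
drop = curvature_drop_feet(23.0)
```

Over a small room, by contrast, the drop is microscopic, which is why nobody bothers with curvature at ordinary scene scales.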
Here’s the result, visually:
That tiny gray speck on the upper mesa? That’s the Burj Khalifa-sized structure. The rendering camera (the equivalent of a standard 35mm) had to be placed a staggering 30 miles away from the subject to fit it in the frame. That’s the scale.
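That distance is roughly what the lens geometry predicts. A lens’s horizontal field of view is 2·atan(sensor width / 2f), and the distance needed to fit a subject of width w in frame is w / (2·tan(fov/2)). A quick check in plain Python (assuming a 35mm lens on a full 36mm-wide frame; the 23 miles is the mesa’s long dimension from earlier):

```python
import math

def distance_to_fit(subject_width, focal_mm=35.0, sensor_mm=36.0):
    """Distance (in subject_width's units) at which the given lens
    just fits subject_width across the frame."""
    half_fov = math.atan((sensor_mm / 2.0) / focal_mm)
    return (subject_width / 2.0) / math.tan(half_fov)

# Fitting the 23-mile length edge-on needs about 22 miles of distance;
# add some margin and an oblique viewing angle, and 30 miles is no surprise.
d = distance_to_fit(23.0)
```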
Rendering at this scale posed a few issues. I chose raytraced shadowing, but kept the simplicity of spotlight lighting by placing the light a great distance away, simulating directional lighting. Beyond that, I changed the camera’s clipping planes, putting the near clipping plane at 5 units and extending (by necessity) the far clipping plane to an absurdly large number, so that features at great distances still show up in the render. Finally, and most importantly for quality, under ray tracing in the render settings I bumped the Trace Bias up to 250 to eliminate rendering artifacts that showed up on the mesa object. Of particular note: for this project I am using RenderMan for Maya as my renderer of choice. Settings for this feature may vary when raytracing in another renderer, such as Mental Ray.
I’ll add some more features in the next entry to help develop scale and depth, but for now, I’ll simply change the tint of the primary light to better reflect a sunset, and lower the angle a little more as well:
And that’s where I’ll leave it for this week. In future weeks, we’ll hopefully continue to see how the decisions I’ve made here, impact future ones down the line.
Alton Brown, of Food Network fame, has made the point repeatedly (and not without reason, I might add) that he loves multipurpose tools and hates single-purpose ones. And this is one of the reasons I love what Alton Brown does. He’s a practical man. Why expend energy and waste resources on something that has only one obscure purpose? I mean, if that purpose is something you’ll find yourself doing really, really often anyway, that’s one thing, but who really needs the Popeil Inside-the-Egg Scrambler? That’s what I have a whisk for. And the whisk helps me when I make pancake and waffle batter as well.
That birail tool I keep bringing up, as it turns out, is good for more than four-sided surfaces. What if you have a surface with no clear ‘corners’ to it? What if you want more than just the outermost edge loop of a shape to be concentric upon itself? What if you have a shape that comes to a point? Muscle groups will often do this, and if you’re modeling a more detailed figure, you really might need that muscle group to be accurately defined as coming to a triangular point. I’ll address both of these situations in today’s entry.
Here’s the basic setup: create a NURBS circle, duplicate it, and scale the copy up or down to fit inside or outside the first, your choice. Selecting a curve point slightly offset from the curve origin, detach and eliminate a small arc from the same area on both circles. Connect the two pairs of endpoints with new EP curves, and snap a few more EP curves spanning between the two circles at whatever points you find will be key in defining shapes later. You should have something that looks like a broken wheel with a few spokes. I’ve made mine so that the two circles are offset in height as well, so it looks more like a cone.
Next, let’s move to the Birail 3+ tool. As we’ve done before, select your U and V polygon resolution (in this case, the U direction is the one that runs around the circumference of the shape, while the V direction runs along its radius), enter the tool, select the spokes, hit Enter, select the rails, hit Enter again, and you get your basic shape.
Note: leave history on here if you want to use the curves to continue smoothly shaping the polygons before your final edits. Just remember to leave the CVs that touch other curves alone; move them, and the mesh will disappear, because with history active the curves that define it no longer intersect.
Now, there are numerous ways to fill the gap and the hole in the middle. In my case, I used the Bridge tool in the Edit Mesh menu, plus a combination of the Append to Polygon tool and the Split Polygon tool from the same menu, to create a radial edge flow in one direction and the desired continuous flow in the circumferential direction.
The result looks remarkably like a half-dome you could just as easily produce by taking a stock sphere and cutting it in half. True, but again, here you can conform the final boundary your shape will fill before you create the mesh. I can’t emphasize that enough. The push-pull tweak is all but eliminated. Stock shapes are good for making stock shapes, and little else, I’ve discovered.
Now on to the triangular shape. We’ll define our boundary with only three curves this time. I’ve arranged them thus:
Almost like the shape of a deltoid, right? What we’re going to attempt here is to create a situation in which the polys converge to the point on the left side of the shape in the image above and round out in the other two corners. To control the direction, make sure that at the vertex where you want the polys to converge, the two respective curves’ directions are aimed at each other (either both curves end at that point, or both curves start there: head-to-head or tail-to-tail, never head-to-tail).
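If you think of each curve as an ordered list of points, the head-to-head / tail-to-tail rule is easy to state precisely. A small sketch in plain Python (a helper of my own, just to illustrate the rule; it’s not anything Maya exposes):

```python
def directions_converge(curve_a, curve_b, tol=1e-6):
    """True if the two curves meet head-to-head (both END at the
    shared point) or tail-to-tail (both START there). A head-to-tail
    meeting, where one curve ends where the other begins, fails."""
    def same_point(p, q):
        return all(abs(a - b) <= tol for a, b in zip(p, q))
    return (same_point(curve_a[-1], curve_b[-1]) or
            same_point(curve_a[0], curve_b[0]))

# Two curves both ending at the convergence point (1, 0, 0) pass;
# reverse one of them and the pairing becomes head-to-tail and fails.
a = [(0.0, 1.0, 0.0), (1.0, 0.0, 0.0)]
b = [(0.0, -1.0, 0.0), (1.0, 0.0, 0.0)]
```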
Using the Birail 1 tool, go about your business as usual, defining the curve that’s ‘away’ from the convergence point first, then selecting the two rails that form its path.
There are any number of ways you can now conduct the flow of polys around this kind of object. I ended up with this configuration after only a few adjustments:
Big lesson this week: Get to love versatility, and keep an eye out for things that let you achieve it.
Usually the CG rendering process begins in a modeling program and ends with some post-production work in a program like Photoshop or After Effects, but today I would like to ask: why stop there?
The computer loves to render smoothly graded, precise images, images so clean that they can call attention to themselves for that very reason. So what if we rendered an image for the purpose of processing it and reusing it as a texture image for another render?
Here’s the model you last saw me make two weeks ago:
Now we could just begin the process of re-using this image right away, but I figured out a neat little post-process design method in Illustrator that I liked the look of, so let’s do that first.
In Illustrator, live tracing the image in color gives something that looks like this:

After a brief pit stop in Photoshop to make it truly grayscale and save it out in an image format more easily read by Maya, we can now apply the image as a texture to a new material in several different ways. Such as…
Color, via a ramp texture.
Of course, a lot depends on how well you light your object in the original render, and how you are able to get your shadows to render just to the point where form is clearly outlined. Like in photography, squinting helps to preview what you’re going to get on the other end of this tactic.
One of the projects I remember from an early design class had the goal of building up texture: you had to, well, build marks up over old marks, over and over again, often using different media, creating something with real interest and depth at the end of the run, well removed from that first layer.
Digitally speaking, you can migrate back and forth between different image-editing and rendering programs, expanding the versatility of your end results in much the same way that continually reusing different media on a canvas or board builds greater depth. The post-production process isn’t necessarily the end of the line, unless you choose it to be.
I’m going to build on the last two weeks’ worth of discoveries with a bit of a slide show indicating some variations on this new-found technique as I finished the model’s head. Combined with what we’ve gone through already, it’s starting to form a comprehensive skillset that can efficiently model entire figures. If I can compare it to anything, I’d compare it to custom plate fabrication: you break your character down into a set of plates or shapes, and those plates have to be fabricated to fit the number of polys along each edge. However you arrive at the parameters of each plate, you can then apply the techniques already discussed.
For example, as some of the illustrations below will demonstrate, if you can landmark key points of a surface in three-dimensional space, say, using Locators, you can string curves through them, forming the boundaries you need to proceed.
But what if, for example, the rest of the body you’re going to attach this head to only has 16 polys around the neck, while your head is being modeled at a higher poly count? This isn’t all that unexpected a situation, as the face often is modeled at higher resolution for the sake of accommodating the smaller features of the face as well as providing smoother animation for the subtleties of facial movements. The rest of the body isn’t as nuanced as this, except perhaps for the extremities of the hands and feet. In any case, the gap must be bridged, unless the parts behind clothing aren’t going to be modeled at all (binding multiple objects representing the emergent and visible parts of the model only). I ran into that situation here. I wanted to limit the back left quarter of the neck to a width of four polys. But this is what I had to work with:
If you look carefully, you can see already my thought process. I intend to take the last four polys and spread them out wider, but there isn’t a place for those remaining five polys to go, right?
Now the extra polys have been routed around the base of the neck rather than down the neck into the rest of the body. I did partition out the neck area to receive them too, but that’s a function of the principle above about sub-dividing areas.
I’ve modeled a couple more heads and a hand in the span of time between last week’s entry and today’s using these procedures, and I have yet to run into a situation that they, in conjunction with a little creative thinking, haven’t been able to resolve. The hand I did already had the rest of the model waiting for it, and when I imported it, I was confident it would blend seamlessly with the rest, and it did. That’s not something I can say for all the character modeling work I’ve done thus far, but alas, I didn’t know then what I know now.
But what if you want a surface with a continuous, smooth, single edge loop of polys running around it, rather than a shape bounded by four separate edge loop flows?
That’s it for this week. If there’s more to be mined from this method that I can dig up, you’ll see it shared here. Until then, the subject may wander to other venues again…
As mentioned on my resources page, one of my most significant go-to books for drawing is the Force series, which brilliantly illustrates how, throughout living organisms, there is a rhythm, a path where metaphorical ‘force’ ricochets from one part of the body to the next. Keeping this in mind, I decided to see whether there was a way to more directly translate, or ‘sculpt’, this concept in CGI modeling software (Maya, for the purposes of this entry): starting with drawings produced using this method, then arriving at a model by mimicking those methods with Maya’s standard modeling tools.
Here’s where I started:
Just a very quickly sketched out leg. In retrospect, I could have pushed the forces a lot more, but you can still clearly see them sweep from the front thigh through the knee, into the back of the calf, into the heel.
Most modeling tutorials I’ve seen will either instruct the student to start with stock cylindrical objects and continue to push and pull vertices to match the geometry until you’re ready to peel back your eyelids from the sheer inanity, or create one edge loop at a time, bridging the new one with the previous. It’s an improvement, but a slight one.
I think, however, that I have found a way to more easily construct limbs and bodies for characters relying almost entirely on your sketches in the creation of the geometry rather than hammering stock geometry into your desired form. At the very least, it should minimize the amount of that kind of irritation.
Anyway, here we go…
I’ve imported the two views of the leg into Maya’s Front and Side camera views respectively. In the Perspective view, that looks like this:
Now, in the Side and Front views, I use my choice of curve tools (in this case, a combination of the EP curve and Bezier curve tools) to trace the front and back of the leg in the Side view, and the inside and outside profiles of the leg in the Front view. You’ll notice that they probably won’t line up the way they should, as the centerline of the limb is not necessarily plumb vertical the way the planes you’ve been drawing your curves on are. With a few rotations and translations, you’ll get something that looks like this:
Also shown above are two loops that connect the tops and bottoms of the curves. These are important for the tool I ended up using to create the surfaces. Starting with the line towards the back (the heel side), and with curve snapping on, I used the EP curve tool to make the loops, moving counterclockwise (seen from above) in both. Consistency in creating the top and bottom loops is critical.
Next, I select the Birail 3+ tool from the Edit Curves menu and set it to the following settings:
Using the general tessellation settings here lets you build the leg in four surface wedges that have the same number of polys along each edge and line up closely enough that they can be merged easily in the following steps. Take note here! The number of polygons in the U and V directions will be one less than the number entered in the initial tessellation controls, so if you want your limb to be 30 units long, enter 31 into the U field. You might also want to note at this point how many polygons around you want your limb to be, as this will come in handy when modeling the torso and sizing the opening your limb will connect into. We’ll be doing this four times to bring the limb surface through a full 360 degrees before stitching all four wedges together, so take the number in your V field, subtract one, and multiply by four. That’s the number to remember for attaching these limbs more easily later.
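That bookkeeping is easy to get wrong mid-session, so here it is restated as a couple of lines of plain Python (the helper names are my own; the arithmetic is exactly the rule above):

```python
def u_field_for_length(polys_long):
    """The birail tessellation yields one fewer poly than the field
    value, so a limb 30 polys long needs 31 in the U field."""
    return polys_long + 1

def polys_around_limb(v_field, wedges=4):
    """Each of the four wedges contributes (V - 1) polys around,
    so the stitched limb is (V - 1) * 4 polys in circumference."""
    return (v_field - 1) * wedges

# A V field of 9 gives a limb 32 polys around once all four
# wedges are stitched together.
```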
Starting again from the place where you started your loops, invoke the Birail 3+ tool and follow its instructions, moving counterclockwise as before, one segment at a time, until you’re at the last segment. Caution! Have two-sided lighting turned off here, so you can easily tell whether the normals of the surfaces you’re creating are facing outward the way they should be! Also, between each use of the Birail 3+ tool, make absolutely sure you remember to delete history on the surfaces.
Before you do the last quarter of the leg, Maya needs the direction of the loops to be reversed (otherwise, it will span the paths opposite to the direction we need). I can’t say it enough: make sure you’ve deleted the history on the three surfaces you’ve already created, or reversing the direction of the paths will affect your work, and when I say ‘affect’, I mean ‘make it look screwy’.
All right, once you’ve reversed the direction of the paths (Edit Curves > Reverse Curve Direction), you can finish off the last quarter by repeating the above process, going clockwise this time.
We’re in the home stretch now. If you’re familiar with the basics of Maya, then you know how to combine objects, merge vertices, etc. It also can’t hurt to make sure once more that all the surface normals are facing outward before doing that.
Once that’s done, and with less than 30 seconds worth of smoothing and sculpting with the surface sculpting tools, here’s the final result, lit and rendered.
Now, you can tell this is still somewhat unrefined, but your starting shape is much closer to the end result than it would be using more traditional techniques. All the polys are quads, so they’ll deform quite well when animated, and using this method you can plan out your other limbs and the torso ahead of time more easily than (what personal experience has taught me) freewheeling lets you do.
Look! Look to the right!
Under the Blogroll area you’ll see a new link. This takes you directly to the other blog I maintain here on WordPress, the one I dedicate solely to discussing my photography, the things I’ve learned about it, and the learning I’ve applied towards it.
Take a peek if you’re game. I welcome visitors there as well.
Observation: arts of very different stripes and colors place a very important shared demand on their students: the ability to see things differently than most people consciously perceive them in day-to-day life.
Case in point: the visual arts aren’t the only art I’ve had the pleasure of taking up; among the others is the martial art of Aikido. In both I’ve noticed the same thing happen: the way you react to your environment changes, and you begin to perceive things just a little differently, just enough to accomplish the goal of each efficiently and fluidly.
Aikido depends on taking the momentum of an attacker and turning it against him. But doing this effectively means resisting the instinctive desire to, well, resist. Aikido is not a contest of strength; if you keep the attacker at arm’s length and keep your balance, you’re essentially just waiting for him to give you something useful, and it will happen if you let it. It’s just not the way most of our minds work to simply “go with it”. This is exactly as it was presented to me when I began as a white belt many, many hours ago.
Similarly for most of us working in the visual arts, seeing something as both flat (the piece of paper) and as having depth at the same time (the illusion of drawing) does not come naturally. Anyone can see with their eyes intuitively the illusion of depth and understand what it is they’re seeing as a representation, but understanding how to make it so is another matter entirely, one that requires the duality of flat and deep to be understood on a conscious level, because art is created consciously.
Getting beyond both of these obstacles, and obstacles to understanding in general, has a good deal to do with getting over the idea that we know, by intuition, instinct, and dare I say even our own personal experience, the best way to proceed; there’s always more than one way to skin a cat, as the saying goes.
There’s one other thing that both the visual arts and Aikido share: you can know some things, but never to the point where you can say it’s ever truly ‘enough’.
Keep it humble.
Today I need to create a texture map for a planet to be rendered in Maya, and since I’ve never done that before, I figured I would be remiss if I didn’t at least invite the rest of you along for the ride.
What I was looking for was a way to build my own planet with its own unique land and ocean formations. Luckily, there are a few resources already out there. A little Google-fu led me to a site that mentioned in passing “An Advanced Guide to Planet Creation”, which, as titled, gives a pretty good primer for getting started without being so restrictive as to suggest that it’s the be-all-end-all method. Some of these techniques I already knew from years of playing with Photoshop.
Then I recalled a video I watched last month on some rather unique uses of the new Content-Aware Fill feature in Photoshop, and that’s when the epiphany hit me: with a few satellite images borrowed from NASA’s website, one can bring together the two aspects of what I consider really good creative art, abstract design and a grounding in reality.
But I’m getting ahead of myself a little bit. Let’s start with the following image:
As you can see, I’ve chosen to start with a background that isn’t a topographical reference at all, but a grungy texture from my personal image library. It’s close enough, yet alien enough, to serve as a good starting point.
Then I grabbed one of those NASA satellite topographical photos of the Grand Canyon, put it on its own layer, shrunk it down, and proceeded to use the lasso tool to pick out the general area I wanted to apply the canyon’s texture to, thus:
And within a couple of trial-and-error attempts with the content aware fill, that space was filled with the following pixels:
Now, if I choose to, I can get rid of the original square-cropped canyon part, but that may not be advisable just yet: simply by keeping that selection and using New Layer Via Cut, I can repeat the process over and over again sporadically, each time getting a slightly different generation of canyon texture and shape. To put a cap on each iteration, we just blend the results by altering the layer’s blend mode and opacity.
Even though I started out with a base texture, I didn’t have to. Were I, say, to want to create continents of various shapes and dimensions, all I need do is create a jigsaw puzzle of selection areas; as long as the samples I’ve imported are located somewhere in the same layer, Content-Aware Fill will create a texture in the shape of the selection, for land or sea (though I recommend creating the land portions first and using images of coastlines for the oceans, so the fill command has a point of reference for the fictional coastline). That’s the long of it. The short of it is this:
Followed by this:
Maintaining open horizons: The key to finding answers isn’t to always look in the box marked “answers”.
I’ve been working with CGI rendering methods for approximately two years, creating a handful of characters, and each time the modeling approach has been slightly different, though almost all of them started out similarly. I’ve come to accept this as part of the territory when learning anything new and potentially complicated. Textbook examples for modeling something like, say, a face or a head will usually take you only far enough to get your feet wet, unless you find yourself creating the same kind of character each and every time.
My ‘book of choice’ at the time illustrated a method for creating a face by creating the mouth, nose, ears and eyes as separate entities and connecting them afterwards. This works fine as long as you want to model a realistic human face. But what if you wanted to create faces like these?
Well, suddenly that method doesn’t seem so sufficient. How do I attach the mouth structure to the nose structure when they’re integrated by design? The book illustrated how to craft each one separately, not together. Having nothing else to go on at the time, I muddled through by pushing and pulling the mesh into the ‘right’ shape and produced faces much like the one in the top image. And while nothing is obviously wrong at first glance, when it comes time to move the mouth, the deformations become… difficult to manage. I don’t want to get bogged down in the details here, but essentially it has to do with the flow of edges and polygons as they wrap around the model’s surface. Another problem was that the character I was modeling was based on my comic strip character, which in turn had been composed mainly of soft shapes lacking a certain amount of defined structure in the face and head.
Time travel to this past July, when I started thinking about creating a new character (bottom image). In the period between the two, I ended up viewing some particularly helpful tutorials. Hand-drawn art tutorials. One in particular addressed ways to stay on model by clearly envisioning the planes of the face, consequently adding the structure my previous work had so sorely lacked. It wasn’t until I started applying that knowledge in the hand-drawn design of the character that the thought occurred that perhaps one could model the head and face as a series of very low-res planes (basically one polygon per plane), merging them and then going into the requisite detail. You can see the effect most clearly in the mesh between the bridge of the nose and the corner of the mouth. As it turns out, using planes rather than parts provided a level of control over that tricky area of the face that develops in these kinds of muzzled characters, as it ends up dictating how well the face will wrinkle, fold, and deform when it comes time to animate.
There are broader implications here which will become a recurring theme on this blog: Finding solutions is not unlike good design. The answers do not necessarily come from looking in the obvious places.