Archive for February, 2012

Location and large-scale set design, a novice’s journey.

I have several ideas on various burners at any given moment.  While it’s great to have that many ideas you find worthy of your attention, organizing all of them is a task that makes herding cats look like a promising enterprise.
To digress, though: I have been envisioning the kernel of a story that has only recently begun to take shape.  What has developed most recently is the location and the ‘look’ of the setting where I intend to place the story.  It came together relatively quickly, with the elements and how they’d be expressed falling into place.

It’s a sprawling, wide-open setting, inspired by my own visits to the desert Southwest, but amped up considerably.  Since I have the software, I can conceptualize my ideas in virtual, three-dimensional space.  The scale of the setting, however, is the largest I’ve ever attempted.  A setting that spans literally tens of miles is a far cry from a small room, and while I’m well aware that I could simply scale the model down, I will always wonder in the back of my head whether my virtual camera is capturing the image as it would appear if the place actually existed in the real world.  Philosophically, the difference may be moot to some artists.  It’s a fantasy setting, so why not design it however you want, scale included?  The CGI world is entirely your creation, so control every aspect you can, right?

But instead, I’ve chosen to use literal scaling as a willingly accepted artistic restraint.  It imposes certain restrictions on me, but it simultaneously removes a variable my mind would otherwise keep revisiting (wondering what something should look like versus what it will look like), because I’ve set that principle as a constant from the outset.  Something that sounds restrictive is actually liberating.  I can still use real-world points of reference without having to be absurdly literal about it.  Google and Wikipedia are ready sources of proportion and scale for real-life features, so I can ballpark things without fussing over how “right” they should or shouldn’t look.  If things need to be adjusted and fine-tuned later, they can be, like scaling individual elements for compositional purposes.

The main feature of the scene is a massive desert mesa, 23 miles long and 17 miles wide, with a secondary, smaller but higher plateau that will play host to a large metropolis, while the larger, lower plateau is more agricultural in nature.  All of this is surrounded by dry desert landscape.  Like an island in a dry ocean.  I can’t help but notice that real-life geological features like Uluru (formerly Ayers Rock) in Australia come to mind as well.  That kind of feature, known as an inselberg, combined with a traditional mesa at a scale closer to that of features found on the ocean floor (which can be far more massive), probably describes it best.  And having that in mind will help later on, as real pictures of Uluru and its surroundings, as well as some of, say, Monument Valley, will help inform artistic choices made down the line.

I started with the mesa itself.  Using references from Google Maps and satellite images, I came up with a general shape, modified it slightly for my own purposes, and modeled it roughly using some of the techniques outlined in previous entries.  I added two more basic elements to the setting.  To help establish a sense of scale, I designed one building as a centerpiece for the metropolis (about 1.5x the height of the current record-holding Burj Khalifa, the one featured in last year’s Mission: Impossible film), and then I added a ground surface cut from a sphere with the mean radius of the Earth.  At these scales, the curvature of the Earth might actually play a role, and it took little or no extra time to find out how to account for that detail.
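In rough script form, that blocking amounts to something like the sketch below.  It's a minimal maya.cmds example that assumes a meter-based scene; the object names and dimensions are placeholder values rather than my actual build.

    # A minimal maya.cmds sketch of the scale blocking described above.
    # Names, the working unit (assumed to be meters), and dimensions are placeholders.
    import maya.cmds as cmds

    EARTH_RADIUS = 6371000.0        # mean radius of the Earth, in meters
    TOWER_HEIGHT = 828.0 * 1.5      # roughly 1.5x the ~828 m Burj Khalifa

    # Ground with real curvature: a sphere at Earth scale, dropped so only
    # the very top of the curve sits at the origin.
    ground = cmds.polySphere(name="earthSurface", radius=EARTH_RADIUS,
                             subdivisionsAxis=200, subdivisionsHeight=200)[0]
    cmds.move(0, -EARTH_RADIUS, 0, ground)

    # Centerpiece tower as a simple stand-in box, purely for a sense of scale.
    tower = cmds.polyCube(name="centerpieceTower", width=80, depth=80,
                          height=TOWER_HEIGHT)[0]
    cmds.move(0, TOWER_HEIGHT / 2.0, 0, tower)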
Here’s the result, visually:

 

That tiny gray speck on the upper mesa?  That’s the Burj Khalifa-sized structure.  The rendering camera (the equivalent of a standard 35mm lens) had to be placed a staggering 30 miles away from the subject to fit it all in the frame.  That’s the scale.
Rendering at this scale posed a few issues.  I chose to use raytraced shadowing, but kept the simplicity of spotlight lighting by placing the light a great distance away, simulating directional lighting.  Beyond that, I changed the camera’s clipping planes, putting the near clipping plane at 5 units and extending (by necessity) the far clipping plane to an absurdly large number, to make sure that features at great distances still show up in the render.  Finally, and most important for quality, under raytracing in the render settings I bumped the Trace Bias up to 250 to eliminate rendering artifacts that showed up on the mesa object.  Of particular note: for this project I am using RenderMan for Maya as my renderer of choice.  The equivalent setting may vary when raytracing in another renderer, such as mental ray.
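For anyone curious, here's roughly what that setup looks like as a maya.cmds sketch.  The node names, distances, and meter-based units are placeholders, and the Trace Bias tweak still happens in the RenderMan for Maya render settings rather than in the script.

    # Rough camera and light setup for the extreme-distance render.
    # Node names and distances are placeholders (meter units assumed).
    import maya.cmds as cmds

    cam, cam_shape = cmds.camera(name="wideShotCam", focalLength=35)
    cmds.setAttr(cam_shape + ".nearClipPlane", 5)
    cmds.setAttr(cam_shape + ".farClipPlane", 1e7)   # absurdly far, so distant geometry isn't clipped
    cmds.move(0, 2500, 48000, cam)                   # roughly 30 miles back from the subject

    # A spotlight pushed far away to fake a directional sun, with raytraced shadows on.
    sun_shape = cmds.spotLight(coneAngle=60)
    sun = cmds.listRelatives(sun_shape, parent=True)[0]
    cmds.move(-60000, 40000, 60000, sun)
    cmds.rotate(-25, -45, 0, sun)                    # aim it back down toward the mesa
    cmds.setAttr(sun_shape + ".useRayTraceShadows", 1)

    # Trace Bias (~250 here) is raised in the RenderMan for Maya render settings UI.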

I’ll add some more features in the next entry to help develop scale and depth, but for now, I’ll simply change the tint of the primary light to better reflect a sunset, and lower the angle a little more as well:

 

And that’s where I’ll leave it for this week.  In future weeks, we’ll hopefully continue to see how the decisions I’ve made here impact future ones down the line.


That’s deep.

A while back, I started occasionally rendering out images in Maya with a z-depth version intended for adding a depth-of-field effect in post-production with Photoshop.  There are a couple of different options that use the black/white/gray information in an alpha channel to determine how far from the ‘camera plane’ the plane of sharpest focus sits.  One is the filter native to Photoshop, under Filter > Blur > Lens Blur, and the other is a proprietary add-on I discovered, made by Frischluft, called Lenscare.  They each have their strengths and weaknesses, but for the purposes of this entry, I’ll stick with the native Photoshop version.
I started wondering what it would look like to apply the filter to hand-drawn art, or to photographs, as an additional method of drawing the viewer’s eye to where you want it to be.

 

Here’s a drawing I did last year; it’s oriented in a turned position, so it seemed appropriate to use for a test of this idea.  I selected the white space around the horse with the Magic Wand tool, refined the edges of the selection, inverted it so it highlighted the positive rather than the negative space, and then moved over to the Channels palette, where I created a new alpha channel.

So, knowing that the Lens Blur filter uses gradations of black and white to delineate depth, with white being near and black being far, let’s make a quick, abbreviated pass: create a black/white gradient from left to right, so that the horse’s head and forelegs are brought ‘nearer’ to the viewer.

Now, going back to the RGB image in the Layers palette, load the newly created alpha channel as your selection and run the Lens Blur filter.  The following window pops up, allowing you to adjust the focal distance and the amount of blur.

 

And that’s pretty much it.  With a little more effort applying the lights and darks in the alpha channel, an even better approximation could be reached.  Or a diorama look could be created by assigning solid shapes in varying shades of gray to the different objects in the work.
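And for anyone who prefers scripting to dialogs, the same basic idea can be roughed out outside of Photoshop.  The sketch below uses Pillow and NumPy and is only a crude stand-in for Lens Blur: a left-to-right depth ramp decides where a single Gaussian blur shows through, and the file names are placeholders.

    # Crude scripted stand-in for the depth-gradient idea (not a true lens blur).
    import numpy as np
    from PIL import Image, ImageFilter

    img = Image.open("horse_drawing.png").convert("RGB")   # placeholder file name
    w, h = img.size

    # Depth map: white (near, kept sharp) at the left, black (far, blurred) at the right.
    ramp = np.tile(np.linspace(255, 0, w), (h, 1)).astype(np.uint8)
    depth = Image.fromarray(ramp, mode="L")

    # Blend a blurred copy with the sharp original, weighted by the depth map:
    # where the map is white, the sharp original wins.
    blurred = img.filter(ImageFilter.GaussianBlur(radius=8))
    result = Image.composite(img, blurred, depth)
    result.save("horse_depth_blur.png")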


Tell me if you’ve seen this one before…

Oh joy, it’s time to rig faces in Maya.

It’s not something I look forward to, and that probably has something to do with the fact that the first time I went through these motions, I took a very literal path from point A to point B, using wire tools and cluster deformers to achieve facial movement.

But as with most subjects on this blog, I’m not satisfied with that method; it’s tedious and time-consuming.  So this time I’m going to use blend shapes, which can be equally torturous on their own, but I’m introducing an intermediary step that links the wire deformers to the blend shapes using lattice and cluster deformers to get there.

Here’s my assumed start position:

 

We’re going to start with the mouth and create a wire deformer for it, so I select the appropriate polygon edges around the mouth and use Modify > Convert > Polygon Edges to Curve to create the controlling wire.
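Scripted with maya.cmds, that conversion boils down to something like this; the mesh name and edge indices are placeholders for whatever ring of edges surrounds your mouth.

    # Convert the selected edge loop around the mouth into a curve.
    import maya.cmds as cmds

    cmds.select("faceMesh.e[120:147]")              # placeholder edge loop around the mouth
    result = cmds.polyToCurve(form=2, degree=3)     # Modify > Convert > Polygon Edges to Curve
    mouth_curve = cmds.rename(result[0], "mouthCurve")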

Like this...

 

We then smooth that curve (Edit Curves > Smooth Curve) and make a duplicate of it, which we move off to the side.
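In script form, the smoothing and the offset duplicate are roughly the following; the names and the offset distance are placeholders.

    # Smooth the extracted curve, then set aside a duplicate as the future blend shape target.
    import maya.cmds as cmds

    cmds.smoothCurve("mouthCurve.cv[*]", smoothness=10)            # Edit Curves > Smooth Curve
    target_curve = cmds.duplicate("mouthCurve", name="mouthCurve_target")[0]
    cmds.move(10, 0, 0, target_curve, relative=True)               # park it off to the side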

 

Go ahead and create a new wire deformer from the Create Deformers menu, using the original curve and the object with the face you intend to animate.  Now, I know I said that we’re using other deformers as intermediaries to get to the blend shape deformers, but we’re actually going to create the blend shape node now, using the two existing curves as its basis.
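As a rough maya.cmds sketch, those two steps look something like this, with the names carried over from the snippets above as placeholders.

    # Wire the face to the original curve, then make the offset duplicate a blend
    # shape target of that same curve.
    import maya.cmds as cmds

    cmds.wire("faceMesh", wire="mouthCurve", name="mouthWire")
    blend_node = cmds.blendShape("mouthCurve_target", "mouthCurve", name="mouthBlends")[0]

    # Sliding the target's weight now pulls the wire curve (and the face along with it).
    cmds.setAttr(blend_node + ".mouthCurve_target", 1.0)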

 

 

Hey, they're linked, just as they should be, and driven by the Blend Shape node, which is what we're looking for in terms of general behavior.

 

This is good, but the curve has so many control points that adjusting them for each shape would be almost as much of a pain as adjusting verts by hand on the model.  We could weed them out, but the fit would be sacrificed, not to mention that we’d have to delete the same CVs on both curves, in the same order, to keep the blend shape working as intended.
Rather than get bogged down in all that, create a lattice deformer around the top and bottom CVs respectively.  For demonstration purposes, I’ve done just the top set here for clarity.


The advantage of lattice deformers is that they take high-density curves and polys and spread the deformation smoothly over a larger area.  We can now control the lattice by manipulating clusters attached to its lattice points, as seen in the second of the two images above.
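Sketched out in maya.cmds, with placeholder names and CV ranges, the lattice-plus-cluster setup is roughly:

    # Lattice the upper set of CVs on the target curve, then drop a cluster on the
    # lattice points so a single handle drives them.
    import maya.cmds as cmds

    cmds.select("mouthCurve_target.cv[0:7]")                       # placeholder "top lip" CVs
    ffd_node, lattice, lattice_base = cmds.lattice(divisions=(4, 2, 2), objectCentered=True)

    cmds.select(lattice + ".pt[*][*][*]")                          # all lattice points
    cluster_node, cluster_handle = cmds.cluster(name="upperLipCluster")

    # Moving the cluster handle now deforms the curve smoothly through the lattice.
    cmds.move(0, 1.5, 0, cluster_handle, relative=True)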

 

Here's a sample of manipulating the cluster as a control for deformation.

 

Let’s say this is the expression we want for our next blend shape.  Duplicate the secondary curve in its deformed shape and add that as your next blend shape target.  A new target with a slider shows up in the Blend Shape window, and sliding it reproduces exactly the deformation you got a moment ago by manipulating the cluster.  Another benefit of the cluster deformer is the ability to reset its position back to ‘zero’ over and over again, so you can keep creating new blend shape targets the same way.
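Here's roughly what that capture-and-reset cycle looks like as a script; the names follow the earlier placeholder sketches.

    # Freeze the current pose as a new blend shape target, then zero the cluster
    # so it can be reused for the next expression.
    import maya.cmds as cmds

    new_target = cmds.duplicate("mouthCurve_target", name="mouthSmile")[0]
    cmds.blendShape("mouthBlends", edit=True,
                    target=("mouthCurve", 1, new_target, 1.0))     # add as target index 1

    # Reset the cluster handle back to 'zero' so the next shape starts clean.
    cmds.setAttr("upperLipClusterHandle.translate", 0, 0, 0, type="double3")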

 

Lastly, it should be noted that blend shape deformer nodes, as long as they are defined as “Local” when created, can be attached directly under the character’s or skeleton’s node without having to account for double transformations, a tedious step involved in most other types of animation control rigs.  Just remember to parent both the wire and the baseWire shapes to the appropriate part of the anatomy and/or rig, and the blend shape should still work as intended.
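For reference, the “Local” option corresponds to the blendShape command's origin flag; here's a rough sketch of that, with placeholder names for the rig node and the baseWire curve.

    # Create the blend shape with a local origin, then keep the wire curves with the rig.
    import maya.cmds as cmds

    cmds.blendShape("mouthCurve_target", "mouthCurve", name="mouthBlends", origin="local")
    cmds.parent("mouthCurve", "mouthCurveBaseWire", "head_ctrl")   # placeholder rig node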

 
