Posts Tagged CGI

Location and large-scale set design, a novice’s journey.

I have several ideas on various burners at any given moment. While it's great to have several ideas you find worthy of your attention, organizing them all is a task that makes herding cats look like a promising enterprise.
To digress, though: I have been envisioning the kernel of a story that has just recently begun to take shape. What has developed most recently is the location and the 'look' of the place where I intend to set the story. It came together relatively quickly, with the elements and how they'd be expressed falling into place.

It's a sprawling, wide-open setting, inspired by my own visits to the desert Southwest, but amped up on steroids. Since I have the software, I can conceptualize my ideas in virtual, three-dimensional space. The scale of the setting, however, is the largest I've ever attempted. A setting that spans literally tens of miles is a far cry from a small room, and while I'm well aware that I could simply scale the model down, I would always wonder in the back of my head whether my virtual camera was capturing the image as it would appear if the place actually existed in the real world. Philosophically, the difference may be moot to some artists. It's a fantasy setting, so why not design it however you want, scale included? The CGI world is entirely of your creation, so control every aspect you can, right?

But instead, I've chosen to use literal scaling as a willingly accepted artistic constraint. It will impose certain restrictions on me, but it simultaneously removes a variable my mind would otherwise continually revisit, wondering what something should look like vs. what it will look like, because I've set a principle as a constant from the outset. Something that sounds restrictive is actually liberating. I can still use real-world points of reference without being absurdly literal. Google and Wikipedia are ready sources of proportion and scale for real-life features, so I can ballpark things without fussing over how "right" they should or shouldn't look. If things need to be adjusted and fine-tuned later, they can be, like scaling individual elements for compositional purposes.

The main feature of the scene is a massive desert mesa, 23 miles long and 17 miles wide, with a secondary, smaller but further elevated plateau that will play host to a large metropolis, while the larger, lower plateau is more agricultural in nature. All of this is surrounded by dry desert landscape, like an island in a dry ocean. I can't help but notice that real-life geological features come to mind, like Uluru (formerly Ayers Rock) in Australia. That kind of feature, known as an inselberg, combined with a traditional mesa at a scale closer to features found on the ocean floor (which can be far more massive), probably describes it best. Having that in mind will help later on, as real photos of Uluru and its surroundings, along with others of, say, Monument Valley, will help inform artistic choices made down the line.

I started with the mesa itself. Using references from Google Maps and satellite images, I came up with a general shape, modified it slightly for my own purposes, and modeled it roughly using some of the techniques outlined in previous entries. I then added two more basic elements to the setting. To help establish a sense of scale, I designed one building as a centerpiece for the metropolis, about 1.5x the height of the current world-record-holding Burj Khalifa (the one featured in last year's Mission: Impossible film). I also added a ground surface cut down from a sphere with the mean radius of the Earth; at these scales, the curvature of the Earth might actually play a role, and it took little extra time to find out how to specify that detail.
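For anyone curious how the curvature piece translates into script, here's a minimal sketch in Python with maya.cmds; the object names and dimensions are illustrative, and I'm assuming meters as the working unit:

```python
import maya.cmds as cmds

# A sketch of the scale test, not the exact scene script.
cmds.currentUnit(linear='m')

# Ground surface cut from a sphere at the Earth's mean radius (~6,371 km),
# so the horizon actually curves at tens-of-miles scale.
earth = cmds.polySphere(name='earthGround', radius=6371000,
                        subdivisionsX=200, subdivisionsY=200)[0]
cmds.move(0, -6371000, 0, earth)  # sink it so the top of the sphere sits at y = 0

# Stand-in for the centerpiece tower: the Burj Khalifa is roughly 830 m tall,
# so 1.5x puts this block at roughly 1,245 m.
tower = cmds.polyCube(name='centerpiece', width=150, depth=150, height=1245)[0]
cmds.move(0, 1245 / 2.0, 0, tower)  # rest the base on the ground
```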
Here’s the result, visually:

 

That tiny gray speck on the upper mesa? That's the Burj Khalifa-sized structure. The rendering camera (the equivalent of a standard 35mm) had to be placed a staggering 30 miles away from the subject to fit it all in the frame. That's the scale.
Rendering at this scale posed a few issues. I chose raytraced shadows, but kept the simplicity of spotlight lighting by placing the light a great distance away, simulating a directional light. Beyond that, I changed the camera's clipping planes, putting the near clipping plane at 5 units and extending (by necessity) the far clipping plane to an absurdly large number, to make sure that features at great distances still show up in the render. Finally, and most importantly for quality, under raytracing in the render settings I bumped the Trace Bias up to 250 to eliminate rendering artifacts that showed up on the mesa object. Of particular note: for this project I am using RenderMan for Maya as my renderer of choice, and the settings for this feature may vary when raytracing in another renderer, such as Mental Ray.
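In script form, the camera and light setup might look something like this (a maya.cmds sketch with hypothetical names and meter distances; the Trace Bias tweak lives in RenderMan's own render settings, so I've left it out):

```python
import maya.cmds as cmds

# 35mm-equivalent camera parked roughly 30 miles (~48 km) from the mesa,
# with the near clip at 5 units and the far clip pushed absurdly far out.
cam, cam_shape = cmds.camera(name='wideCam', focalLength=35,
                             nearClipPlane=5, farClipPlane=10000000)
cmds.move(0, 2000, 48000, cam)

# One spotlight placed a great distance away to fake directional sunlight,
# with raytraced shadows switched on at the shape node.
sun_shape = cmds.spotLight(name='sunLight', intensity=1.2, coneAngle=5)
sun = cmds.listRelatives(sun_shape, parent=True)[0]
cmds.move(0, 300000, 600000, sun)
cmds.setAttr(sun_shape + '.useRayTraceShadows', 1)
```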

I'll add some more features in the next entry to help develop scale and depth, but for now, I'll simply change the tint of the primary light to better reflect a sunset and lower its angle a little more as well. The result is rendered below.
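Scripted, that tweak is only a couple of lines (again assuming the hypothetical sunLight from the sketch above):

```python
import maya.cmds as cmds

# Warm the light's color toward sunset and drop its elevation a little.
cmds.setAttr('sunLightShape.color', 1.0, 0.62, 0.38, type='double3')
cmds.rotate(-8, 0, 0, 'sunLight', relative=True)  # tip the beam toward the horizon
```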

 

And that's where I'll leave it for this week. In future weeks, we'll hopefully continue to see how the decisions I've made here impact the ones down the line.



Tell me if you’ve seen this one before…

Oh joy, it’s time to rig faces in Maya.

It's not something I look forward to, and that probably has something to do with the fact that the first time I went through these motions, I took a very literal path from point A to point B, using wire tools and cluster deformers to achieve facial movement.

But as with most subjects on this blog, I'm not satisfied with that method; it's tedious and time-consuming. This time I'm going to use blend shapes, which can be equally torturous, but I'm introducing an intermediary step that links wire deformers to the blend shapes, using lattice and cluster deformers to get there.

Here’s my assumed start position:

 

We're going to start with the mouth and create a wire deformer for it, so I select the appropriate poly edges around the mouth and use Modify > Convert > Polygon Edges to Curve to create the controlling wire.
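In script terms, that conversion is a one-liner once the edges are selected (a sketch; the mesh name and edge indices are hypothetical):

```python
import maya.cmds as cmds

# Select the edge loop around the mouth, then convert it to a NURBS curve;
# this is the script equivalent of Modify > Convert > Polygon Edges to Curve.
cmds.select('faceMesh.e[120:135]')
mouth_curve = cmds.polyToCurve(form=2, degree=3)[0]  # form=2 lets Maya best-guess
```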

Like this...

 

Which we proceed to smooth (Edit Curves > Smooth Curve) and duplicate, moving the copy off to the side.
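Something like this, in maya.cmds (curve names are hypothetical):

```python
import maya.cmds as cmds

# Smooth the converted curve's CVs (the script version of Edit Curves > Smooth Curve)...
cmds.smoothCurve('mouthCurve.cv[*]', smoothness=10)

# ...then duplicate it and slide the copy aside; the copy will drive the
# original through the blend shape node we create next.
target = cmds.duplicate('mouthCurve', name='mouthCurveTarget')[0]
cmds.move(10, 0, 0, target, relative=True)
```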

 

Go ahead and create a new wire deformer from the Create Deformers menu, using the original curve and the object whose face you intend to animate. Now, I know I said we're using other deformers as an intermediary to get to the blend shape deformers, but we're actually going to create the blend shape node now, using the two existing curves as its basis.
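A sketch of both steps, assuming the curve names from above and a face mesh called faceMesh:

```python
import maya.cmds as cmds

# Wire-deform the face with the original curve...
cmds.wire('faceMesh', wire='mouthCurve', name='mouthWire',
          dropoffDistance=(0, 1.0))

# ...and create the blend shape node with the offset duplicate as the
# target driving the original curve.
blend = cmds.blendShape('mouthCurveTarget', 'mouthCurve', name='mouthBlend')[0]
```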

 

 

Hey, they're linked, just like they should be, and scaled by the Blend shape node, which is what we're looking for in terms of general behavior.

 

This is good, but the curve has so many control points that adjusting them for each shape would be almost as much of a pain as adjusting verts by hand on the model. We could weed them out, but the fit would be sacrificed, not to mention that we'd have to delete the same CVs on both curves in the same order to keep the blend shape working as intended.
Rather than get bogged down in all that, create a lattice deformer around the top and bottom CVs respectively. For demonstration purposes, I've only done the top set here, for clarity.


The advantage of lattice deformers is that they take high-density curves and polys and smooth the deformation over a larger area. We can now control the lattice by manipulating clusters attached to its points, as seen in the second of the two images above.
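Scripted, the lattice-plus-cluster setup might look like this (the CV and lattice-point ranges are hypothetical):

```python
import maya.cmds as cmds

# Lattice around the upper CVs of the duplicate curve; the lattice spreads
# deformation smoothly across a dense run of control points.
ffd, lat, base = cmds.lattice('mouthCurveTarget.cv[0:8]',
                              divisions=(4, 2, 2), objectCentered=True)

# Clusters on groups of lattice points become the actual animation handles
# (Maya names the handle '<name>Handle').
cluster_node, handle = cmds.cluster(lat + '.pt[0][0:1][0:1]',
                                    name='cornerCluster')
```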

 

Here's a sample manipulating the Cluster as a control for deformation.

 

Let's say this is the expression we want for our next blend shape. Duplicate the secondary curve in its deformed state and add that as your next blend shape target. A new target with a slider shows up in the Blend Shape window, and sliding it reproduces the exact deformation you got a moment ago by manipulating the cluster. Another benefit of the cluster deformer is the ability to reset its position back to 'zero' again and again, so you can repeatedly sculpt and capture new blend shape targets, and so on and so forth.
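The capture-and-reset cycle, sketched in maya.cmds (names carried over from the earlier hypothetical sketches):

```python
import maya.cmds as cmds

# Freeze the deformed duplicate as a new blend shape target...
new_target = cmds.duplicate('mouthCurveTarget', name='smileTarget')[0]
cmds.blendShape('mouthBlend', edit=True,
                target=('mouthCurve', 1, new_target, 1.0))  # 1 = next target index

# ...then zero the cluster handle so it's ready to sculpt the next shape.
cmds.setAttr('cornerClusterHandle.translate', 0, 0, 0, type='double3')
```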

 

Lastly, it should be noted that blend shape deformer nodes, as long as they are defined as "Local" when created, are free to be attached directly under the character or skeleton's node without having to account for double transformations, a tedious step involved in most other types of animation control rigs. Just remember to parent both the wire and the baseWire shapes to the appropriate part of the anatomy and/or rig, and the blend shape should still work as intended.
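In script form, the relevant pieces are the blend shape's origin flag and a plain parent call (joint and node names are hypothetical; Maya names the wire's base curve '<curve>BaseWire'):

```python
import maya.cmds as cmds

# Create the blend shape with a local origin so it can sit under the rig
# without double transformations...
cmds.blendShape('mouthCurveTarget', 'mouthCurve',
                name='mouthBlend', origin='local')

# ...and parent both the wire curve and its baseWire under the head joint.
cmds.parent('mouthCurve', 'mouthCurveBaseWire', 'head_jnt')
```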

 



Making the rounds, and coming to the point.

Alton Brown, of Food Network fame, has repeatedly made the point (and not without reason, I might add) that he loves multipurpose tools and hates single-purpose ones. And this is one of the reasons I love what Alton Brown does. He's a practical man. Why expend energy and waste your resources on something that serves only one obscure purpose? I mean, if it's something you'll find yourself doing really, really often anyway, that's one thing, but who really needs the Popeil Inside-the-Egg Scrambler? That's what I have a whisk for. And the whisk also helps me when I make pancake and waffle batter.

That bi-rail tool I keep bringing up is, as it turns out, good for more than four-sided surfaces. What if you have a surface with no clear 'corners'? What if you want more than just the outermost edge loop of a shape to be concentric upon itself? What if you have a shape that comes to a point? Muscle groups will often do this, and if you're modeling a more detailed figure, you might really need that muscle group to be accurately defined as coming to a triangular point. I'll address both of these situations in today's entry.

Here's the basic setup: create a NURBS circle, duplicate it, and scale the copy up or down to fit inside or outside the first, your choice. Selecting a curve point slightly offset from the curve origin, detach and eliminate a small arc from the same area on both circles. Connect the two pairs of endpoints with new EP curves, and snap a few more EP curves spanning the gap between the circles at whatever points you find will be key in defining shapes later. You should have something that looks like a broken wheel with a few spokes. I've made mine so that the two circles are offset in height as well, so it looks more like a cone.
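Here's a rough maya.cmds version of that setup. Instead of detaching an arc interactively, this sketch leans on the circle command's sweep flag to leave the gap open, and the spoke placement is only schematic:

```python
import maya.cmds as cmds

# Two concentric NURBS circles, each left open by sweeping less than 360
# degrees (a stand-in for detaching and deleting a small arc by hand).
outer = cmds.circle(name='outerRim', radius=10, sweep=350,
                    sections=16, normal=(0, 1, 0))[0]
inner = cmds.circle(name='innerRim', radius=4, sweep=350,
                    sections=16, normal=(0, 1, 0))[0]
cmds.move(0, 3, 0, inner)  # offset the inner circle in height for the cone look

# Straight "spokes" snapped between matching CVs on the two rims.
for i in (0, 4, 8, 12):
    p1 = cmds.pointPosition(outer + '.cv[%d]' % i)
    p2 = cmds.pointPosition(inner + '.cv[%d]' % i)
    cmds.curve(degree=1, point=[p1, p2], name='spoke%d' % i)
```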

Next, let's move to the Birail 3+ tool. As we've done before, set your U and V polygon resolution (in this case, the U direction is the one that runs around the shape, while the V direction runs along the radius), enter the tool, select the spokes, hit Enter, select the rails, hit Enter again, and you get your basic shape.
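The command-line equivalent, as best I can sketch it (I believe the profile curves are listed first and the two rails last, and the default result is a NURBS surface that still needs converting to polys):

```python
import maya.cmds as cmds

# Birail 3+: sweep the spoke profiles along the two rim curves.
surf = cmds.multiProfileBirailSurface('spoke0', 'spoke4', 'spoke8', 'spoke12',
                                      'outerRim', 'innerRim', ch=True)[0]

# Tessellate to quads; adjust the conversion options to hit the U/V
# resolution you picked in the tool settings.
poly = cmds.nurbsToPoly(surf, polygonType=1)[0]
```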

Note: leave history on here if you want to use the curves to continue smoothly shaping the polygons before making your final edits. Just remember to leave the CVs that touch other curves alone, or the mesh will disappear; because you're using history, the surface can't be rebuilt once the curves that define it no longer intersect.


Now, there are numerous ways to fill the gap and the hole in the middle. In my case, I used the Bridge tool in the Edit Mesh menu, plus a combination of the Append to Polygon and Split Polygon tools from the same menu, to create a radial edge flow in one direction and the desired continuous flow in the circumferential direction.
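The bridge step, for reference, is a single call once you know the two runs of border edges (the mesh name and edge indices here are hypothetical):

```python
import maya.cmds as cmds

# Bridge the open gap between two runs of border edges on the converted mesh.
cmds.polyBridgeEdge('wheelMesh.e[3:6]', 'wheelMesh.e[40:43]', divisions=0)
```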

You may or may not have to introduce a new string of edges radiating out from the center in order for the pattern you want to work out evenly, but it likely won't take more than one.

The result here looks remarkably like a half-dome you could just as easily produce by taking a stock sphere and cutting it in half. True, but again, here you can conform the final boundary that your shape will fill before you create the mesh. I can't emphasize that enough: the push-pull-tweak stage is all but eliminated. Stock shapes are good for making stock shapes, and little else, I've discovered.

Now on to the triangular shape.  We’ll define our boundary with only three curves this time.  I’ve arranged them thus:

Almost like the shape of a deltoid, right? What we're going to attempt here is to create a situation in which the polys converge to the point on the left side of the shape in the image above and round out in the other two corners. To control the direction, make sure the vertex where you want the polys to converge has its two curves' directions aimed at each other (either both curves end at that point, or both start there: head to head or tail to tail, never head-to-tail).

Using the Birail 1 tool, go about your business as usual, defining the profile curve that's 'away' from the convergence point first, then selecting the two rails that form its path.

Be aware that the convergence point only appears to end in one vertex; there are actually several stacked there, and they should be merged before continuing.
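Sketched out, the whole triangular pass looks roughly like this (curve names are hypothetical; reverseCurve is one way to force the head-to-head or tail-to-tail pairing):

```python
import maya.cmds as cmds

# Flip one rail so both curve directions meet at the convergence point.
cmds.reverseCurve('railB', replaceOriginal=True)

# Birail 1: sweep the far profile along the two rails.
surf = cmds.singleProfileBirailSurface('profileCurve', 'railA', 'railB',
                                       ch=True)[0]

# After converting to polys, weld the stack of coincident vertices
# hiding at the convergence point.
poly = cmds.nurbsToPoly(surf, polygonType=1)[0]
cmds.polyMergeVertex(poly + '.vtx[*]', distance=0.001)
```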

There are any number of ways you can now conduct the flow of polys around this kind of object. I ended up with this configuration after only a few adjustments:

I creased the corner vertices to emphasize the triangular shape.

Big lesson this week: learn to love versatility, and keep an eye out for things that let you achieve it.



The realism trap, and keeping your vision.

This week's subject is one I have to challenge myself with constantly, because it is so easy for me to slip back into the bad habit of becoming a slave to realism.

I needed a good logo for this blog, and I had a good concept sketched out on digital paper for just that purpose. I decided I'd use Maya to realize the end result, and so I went about the task of creating the model. Visually, anything called the Inkwell Distillery should have some element of fire involved, so I modeled stylized 'flames' to fill that role, but the test renders just weren't showing potential.

Opting to experiment with Maya fluids to generate fire (more realistically), I started getting some fantastic-looking flames, but for some odd reason this caused the rendering process to shut down whenever all the other visual elements were brought online. This was immensely frustrating, but it's also where I started to lose focus on the end goal because…

I insisted that it should work and that I needed to find a solution to make it render because now the fire had to be realistic!  Had to!  I mean, look how cool that looks, right?

fire

The fact that we can make stuff like this in seconds on our home PCs these days means we have to use it the way people use lens flares in Photoshop, right?

Forgetting that design isn't about realism but about sending a clear visual message is a big stumbling block for a lot of illustrators and artists starting out. Even having known this for quite some time, I still need to be reminded of my goals constantly. If I'm not, I'll get lost in a detail, never to find my way out, like I was starting to do here.

So what ended up happening? Putting coolness (and realism) aside, I went back to the stylized version, where I should have continued experimenting in the first place. And I did end up finding a great solution involving a more complex shading network for the fire, one that rendered just fine and actually looked more like it belonged with all the other elements.

Like this...

But the message to take away from this experience is that while experimentation is excellent, you still have to keep it reined in, so to speak. Getting lost in the details, and especially getting lost in the pursuit of realism over design, is a good way to never produce a finished work. Or to produce something that looks like it came from two different universes.

This time I managed to shake myself out of it before getting discouraged with the project. Next time, I'll think better of it before digging that hole in the first place. Hopefully…



Maintaining open horizons: The key to finding answers isn’t to always look in the box marked “answers”.

I've been working with CGI rendering methods for approximately two years, creating a handful of characters, and each time my approach to modeling has been slightly different, though almost all of them started out similarly. I've come to accept this as part of the territory when learning anything new and potentially complicated. Textbook examples for modeling something like, say, a face or a head will usually take you only far enough to get your feet wet, unless you find yourself creating the same kind of character each and every time.

My 'book of choice' at the time illustrated a method for creating a face by building the mouth, nose, ears, and eyes as separate entities and connecting them afterwards. This works fine as long as you want to model a realistic human face. But what if you want to create faces like these?

head comparison

Well, suddenly that method doesn't seem so sufficient. How do I attach the mouth structure to the nose structure when they're integrated by design? The book illustrated how to go about crafting each one separately, not together. Having nothing else to go on at the time, I muddled through by pushing and pulling the mesh into the 'right' shape, and I produced faces much like the one you see in the top image. And while nothing is obviously wrong at first glance, when it comes time to move the mouth, the deformations become… difficult to manage. I don't want to get bogged down in the details here, but essentially it has to do with the flow of edges and polygons as they wrap around the model's surface. Another problem was that the character I was modeling was based on my comic strip character, which in turn had been composed mainly of soft shapes that lacked a certain amount of defined structure in the face and head.
Time-travel to this past July, when I started thinking about creating a new character (bottom image). In the period between the two, I ended up viewing some particularly helpful tutorials. Hand-drawn art tutorials. One in particular addressed ways to stay on model by clearly envisioning the planes of the face, and consequently adding the structure that my previous work had so sorely lacked. It wasn't until I started applying that knowledge to the hand-drawn design of the character that it occurred to me that one could model the head and face as a series of very low-res planes (basically one polygon per plane), merge them, and then go into the requisite detail. You can see the effect most clearly in the mesh between the bridge of the nose and the corner of the mouth. As it turns out, using planes rather than parts provided real control over that tricky area of the face that develops in these kinds of muzzled characters, because it ends up dictating how well the face will wrinkle, fold, and deform when it comes time to animate.
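If you want to try the planes-first approach yourself, the blocking stage is trivial to script (the plane names are made up; the interesting work is positioning each plane by eye before the weld):

```python
import maya.cmds as cmds

# Block the head in as single-polygon planes, one per plane of the face.
planes = []
for name in ('brow', 'noseBridge', 'cheek', 'muzzleSide', 'jaw'):
    planes.append(cmds.polyPlane(name=name, width=1, height=1,
                                 subdivisionsX=1, subdivisionsY=1)[0])
    # ...move/rotate each plane into place by eye here...

# Weld the shells into one mesh, merge the shared border vertices,
# and start adding the real detail from there.
merged = cmds.polyUnite(planes, name='headBlock')[0]
cmds.polyMergeVertex(merged + '.vtx[*]', distance=0.01)
```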

There are broader implications here, and they will become a recurring theme on this blog: finding solutions is not unlike good design. The answers do not necessarily come from looking in the obvious places.

