Archive for category CGI
I have several ideas on various burners at any moment. While it’s great to have several ideas you find worthy of your attention, organizing all of them is a task that makes herding cats look like a promising enterprise.
To digress, though, I have been envisioning the kernel of a story that has only recently begun to take shape. What has developed most recently is the location and the ‘look’ of the setting where I intend to place the story. It came about relatively quickly, with the elements and how they’d be expressed falling into place.
It’s a sprawling, wide-open setting, inspired by my own visits to the desert southwestern states, but amped up considerably. Since I have the software, I can conceptualize my ideas in virtual, three-dimensional space. The scale of the setting, however, is the largest I’ve ever attempted. A setting that spans literally tens of miles is a far cry from a small room, and while I’m well aware that I could simply scale the model down, I would always wonder in the back of my head whether my virtual camera was capturing the image as it would appear if the place actually existed in the real world. Philosophically, the difference may be moot to some artists. It’s a fantasy setting, so why not design it however you want, scale included? The CGI world is entirely of your creation, so control all the aspects you can, right?
But instead, I’ve chosen to use literal scaling as a willingly accepted artistic restraint. It will impose certain restrictions on me, but it will simultaneously remove a variable my mind would otherwise continually revisit, wondering what something should look like vs. what it will look like, because I’ve set that principle as a constant from the outset. Something that sounds restrictive is liberating. I can still use real-world points of reference without being absurdly literal. Google and Wikipedia are ready sources of proportion and scale for real-life features, so I can ballpark things without fussing over how “right” they should or shouldn’t look. If things need to be adjusted and fine-tuned later, they can be, like scaling individual elements for compositional purposes.
The main feature of the scene is a massive desert mesa, 23 miles long and 17 miles wide. A secondary, smaller but further elevated plateau will play host to a large metropolis, while the larger, lower plateau is more agricultural in nature. All of this is surrounded by dry desert landscape, like an island in a dry ocean. I can’t help but notice that real-life geological features like Uluru (formerly Ayers Rock) in Australia come to mind as well. That kind of feature, known as an inselberg, combined with a traditional mesa, at a scale closer to that of features found on the ocean bottom (which can be much more massive), probably describes it best. And having that in mind will help later on, as real pictures of Uluru and its surroundings, along with some of, say, Monument Valley, will help inform artistic choices made down the line.
I started with the mesa itself. Using a reference from Google Maps and satellite images, I came up with a general shape, modified it slightly for my own purposes, and modeled it roughly using some of the techniques outlined in previous entries. I added two more basic elements to the setting: to help establish a sense of scale, I designed one building as a centerpiece for the metropolis (about 1.5x the height of the current world-record-holding Burj Khalifa, the one featured in last year’s Mission: Impossible film), and then I added a ground surface cut down from a sphere with the mean radius of the Earth. At these scales, the curvature of the Earth might actually play a role, and it took little or no extra time to find out how to specify that detail.
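Whether the curvature actually matters at these distances is easy to ballpark before building anything. Here’s a quick sanity check in plain Python (nothing Maya-specific; the 6,371 km mean Earth radius is the standard figure, and the drop formula is the usual small-distance approximation):

```python
# How far does a spherical Earth "drop" below a flat tangent plane
# over a given distance? For d much smaller than R: drop ~ d^2 / (2R).

EARTH_RADIUS_M = 6371000.0  # mean radius, meters
MILE_M = 1609.344           # meters per mile

def curvature_drop_m(distance_m):
    """Approximate drop of the surface below a tangent plane, in meters."""
    return distance_m ** 2 / (2.0 * EARTH_RADIUS_M)

for miles in (5, 17, 23, 30):
    drop = curvature_drop_m(miles * MILE_M)
    print("%2d mi -> drop of about %6.1f m" % (miles, drop))
```

At 30 miles the ground sits roughly 180 meters below the tangent plane, which is far from negligible next to even a very tall tower, so modeling the ground as a section of a sphere is a defensible choice.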
Here’s the result, visually:
That tiny gray speck on the upper mesa? That’s the Burj Khalifa-sized structure. The rendering camera (the equivalent of a standard 35mm lens) had to be placed a staggering 30 miles away from the subject to fit it all in the frame. That’s the scale.
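For the curious, the 30-mile figure checks out against simple lens geometry. This is a rough sketch assuming a 35 mm focal length on a 36 mm-wide film back (roughly Maya’s default horizontal film aperture of 1.417 inches; your film gate may differ):

```python
import math

# Does "30 miles away to fit a ~23-mile mesa" check out?
FOCAL_MM = 35.0
FILM_WIDTH_MM = 36.0

def horizontal_fov_deg():
    """Horizontal angle of view for the lens/film-back pairing above."""
    return math.degrees(2.0 * math.atan(FILM_WIDTH_MM / (2.0 * FOCAL_MM)))

def frame_width_at(distance):
    """Width of the view frustum at a distance (same units as input)."""
    return distance * FILM_WIDTH_MM / FOCAL_MM

print("Horizontal FOV: %.1f degrees" % horizontal_fov_deg())
print("Frame width at 30 miles: %.1f miles" % frame_width_at(30))
```

A 35 mm lens sees about a 54-degree horizontal field, so at 30 miles the frame spans roughly 31 miles, just enough to hold a 23-mile mesa with a little desert and sky around it.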
Rendering at this scale posed a few issues. I chose to use raytraced shadowing, but kept the simplicity of spotlight lighting by placing the light a great distance away, simulating directional lighting. Beyond that, I changed the camera’s clipping planes, putting the near clipping plane at 5 units and extending (by necessity) the far clipping plane to an absurdly large number, to make sure that features at great distances would still show up in the render. Finally, and most important for quality, under ray tracing in the render settings, I bumped the Trace Bias up to 250 to eliminate rendering artifacts that showed up on the mesa object. Of particular note, for this project I am using RenderMan for Maya as my renderer of choice. Settings for this feature may vary when raytracing with another renderer, such as mental ray, for example.
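For anyone who prefers scripting their scene setup, the clipping-plane change is one call in Maya Python. This is only a sketch: the camera name and the far-clip number are placeholders for your own scene, and the RenderMan Trace Bias setting lives in the renderer’s own render settings, so I haven’t tried to script that part here.

```python
import maya.cmds as cmds

# Push the clipping planes out for an extreme-scale scene.
# 'cameraShape1' and the far-clip value are placeholders; match them
# to your actual camera and working units.
cmds.camera('cameraShape1', edit=True,
            nearClipPlane=5.0,
            farClipPlane=10000000.0)  # absurdly large, on purpose
```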
I’ll add some more features in the next entry to help develop scale and depth, but for now, I’ll simply change the tint of the primary light to better reflect a sunset, and lower the angle a little more as well:
And that’s where I’ll leave it for this week. In future weeks, we’ll hopefully continue to see how the decisions I’ve made here impact future ones down the line.
Oh joy, it’s time to rig faces in Maya.
It’s not something I look forward to, and that probably has something to do with the fact that the first time I went through these motions, I took a very literal path from point A to point B, using wire tools and cluster deformers to achieve facial movement.
But as with most subjects on this blog, I’m not satisfied with that method; it’s tedious and time-consuming. So this time I’m going to use blend shapes, which can be equally torturous, but I’m introducing an intermediary step that links wire deformers and blend shapes, using lattice deformers and cluster deformers to get there.
Here’s my assumed start position:
We’re going to start with the mouth and create a wire deformer for it, so I select the appropriate poly edges around the mouth and use Modify–>Convert–>Polygon Edges to Curve to create the controlling wire.
Go ahead and create a new wire deformer from the Create Deformers menu using the original curve and the object with the face you intend to animate. Now, I know I said that we’re using other deformers as an intermediary to get to the blend shapes, but we’re actually going to create the Blend Shape node now, using the two existing curves as the basis.
This is good, but the curve has so many control points that adjusting them for each shape would be almost as much of a pain as adjusting verts by hand on the model. We could weed them out, but the fit would suffer, not to mention that we’d have to delete the same CVs on both curves in the same order to keep the Blend Shape working as intended.
Rather than get bogged down in all that, create a lattice deformer around the top and bottom CVs respectively. For demonstration purposes, I’ve just done the top set here for clarity.
The advantage of lattice deformers is that they take high-density curves and polys and smooth the deformation over a larger area. We can now control those by manipulating clusters attached to the lattice points, as seen in the second of the two images above.
Let’s say this is the expression we want for our next blend shape. Duplicate the secondary curve in its deformed shape and add that as your next blend shape target. A new target with a slider shows up in the Blend Shape window. Sliding it reproduces exactly the deformation you got a moment ago manipulating the cluster. Another benefit of the cluster deformer is the ability to reset its position back to ‘zero’ over and over again, repeatedly creating new blend shape targets, and so on.
Lastly, it should be noted that Blend Shape deformer nodes, as long as they are defined as “Local” when created, can be attached directly under the character or skeleton’s node without having to account for double transformations, a tedious step involved in most other types of animation control rigs. Just remember to parent both the wire and the baseWire shapes to the appropriate part of the anatomy and/or rig, and the blend shape should still work as intended.
Alton Brown, of Food Network fame, has made a point repeatedly (and not without reason, I might add) that he loves multipurpose tools and hates single-purpose ones. And this is one of the reasons I love what Alton Brown does. He’s a practical man. Why expend energy and waste your resources on something that has only one obscure purpose? I mean, if that purpose is something you’ll find yourself doing really, really often anyway, that’s one thing, but who really needs the Popeil Inside-the-Egg Scrambler? That’s what I have a whisk for. And the whisk helps me when I make pancake and waffle batter as well.
That birail tool I keep bringing up, as it turns out, is good for more than four-sided surfaces. What if you have a surface with no clear ‘corners’ to it? What if you want more than just the outermost edge loop of a shape to be concentric upon itself? What if you have a shape that comes to a point? Muscle groups will often do this, and if you’re modeling a more detailed figure, you might really need that muscle group to be accurately defined as coming to a triangular point. I’ll address both these situations in today’s entry.
Here’s the basic setup: create a NURBS circle, duplicate it, and scale it up or down to fit inside or outside the first, your choice. Selecting a curve point slightly offset from the curve origin, detach and eliminate a small arc from the same area on both circles. Connect the two pairs of endpoints with new EP curves, and snap a few more EP curves spanning between the two pairs at whatever points you find will be key in defining shapes later. You should have something that looks like a broken wheel with a few spokes. I’ve made mine so that the two circles are offset in height as well, so it looks more like a cone.
Next, let’s move to the Birail 3+ tool. As we’ve done before, select your U and V polygon resolution (in this case, the U direction is the one that circles the shape, while the V direction runs along the radius), enter the tool, select the spokes, hit Enter, select the rails, hit Enter again, and you get your basic shape.
Note: leave history on here if you want to use the curves to continue smoothly shaping the polygons before doing your final edits. Just remember to leave the CVs that touch other curves alone, or the mesh will disappear; because you’re using history, the surface breaks as soon as the curves that define it no longer intersect.
Now, there are numerous ways to fill the gap and the hole in the middle. In my case, I used the Bridge tool in the Edit Mesh menu, plus a combination of the Append to Polygon tool and the Split Polygon tool from the same menu, to create a radial edge flow in one direction and the desired continuous flow in the circumferential direction.
The result here looks remarkably like a half-dome that you could just as easily produce by taking a stock sphere and cutting it in half. True, but again, here you can conform the final boundary that your shape will fill before you create the mesh. Can’t emphasize that enough. The push-pull-tweak is all but eliminated. Stock shapes are good for making stock shapes, and little else, I’ve discovered.
Now on to the triangular shape. We’ll define our boundary with only three curves this time. I’ve arranged them thus:
Almost like the shape of a deltoid, right? What we’re going to attempt here is to create a situation in which the polys converge to the point on the left side of the shape in the image above and round out in the other two corners. To control the direction, make sure the vertex where you want the polys to converge has its two respective curves’ directions aimed at each other (either both curves end at that point, or both curves start there: head-to-head or tail-to-tail, never head-to-tail).
Using the Birail 1 tool, go about your business as usual, defining the curve that’s ‘away’ from the convergence point first, then selecting the two rails that form its path.
There are any number of ways you can now conduct the flow of polys around this kind of object. I ended up with this configuration after only a few adjustments:
Big lesson this week: Get to love versatility, and keep an eye out for things that let you achieve it.
I’m going to build on the last two weeks’ worth of discoveries with a bit of a slide show indicating some of the variations of this new-found technique as I went through and finished the model’s head. Combined with what we’ve gone through already, it’s starting to form a picture of a comprehensive skill set that can efficiently model entire figures. If I can compare it to anything, I’d compare it to custom plate fabrication: you break your character down into a set of plates or shapes, and those plates have to be fabricated to fit the number of polys along each edge. So however you arrive at the parameters of each plate, you can then apply the techniques already discussed.
For example, as some of the illustrations below will demonstrate, if you can landmark key points of a surface in three-dimensional space (say, using Locators), you can string curves through them to form the boundaries you need to proceed.
But what if, for example, the rest of the body you’re going to attach this head to has only 16 polys around the neck, while your head is being modeled at a higher poly count? This isn’t an unexpected situation, as the face is often modeled at higher resolution, both to accommodate its smaller features and to provide smoother animation for the subtleties of facial movement. The rest of the body isn’t as nuanced, except perhaps for the extremities of the hands and feet. In any case, the gap must be bridged, unless the parts behind clothing aren’t going to be modeled at all (binding multiple objects representing only the emergent, visible parts of the model). I ran into that situation here. I wanted to limit the back left quarter of the neck to a width of four polys. But this is what I had to work with:
If you look carefully, you can see my thought process already. I intend to take the last four polys and spread them out wider, but there isn’t a place for the remaining five polys to go, right?
Now the extra polys have been routed around the base of the neck rather than down the neck into the rest of the body. I did partition out the neck area to receive them too, but that’s a function of the principle above about sub-dividing areas.
I’ve modeled a couple more heads and a hand in the span of time between last week’s entry and today’s using these procedures, and I have yet to run into a situation that it, in conjunction with a little creative thinking, hasn’t been able to resolve. The hand I did already had the rest of the model waiting for it, but when I imported it, I was confident that it would blend seamlessly with the rest, and it did. That’s not something I can say for all the character modeling work I’ve done thus far, but alas, I didn’t know then what I know now.
But what if you want a surface with a continuous, smooth, single edge loop of polys running around it, rather than a shape bounded by four separate edge loop flows?
That’s it for this week. If there’s more to be mined from this method that I can dig up, you’ll see it shared here. Until then, the subject may wander to other venues again…
In furtherance of last week’s entry, I decided to continue my pursuit of the birail tool in Maya, and it has been something of a revelatory experience. With the discovery of a couple of additional tools used in combination with the birail, I now see the potential to avoid much of the absolute mess that comes with building a model and finding that the polygons you’re trying so hard to sew together just do not line up, or have a huge discrepancy in poly count from one side to the other, forcing you to spend (oftentimes) hours figuring out how to route the edge loops and still have the model move and deform correctly. With all the work at stake, you really don’t want to have to revisit the model’s construction. Here’s a workflow that might help you out if you find yourself in my position.
Before we start, here’s a list of the tools we’ll be using in this entry:
Modify–>Convert–>Polygon Edges to Curve
Create –> Bezier Curve Tool
Edit Curves –> Open/Close Curves
Edit Curves –>Detach Curves
Edit Curves –> Attach Curves
Edit Curves –> Cut Curves
Surfaces –> Birail 2 Tool
As the basis for this week’s example, I’m pulling out one of my old characters for a re-design. It will also be a test of how the technique I outlined with a throwaway model in my last entry fares on one that may actually be of use to me in the future. Matt Wulf was the first comic character I created. As my drawing experience at that time was almost non-existent, he wasn’t that well designed, and even in his last incarnation he, as well as the rest of the cast, tended to lack visual structure, something I’ve come to appreciate more in the years since. Worst of all, he seemed generic. So here I’ve opened a Maya window and put a new profile and front view of the re-designed head behind their respective cameras. Of special note is the emphasis on lines that delineate structure in both views.
Last time, with the leg example, I used the basic side and front camera views to model profiles on the axis, but this meant I still had to reposition the curves for alignment afterward. To skip some of that this time, we’re going to start with a few construction planes. It should also be noted that doing this essentially takes the lesson from my earlier entry and simply adds a little planning and different modeling tools.
So I’ll start with the construction planes. We’ll do the muzzle shape first.
One for the side, one for the front, and one for the top. Making the side plane ‘live’, we can switch to the side view and draw on the canted angle using your preferred method of curve creation (in this case, a bezier curve), thus:
Now, thinking about how high a resolution I’d like to maintain for the model (in this case, about two polys per inch), I see that the coverage on the projected plane surface is about 4×4 inches, so I’ll aim for a poly grid that’s 8×8. But we can’t use the birail tool the way we want until we have at least four curves whose ends all touch. Fortunately, Maya’s Edit Curves menu includes Detach Curves, which will break the single curve apart into multiples wherever we wish, while leaving the ends of each in tangent with the others, as the birail tool requires. Choosing the points at which to break the curve becomes easier with practice, but the general idea is to imagine where the edge flow will merge with your next application of this cycle on an adjoining part of the anatomy. In this case, the shape almost suggests four sides, with corners at the bridge of the muzzle, the corner of the mouth, the tip below the nose, and… somewhere in the sweeping curve on the front of the muzzle? That’s going to take some judgement, but one clue is generally to look in the area of the curve’s apex. Choose these points by right-clicking on the curve and selecting “Curve Point”. If you hold down Shift while clicking, you can select all four points at once, and then click on Detach Curves, giving us the following:

Now, to keep track of direction, I came up with my own convention for using the birail tool: I always select top-bottom before left-right. This isn’t a law or even a rule, but it’s a good principle to have a convention that works for you and stick with it. There still may be times when you want to switch, because the quality of the mesh may differ substantially using the opposite convention. If you’re satisfied with the results of your regular convention, though, then you’re good to go.
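The grid-sizing arithmetic is simple enough to fold into a helper, shown here as a plain-Python sketch (the “field value is spans plus one” behavior of the birail U/V fields is as described in this entry; the function names are my own):

```python
# Given the coverage of a construction plane and a target poly density,
# how many spans should the patch have, and what goes in the birail field?

def grid_spans(coverage_inches, polys_per_inch):
    """Polygon spans across one side of the patch."""
    return round(coverage_inches * polys_per_inch)

def birail_field_value(spans):
    """The birail U/V fields divide a rail into (value - 1) spans."""
    return spans + 1

spans = grid_spans(4.0, 2.0)   # a 4x4-inch plane at 2 polys per inch
print(spans, birail_field_value(spans))  # 8 spans -> enter 9 in the field
```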
The first pairing of curves you pick will be divided into the number in the “U” field, less one, while the second pairing will be divided into the number in the “V” field, less one. The settings used for the birail tool are covered in the previous entry, for reference if you need it. Carrying on and applying the birail tool using my convention yields:
And with a little application of the Sculpt Geometry tool set to “relax”…
I think now is as good a time as any to bring up a few troubleshooting points that I came across while refining this method. First, when creating your curves, you can use the Open/Close Curve tool to make sure you have a closed loop, which saves you the trouble of snapping CVs manually. However, remember that in doing this, the curve still has a discontinuity where its beginning and end meet. When you begin detaching curves into components for the birail tool, this may result in a ‘fifth’ curve you didn’t plan on having, and it may be small enough in the viewport to miss easily and derail your birail, so to speak. It’s easily fixed by using the Attach Curves tool to re-merge that extra strand with the nearest larger curve, and we’re back to four curves again, as we should be. Also, turn two-sided lighting off in all your viewports. It’ll help keep you aware of the direction your surface normals are aiming, letting you reverse normals as necessary to keep everything consistent.
Also of note: remember to ‘flush’ your history often. Some curve modification tools demand a clean object history to function properly (while others won’t work unless history is on, so you can’t necessarily turn it off completely either).
In any case, we’re ready to merge these two surfaces, so assuming you’re already familiar with how to do that, we’ll proceed from there. Let’s select a row of faces on the first shell we created, because it’s the larger of the two (in poly count), and delete it. All that remains for a more natural blending is to use either the Bridge tool or the Append to Polygon tool. The former is quicker, but often glitchy compared with the latter, so take that as you will.
Okay. Not too bad, but we’re not seeing much in the way of form yet. Let’s do the top of the muzzle next. I was going to use a construction plane here as with the other two surfaces, but on reflection, Matt’s nose is far too much like a ski jump to use such a flat modeling surface effectively, so let’s look into using a makeshift NURBS surface as a canvas and follow the contours more closely. I can still use the construction plane, but only as a place to create the dummy NURBS surface. Note that I don’t have to do anything in creating this NURBS surface other than make sure the shape and size are accurate to my eye. I’m not thinking about matching polys at all here, because I no longer have to, at least not until after we’ve drawn curves on this NURBS surface. That in itself is liberating.
Now I’ll take that little ski jump there and make it our live object. Then I’ll move to the front view, where I’ll draw on it, following the contours of the bridge of the nose as closely as possible, and then bring the curve back up the mid-line. We can’t close the loop yet, however, because drawing a curve on a NURBS surface the way we’ve done here makes the curve part of the NURBS object, and we want the curve to be its own object. To fix that, we go back to the Edit Curves menu and select the first item on the list, the Duplicate Surface Curves tool. Now that we have the curve we need, we can delete the NURBS object and close the curve to make it a loop. Then we proceed as we ordinarily would, detaching and using the birail tool.
And as before, combining objects, deleting the appropriate number of edge polys, and bridging between the shells.
I get the feeling that if I were to make a single birail surface out of this loop, it might end up too ‘soft’ and button-like. I want to add more structure, so I think I’ll do it in two passes, breaking the single loop down into two loops. The tactic I’m going to employ in choosing where to break the loop is twofold: one, align those points as closely as possible to the perceived “turning edge” of the model, where front becomes side or top; and two, line them up with the nearest vertex on the existing model. I’ll continue to re-attach and detach curves as I deem necessary to create birails that sufficiently fill in the gaps and provide structure. I’ve numbered the edges as I reconfigure them to accommodate each successive birail in the following images…
In the last image in that string, you can see I pulled out the Insert Edge tool to accommodate the large polys that developed along the bridge of the nose.
Here’s a comparison shot to see where we’re at compared to the drawings.
The profile view is pretty much spot-on, though in the front view we can still see some divergence in the transition area between muzzle and cheekbone. How I fix that will be evident when I pick this up in next week’s entry, where I investigate using these methods with solid stand-in shapes and intersections.
As a bit of a postscript to this entry, I have to wonder whether anyone else has come across this method of model-building yet, with so much potential to avoid the agonizing prospect of hammering away at vertices or individually building edge loop after edge loop through extrusion. That’s just not a very intuitive artistic process. One of my fairly regular habits is searching for techniques and methods others have found for certain problems, to see if I can adapt them to what I’m doing. But on this particular subject, what I have found out there rarely ventures beyond the most rudimentary techniques. If anyone has further ideas for refining this method or combining it with others they’ve seen, I welcome all comments to that end.
As mentioned on my resources page, one of my most significant go-to books for drawing is the Force series, which illustrates brilliantly how, throughout living organisms, there is a rhythm, a path where metaphorical ‘force’ ricochets from one part of the body to the next. Keeping this in mind, I decided to see whether there was a way to translate or ‘sculpt’ this concept more directly in CGI modeling software (Maya, for the purposes of this entry): starting with drawings produced using this method, then arriving at a model by mimicking those methods with standard modeling tools in Maya.
Here’s where I started:
Just a very quickly sketched-out leg. In retrospect, I could have pushed the forces a lot more, but you can still clearly see them sweep from the front of the thigh through the knee, into the back of the calf, and into the heel.
Most modeling tutorials I’ve seen either instruct the student to start with stock cylindrical objects and push and pull vertices to match the geometry until you’re ready to peel back your eyelids from the sheer inanity, or to create one edge loop at a time, bridging each new one with the previous. The latter is an improvement, but a slight one.
I think, however, that I have found a way to more easily construct limbs and bodies for characters relying almost entirely on your sketches in the creation of the geometry rather than hammering stock geometry into your desired form. At the very least, it should minimize the amount of that kind of irritation.
Anyway, here we go…
I’ve imported the two views of the leg into Maya’s front and side camera views respectively. In the perspective view, that looks like this:
Now, in the side and front views, I use my choice of curve tools (in this case, a combination of the EP curve and bezier curve tools) to trace the front and back of the leg in the side view, and the inside and outside profiles of the leg in the front view. You’ll notice that they probably won’t line up the way they should, as the centerline of the limb is not necessarily plumb vertical the way the planes you’ve been drawing your curves on are. With a few rotations and translations, you’ll get something that looks like this:
Also shown above are two loops connecting the tops and bottoms of the curves. These are important for the tool I ended up using to create the surfaces. Starting with the line toward the back (the heel side) and with curve snapping on, I used the EP curve tool to make the loops, moving in a counterclockwise direction (viewed from above) in both. Consistency in creating the top and bottom loops is critical.
Next, I select the Birail 3+ tool from the Surfaces menu and set it to the following settings:
Using the general tessellation settings here lets you build the leg as four surface wedges that have the same number of polys along each edge, lining up closely enough that they can be merged together easily in the following steps. Take note here! The number of polygons in the U and V directions will be one less than the number entered in the initial tessellation controls. So if you want your limb to be 30 units long, enter 31 into the U field. You might also want to note how many polygons you want around your limb, as this will come in handy when modeling the torso and sizing the opening your limb connects into. We’ll be doing this four times to bring the limb surface a full 360 degrees before stitching all four wedges together, so take the number in your V field, subtract one, and multiply by four. That’s the number to remember for attaching these limbs more easily later.
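That bookkeeping condenses to a couple of lines, sketched here in plain Python (the wedge count of four matches this workflow; the function names are mine):

```python
# Four birail wedges, each spanning (V - 1) polys, make up the limb's
# circumference. Going the other way tells you what to type into V.

def polys_around(v_field, wedges=4):
    """Total polys around the limb for a given V-field value."""
    return wedges * (v_field - 1)

def v_field_for(target_polys, wedges=4):
    """V-field value needed for a target circumference in polys."""
    if target_polys % wedges:
        raise ValueError("polys around must be a multiple of the wedge count")
    return target_polys // wedges + 1

print("U field for a limb 30 spans long:", 30 + 1)
print("V field for 16 polys around:", v_field_for(16))  # 4 * (5 - 1) == 16
```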
Starting again from the place where you started your loops, launch the Birail 3+ tool and follow its instructions, moving counterclockwise as before, one segment at a time, until you reach the last segment. Caution! Have two-sided lighting turned off here, so you can easily detect whether the normals of the surfaces you’re creating are facing outward the way they should be! Also, between each use of the Birail 3+ tool, make absolutely sure you remember to delete history on the surfaces.
Before you do the last quarter of the leg, Maya needs the direction of the loops to be reversed (otherwise, it will span the paths opposite to the direction we need). I can’t say it enough: make sure you’ve deleted the history on the other three surfaces you’ve already created, or reversing the direction of the paths will affect your work, and when I say ‘affect’, I mean ‘make it look screwy’.
All right, once you’ve reversed the direction of the paths (Edit Curves –> Reverse Curve Direction), you can finish off the last quarter by repeating the above process, going clockwise this time.
We’re in the home stretch now. If you’re familiar with the basics of Maya, then you know how to combine objects, merge vertices, etc. It also can’t hurt to make sure once more that all the surface normals are facing outward before doing that.
Once that’s done, and with less than 30 seconds worth of smoothing and sculpting with the surface sculpting tools, here’s the final result, lit and rendered.
Now, you can tell this is still somewhat unrefined, but your starting shape is much closer to the end result than it would be using more traditional techniques. All the polys are quads, so they’ll deform quite well when animated, and using this method you can plan out your other limbs and the torso ahead of time more easily than (as personal experience has taught me) freewheeling does.
Going back to childhood, I remember clearly the oft-repeated words of one of my uncles come Christmas time. Almost inevitably, one or more of my fellow grandchildren and I would get a gift requiring assembly instructions, and just as inevitably we would dive into them heedless of any such trivial matters. There was play at hand, and we’re supposed to wait?
Of course, the results were sometimes less than fun, and on those occasions my uncle would remind us to “Read the instructions”, with insistent, monotone gravitas. Okay, so the mixed results we got in those circumstances probably warranted the warning, to some degree. On the other hand, in the arts (and in learning software, to boot), play is absolutely warranted. The more versatile the tool, the more enlightening play can be. Play, born of curiosity, is a key path to discovery.
Today’s example is the result of a little of just such goofing off: texture patterns. In CGI, well-designed repeating patterns are key to creating believable renderings. Samples are often readily available online, but just as often the quality leaves one… wanting. There are a few ways around this. You can tile the pattern across the surface you’re rendering, which can work for a few repetitions, but the repetition will often reveal itself in the render, not a desirable outcome. Another method is to simply upscale the pattern, but this too can do more harm than good, blurring and degrading the pixels in the process.
So what to do? Versions of Photoshop starting with CS5 have an alternative answer at hand. Once again, it’s Content-Aware Fill to the rescue. Here’s the setup: I have a photographed texture from an armadillo hide (a live one, too, that let me get super close for the photo op, but that’s another story) that currently measures 1024 pixels square. Here’s what that looks like now:
Now I’m going to create a new file twice the size at the same resolution, so it measures 2048 pixels square. Then I’ll go back to the 1024 px image and, using the move tool, drag and drop it into the new, larger file.
Now flatten the layers, use the magic wand tool to select the white space surrounding the old texture, and refine the selection edge using the option in the options bar, expanding the selection area ever so slightly. Then select Edit > Fill and choose Content-Aware. Now the white space is filled seamlessly with more armadillo-hide texture, but it’s not repetitive the way doubling or tripling up the smaller pattern over the surface would be. It’s covering four times the area with no loss in resolution.
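The “four times the area” claim is just arithmetic, and it’s worth seeing why this beats upscaling. A quick sanity check in plain Python (nothing Photoshop-specific here):

```python
# Doubling both dimensions at the same pixels-per-inch quadruples the
# covered area without stretching or resampling any existing pixels.
old_side = 1024
new_side = old_side * 2        # the 2048 px canvas

old_area = old_side ** 2
new_area = new_side ** 2

print(new_side)                 # 2048
print(new_area // old_area)     # 4 — four times the coverage, same pixel density
```

Contrast that with upscaling the 1024 px image to 2048 px directly: you’d cover the same area, but every pixel would be interpolated, which is exactly where the blurring comes from.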
Now all that’s left is to choose your preferred method of making this new pattern repeatable. The easiest is to use a plug-in like PixPlant, which I found through Adobe’s website, but there’s always the more traditional method of Filters > Other > Offset, then using the pattern stamp, the healing brush, etc. to ensure a solid repeatable pattern.
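For anyone who hasn’t used the Offset filter for this: with Wrap Around on, it just rolls the image so the tile seams land in the middle of the canvas, where you can retouch them. A minimal sketch of that wrap on a grid of pixel values (a hypothetical helper of mine, not an Adobe API):

```python
def offset_wrap(pixels, dx, dy):
    """Roll a 2D list of pixel values right by dx and down by dy,
    wrapping the edges around — what Offset's Wrap Around mode does."""
    h = len(pixels)
    w = len(pixels[0])
    return [[pixels[(y - dy) % h][(x - dx) % w] for x in range(w)]
            for y in range(h)]

tile = [[1, 2],
        [3, 4]]
# Shift by half the tile in each direction: the old corners meet in the middle.
print(offset_wrap(tile, 1, 1))   # [[4, 3], [2, 1]]
```

Once the seams are visible mid-canvas, the pattern stamp and healing brush can blend them away, and the result tiles cleanly.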
Just to show this works even on material quickly sampled online, here’s a wood pattern extrapolated from a Googled original of only 425 x 319 pixels into a standard 1024 x 1024 repeating pattern.
Here’s the original:
And here’s the resulting pattern. Note that in between, I edited out the dark spot on the right before running through the above procedure, and increased the saturation.
Lesson 1: When working in CGI, you now have a way to expand a texture’s coverage as an alternative to repeating the pattern excessively or blowing out the detail by upscaling it directly.
Lesson 2: Never forget the power of simply playing around.
This week, I have a subject that I have to constantly challenge myself with because it is so easy for me to slip back into the bad habit of becoming a slave to realism.
I needed a good logo for this blog, and I had a good concept sketched out on digital paper for just that purpose. I decided I’d use Maya to realize the end result, and so I went about the task of creating the model. Visually, anything called the Inkwell Distillery should have some element of fire involved, and so I modeled stylized ‘flames’ to fill that void, but the test renders just weren’t showing potential.
Opting to experiment with Maya fluids to generate fire (more realistically), I started getting some fantastic-looking flames, but for some odd reason this caused the rendering process to shut down once all the other visual elements were brought online. This was immensely frustrating, but it’s also where I started to lose focus on the end goal because…
I insisted that it should work and that I needed to find a solution to make it render because now the fire had to be realistic! Had to! I mean, look how cool that looks, right?
Forgetting that design isn’t about realism but about sending a clear visual message is a big stumbling block for a lot of illustrators and artists starting out. Even having known this myself for quite some time, I still need to be reminded of my goals constantly. If I don’t, I’ll get lost in a detail, never to find my way out, like I was starting to do here.
So what ended up happening? Putting coolness (and realism) aside, I went back to the stylized version, where I should have continued experimenting in the first place. And I did end up finding a great solution involving a more complex shading network for the fire, one that rendered just fine and actually looked more like it belonged with all the other elements.
But the message to take away from this experience is that while experimentation is excellent, you still have to keep it in bounds, so to speak. Getting lost in the details, and especially getting lost in the pursuit of realism over design, is a good way to never produce a finished work. Or to produce something that looks like it came from two different universes.
This time I managed to shake myself out of it before getting discouraged with the project. Next time I’ll think better of it before digging that hole in the first place. Hopefully…