Using Framebuffers in mental ray

 

Framebuffers are the mental ray feature for multi-pass rendering. If you're familiar with RenderMan terminology, framebuffers in mental ray are similar to Arbitrary Output Variables (AOVs). When talking about render passes it's important to distinguish the term render pass from render layer: rendering multiple passes means rendering multiple images concurrently with the same render command, while render layers are rendered separately.
(It's possible to define multiple render layers in a single mi file, but that launches multiple render commands. By the way, in mental ray terminology "multipass rendering" is a completely different feature...)


The most important difference between render layers and passes is that when rendering passes, all of the mi file parsing, scene setup, geometry tessellation, texture reads, light sampling and possibly some shading calculations are done only once, and the components of this calculation are written into separate images. This is much more efficient than doing the same computations over and over again just to build the same geometry representation in memory with slight shading changes (like rendering a diffuse or a specular pass). In a complex scene, shading a sample might take much less time than the geometric processing required to subdivide and displace the geometry. It is more efficient to squeeze out as much information in one go as you can than to repeat the same calculations; the overhead of the extra shading can be negligible compared to the scene setup time.


The framebuffer workflow is quite simple:

You declare a variable type for each framebuffer that you'd like to store (usually the type is scalar or color, either RGB or RGBA), write data into these framebuffers from your shaders while rendering, and optionally do computations on (or with) them in output shaders. That's it.

User and default framebuffers:

User framebuffers are empty and can be filled with any data you like. The default framebuffers, on the other hand, are filled automatically by mental ray, and it's up to you whether you'd like to write them to disk as images or not. Enabling the default framebuffers does not slow down the rendering itself; only writing the files takes extra time after the rendering is completed.

Default framebuffers are:
Depth: Distance from the camera as a floating point value.
Normal: Normal vector information.
Motion: Motion vectors when something animates and motion blur is enabled.
Label: Label (or tag) information. Requires user labels to be defined using the miLabel integer extra attribute on the transform or shape node of the objects in Maya.
Coverage: The coverage buffer stores how much objects with different labels occlude each other. It requires a sampling of at least min -1 / max 0 to work.

 

Enabling a framebuffer:

Up to mental ray 3.3 (Maya 6) there were only 8 user framebuffers; from 3.4 (Maya 6.5 and later) an unlimited number of buffers is available. The images are now stored on disk instead of in memory. The location of the temporary framebuffer files can be defined using the -fb_dir command line option; by default they go into the temp folder. After a render crash, files with names like fb000.0.118960 can be found in the temp folder: these are framebuffer files left over by mental ray.

To enable a buffer you have three different options:

- Maya 7 has a default mechanism to create buffers, but plain and simple: it does not work properly. The buffer itself can be enabled in the Render Settings / mental ray / Framebuffer / User Framebuffer frame, but only 8 are supported (a mental ray 3.3 legacy). In the camera's mental ray tab, the Output Passes frame lets you define the passes for the camera (set the data and file type), and in theory the file mode of a pass can be used to write a buffer to disk. The problem is that you can't set which buffer is written to which file, so it only works by trial and error. In reality this feature was written for processing buffers with output shaders, not for writing them, and it's not fully functional. (Even when it works, you have to set full image paths for the images, which is quite awkward when using network rendering.)

- When using the mental ray standalone version, custom text can be used to enable buffers. It's straightforward and you can control everything, but it requires a lot of manual editing or some scripting. To enable a framebuffer you have to add custom text in two places:

Custom options text (Render Settings / mental ray / Custom Entities / Custom Scene Text / Options Text). This creates the buffer but does not write it to disk. To enable framebuffer 1 with the RGBA data type (interpolated), write this into the text node:

frame buffer 1 "+rgba"

Custom cameras text (Render Settings / mental ray / Custom Entities / Custom Scene Text / Cameras Text).
To write the image from framebuffer 1 to the file framebufferTest.iff in IFF format:

output "fb1" "iff" "framebufferTest.iff"

- Use a geometry shader to enable the framebuffer at render time. This is the most versatile option: it is compatible with every command line flag of mental ray (for example -file_name to change the rendered image name) and quite simple to manage from Maya's GUI. The shader can be imported into scenes, the object carrying it can be attached to render layers, and you can have multiple objects in a scene with different passes defined.

You can find such shaders here:
FramebufferOutput
ctrl_buffers
shaders_p_3.0_maya

Interpolate:

Interpolation averages the information in a framebuffer. When it is disabled, each pixel stores the value of the frontmost object instead of the weighted average of the samples. Interpolation is enabled by putting a "+" sign before the data type (+RGB, +Z); a "-" sign means uninterpolated.

 

Writing data into a buffer:

Basically there are two different approaches to writing into framebuffers:

- Have a shader that does the shading and writes the data into multiple buffers on its own. This is a very simple solution for the user and does not require any shading-network building. One example of such a monolithic shader is Pavel Ledin (aka Puppet)'s megaTK material. The limitation of this method is that you need the source code of the shader if you'd like to add a new component to a buffer, and even if you have the code, it has to be recompiled every time you make such a modification.

- The other approach is to have a shader that does not "shade" anything, just samples the incoming connections and writes the sampled data into the specified buffers. By attaching this shader to the objects and connecting the components, it can sample any shader without the need to modify its code. This is quite an open solution, but it requires much more shading-network building, and you have to add at least one new shader for every material in the scene that should write to buffers.
You can find such a shader here; a minimal sketch of the idea follows.
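For illustration, here is a minimal sketch of such a pass-through writer in C. The shader name, the parameter names and the buffer numbering are made up for this example (and a matching .mi declaration would also be needed):

#include "shader.h"

/* Hypothetical parameter block: one color to pass through, one color
 * sampled from the shading network, and the target buffer index. */
struct fb_writer_params {
    miColor input;    /* shading result to pass through */
    miColor diffuse;  /* component connected from the material */
    int     buffer;   /* user framebuffer index to write into */
};

DLLEXPORT int fb_writer_version(void) { return 1; }

DLLEXPORT miBoolean fb_writer(
    miColor                 *result,
    miState                 *state,
    struct fb_writer_params *paras)
{
    miColor *input   = mi_eval_color(&paras->input);
    miColor *diffuse = mi_eval_color(&paras->diffuse);
    int      buffer  = *mi_eval_integer(&paras->buffer);

    /* evaluating the connection runs the upstream shader;
     * its result is stored in the requested user framebuffer */
    mi_fb_put(state, buffer, (void *)diffuse);

    *result = *input;  /* pass the material's result through untouched */
    return miTRUE;
}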

This method can be simplified by using phenomena to wrap the material shader and the buffer writer shader into a single node.
Editing such a phenomenon does not require compilation, so it's easier to modify, but it does require the courage to use a text editor.

There are some things that are either very hard or pointless to do with this approach. For example, putting different lights' illumination into different framebuffers cannot be done with this technique, while a material shader could easily do it from inside its illuminance loop (see the sketch below). Handling transparency is also much more complicated.
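A minimal sketch of the per-light case, following the usual mib-style light-list convention (i_lights / n_lights / lights); the shader name and the "2 + n" buffer numbering are assumptions made for this example, and the buffers must of course be declared:

#include "shader.h"

/* Hypothetical parameters using the standard light-array layout. */
struct perlight_buffers_params {
    int   i_lights;   /* offset of the first light */
    int   n_lights;   /* number of lights */
    miTag lights[1];  /* light array */
};

DLLEXPORT int perlight_buffers_version(void) { return 1; }

DLLEXPORT miBoolean perlight_buffers(
    miColor                        *result,
    miState                        *state,
    struct perlight_buffers_params *paras)
{
    int    i_l   = *mi_eval_integer(&paras->i_lights);
    int    n_l   = *mi_eval_integer(&paras->n_lights);
    miTag *light = mi_eval_tag(paras->lights) + i_l;
    int    n;

    result->r = result->g = result->b = 0.0f;
    result->a = 1.0f;

    for (n = 0; n < n_l; n++, light++) {
        miColor  sum = {0, 0, 0, 0};
        miColor  lcol;
        miVector dir;
        miScalar dot_nl;
        int      samples = 0;

        while (mi_sample_light(&lcol, &dir, &dot_nl, state,
                               *light, &samples))
            if (dot_nl > 0) {
                sum.r += dot_nl * lcol.r;
                sum.g += dot_nl * lcol.g;
                sum.b += dot_nl * lcol.b;
            }
        if (samples) {              /* average the area light samples */
            sum.r /= samples;
            sum.g /= samples;
            sum.b /= samples;
        }
        /* each light's diffuse term goes into its own user buffer */
        mi_fb_put(state, 2 + n, (void *)&sum);

        result->r += sum.r;
        result->g += sum.g;
        result->b += sum.b;
    }
    return miTRUE;
}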

Using mi_fb_put from a material:

From the programming point of view it's very easy to put data into a framebuffer: you can call the mi_fb_put function from the shader to write the information stored in a variable into a specified framebuffer. So, for example, to write the variable diffuseColor into user framebuffer number 2 you would use:

mi_fb_put( state, 2, (void*)&diffuseColor );

And to query the data of the current sample, for adding or mixing values, you can use mi_fb_get.
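Continuing the example above, a small sketch of the read-add-write pattern (the buffer number and the diffuseColor variable are taken from the mi_fb_put example):

/* accumulate into user framebuffer 2 instead of overwriting it */
miColor prev;
if (!mi_fb_get(state, 2, (void *)&prev))
    prev.r = prev.g = prev.b = prev.a = 0.0f;  /* nothing stored yet */
prev.r += diffuseColor.r;
prev.g += diffuseColor.g;
prev.b += diffuseColor.b;
mi_fb_put(state, 2, (void *)&prev);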

Things to look out for...

Adaptive sampling:

Adaptive sampling is based on the primary framebuffer, no matter how the object is shaded in the user framebuffers. Try to put the component with the most detail into the primary buffer to get the best sampling in all buffers, or use fixed sampling (min = max) or rapid scanline if you need even more precision.

Framebuffers and the rasterizer (rapid scanline)

Be aware that transparent objects are handled completely differently by the rasterizer:
transparency computations are not done in a depth-based order, and the compositing of the samples happens separately. Although the rendered image looks correct, user framebuffers are not composited properly; they are simply filled with the output data of the frontmost shader.

Currently there is no easy way to composite the data; the shader that writes to the framebuffers has to manage all of this. As a quick workaround, one can use trace_refraction calls instead of trace_transparent (see the sketch below), but that requires raytracing (a bit of a limitation), and it makes transparent motion-blurred objects "drag" the background (which can be a pain with highly transparent materials).
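A sketch of the workaround inside a transparency-handling shader: passing the unchanged ray direction to mi_trace_refraction gives a straight continuation, so it behaves like transparency but is shaded by the raytracer, and the buffers see what lies behind the surface:

miColor behind;

/* instead of the rasterizer-composited transparency call:
 *     mi_trace_transparent(&behind, state);
 * continue with a straight "refraction" ray, so the objects behind
 * the surface are shaded (and write to the buffers) right here */
if (!mi_trace_refraction(&behind, state, &state->dir))
    behind.r = behind.g = behind.b = behind.a = 0.0f;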

Maya shader glow:

Maya uses the first framebuffer (FB0) to store a mask for its shader glow. It does not care about the shaders' glow settings: if there is data in this framebuffer, glow is applied to the image. This behaviour can be turned off by disabling the Export Post Effects feature in the Render Settings / mental ray / Translation frame, or by executing setAttr "mentalrayGlobals.exportPostEffects" 0; from the script editor.

Null file type:

The null type is just a dummy file type: mental ray creates the framebuffer and the image on disk but deletes it when the rendering is completed. It can be used to define buffers that you'd like to use for computation but don't really need as an image.
(Write your own data into a buffer and read it back from a different shader, and so on. For example, a mask for an output shader can easily be rendered this way.)

 


And some things that are now history but were a pain with mental ray 3.4 / Maya 7:

Up to mental ray 3.5, the user framebuffers are not filtered by default. If the coverage framebuffer is enabled, then the filtering of the primary framebuffer (the rendered image) is used for all buffers.

User framebuffers had some annoying issues when the rapid scanline mode was used:
- The user framebuffers might be offset compared to the primary buffer, although they line up with each other.
- Framebuffers did not use the specified filtering; they always used the box filter, which looks really ugly with motion blur.