For a better reading experience, please download the PDF version from my GitHub. It is strongly recommended to use your PDF reader's bookmark feature to browse the table of contents; it makes the document much quicker to navigate.
HTTdesu's personal site: httdesu.github.io
This article is about 16,000 words (excluding code).
MME Reference Manual update log
0.1.0.0 (2010/9/18) first release
0.2.0.0 (2010/12/12) MME Ver0.20
・Added OFFSCREENRENDERTARGET semantics
・Expanded the information that can be obtained through the CONTROLOBJECT semantic
・Relaxed the restriction on the rendering order of objects referenced by the CONTROLOBJECT semantic
・Added EDGECOLOR semantics
・Fixed wrong description in VIEWPORTPIXELSIZE semantics
・Corrected some words
0.2.2.0 (2010/12/16) MME Ver0.22
・Changed the Miplevels setting method for RENDERCOLORTARGET and OFFSCREENRENDERTARGET semantics
0.2.3.0 (2010/12/20) MME Ver0.23
・Added a supplement to the semantics of CONTROLOBJECT
0.2.4.0 (2011/02/09) MME Ver0.24
・Added the special object name “self” that CONTROLOBJECT semantics can specify
・Added TEXTUREVALUE semantics
0.2.6.0 (2011/02/21) MME Ver0.26
・Corrected the description about the Draw=Geometry command
0.2.7.0 (2011/05/22) MME Ver0.27
・Added _INDEX semantics
・Added VertexCount variable and SubsetCount variable
・Added opadd variable
・Added supplementary notes on the TEXTUREVALUE semantic
0.2.8.0 (2012/03/26) MME Ver0.28
・Modified part of the description about the semantics of CONTROLOBJECT
0.3.0.0 (2012/09/19) MME Ver0.30
・Added the special value “main_default”, which can be specified in the DefaultEffect annotation of the OFFSCREENRENDERTARGET semantic
0.3.3.0 (2013/02/13) MME Ver0.33
・Added material morph semantics for textures (ADDINGTEXTURE, etc.)
・Added content related to the PMX model’s sub-texture (UseSphereMap, use_spheremap, use_subtexture)
・Added MATERIALTOONTEXTURE semantics
・Added GROUNDSHADOWCOLOR semantics
・Added MME_MIPMAP macro
Attention
・This document describes only the semantics and annotations that MMEffect can recognize. For a more detailed description of effect files, refer to the following links:
Effect file format:
Effect Format (Direct3D 9) – Win32 apps (docs.microsoft.com)
HLSL Reference:
Reference for HLSL – Win32 apps (docs.microsoft.com)
・The semantics and annotation scheme is modeled on NVIDIA’s SAS (Standard Annotations and Semantics):
https://www.nvidia.com/en-us/drivers/using-sas/
However, there is no guarantee that effect files written for FX Composer will work in MME.
Foreword
(Note: This section is not from the original reference document, but written by the translator.)
MMD is built on DirectX 9 and, by itself, offers very little freedom over rendering. MME is a third-party extension built on top of MMD and exists in the form of a plug-in.
MME’s greatest contribution is exposing MMD’s otherwise closed rendering variables to users, so that they can control the details of rendering by writing their own effect files and achieve the results they want. This greatly expands what MMD can express in rendering, but at the same time, the use of MME is limited by DirectX 9 on one hand and by MMD itself on the other.
To use the features of DirectX 9, users must learn HLSL, the High Level Shader Language. This is a shader language developed by Microsoft whose syntax closely resembles C. At most, users can use the features of Shader Model 3.0 (DirectX 9.0c). (Note: the exact DirectX version used by MMD is unknown, and Shader Model 3.0 may not be available, but Shader Model 2.0 is confirmed to work.) Of course, this generation has no order-independent transparency, tessellation, geometry shaders, or compute shaders, so don’t expect them.
In addition, MMD exposes only a limited amount of information to shaders through MME. Although this is generally more than enough for ordinary rendering, it is still very difficult to compute certain effects, such as global illumination, or to trigger effects on collisions. Also, since MMD itself is not programmable, some effects can only be approximated with compromises.
Even so, learning to write MME effect files is still a very effective way to study rendering techniques. Precisely because it offers a certain degree of freedom while imposing many restrictions, it motivates learners to dig up the information needed to overcome those restrictions and thus study rendering in greater depth.
Before reading this document, readers should have a working knowledge of HLSL and some background in computer graphics; most technical terms and rendering principles are not explained here. Readers without that background are advised to read Section 6 first. Due to Zhihu restrictions, the HLSL code cannot be syntax-highlighted.
Thanks to @Sakuya Ametsuru for his help with the translation.
technique and pass
Composition
An effect file is built out of techniques and passes arranged in a hierarchy.
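A minimal sketch of such a hierarchy is shown below; the identifier names (MainTec, DrawObject, VS, PS) and the constant output color are placeholders chosen for illustration, not anything required by MME:

```hlsl
// Minimal sketch of the technique/pass hierarchy of an effect file.
// Identifier names (MainTec, DrawObject, VS, PS) are arbitrary placeholders.
float4x4 WorldViewProjMatrix : WORLDVIEWPROJECTION;

float4 VS(float4 pos : POSITION) : POSITION {
    // Transform the vertex into clip space.
    return mul(pos, WorldViewProjMatrix);
}

float4 PS() : COLOR {
    // Output a constant color.
    return float4(1.0, 0.0, 0.0, 1.0);
}

technique MainTec < string MMDPass = "object"; > {
    // A technique can contain one or more passes, executed in order.
    pass DrawObject {
        VertexShader = compile vs_2_0 VS();
        PixelShader  = compile ps_2_0 PS();
    }
}
```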
More than 120 million pixels means a frightening number of fragment shader invocations. To meet this performance requirement, GPUs use a parallel architecture: at any given moment, hundreds or thousands of units are shading different pixels. Under this architecture, when a shader wants to read the value of a neighboring pixel, that pixel has very likely not been computed yet. This creates some obvious limitations.
For example, in real life, when we look at a light we see a halo around it. We will not discuss how this halo is produced here, only how it looks: it appears around a bright light source and its brightness falls off with distance from the source until it disappears. This effect clearly cannot be achieved in the fragment shader alone, because each pixel cannot access its surrounding pixels while it is being rendered, and therefore cannot know its distance from the light source.
In such cases, a technique called post-processing is generally used. Post-processing, as the name suggests, is processing performed after the first rendering pass: the result of the first pass is saved to a texture and fed as input to the pipeline for a second rendering pass, which is the post-processing. Because the result of the first pass is now fixed in the form of a texture, each pixel in the second pass can read the values of its surrounding pixels simply by sampling that texture.
Note that the algorithms used in post-processing are generally regarded not as rendering techniques but as image-processing techniques; they are usually covered in detail in computer vision courses.
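As a rough illustration of this setup, the following is a trimmed sketch modeled on the common MME post-effect layout; the identifiers (ScnMap, ScnSamp, PostTec, DrawBuffer) are placeholders, and details such as the half-texel offset and anti-aliasing are omitted:

```hlsl
// Trimmed sketch of a post-processing effect (not a complete template):
// the scene is first rendered into ScnMap, then a full-screen pass samples
// it back as an ordinary texture. Identifier names are placeholders.
float Script : STANDARDSGLOBAL <
    string ScriptOutput = "color";
    string ScriptClass  = "scene";
    string ScriptOrder  = "postprocess";
> = 0.8;

// Off-screen color/depth targets that receive the first-pass rendering.
texture ScnMap : RENDERCOLORTARGET <
    float2 ViewPortRatio = { 1.0, 1.0 };
    string Format = "A8R8G8B8";
>;
sampler ScnSamp = sampler_state {
    Texture   = <ScnMap>;
    MinFilter = LINEAR;
    MagFilter = LINEAR;
    AddressU  = CLAMP;
    AddressV  = CLAMP;
};
texture DepthBuffer : RENDERDEPTHSTENCILTARGET <
    float2 ViewPortRatio = { 1.0, 1.0 };
>;

float4 ClearColor = { 0, 0, 0, 0 };
float  ClearDepth = 1.0;

void VS_Draw(in  float4 pos : POSITION, in  float2 uv : TEXCOORD0,
             out float4 oPos : POSITION, out float2 oUv : TEXCOORD0) {
    oPos = pos;   // the full-screen quad is already in clip space
    oUv  = uv;
}

float4 PS_Draw(float2 uv : TEXCOORD0) : COLOR {
    // The scene is now an ordinary texture, so neighboring pixels can be
    // sampled freely here (blur, halo, edge detection, and so on).
    return tex2D(ScnSamp, uv);
}

technique PostTec < string Script =
    "RenderColorTarget0=ScnMap;"
    "RenderDepthStencilTarget=DepthBuffer;"
    "ClearSetColor=ClearColor;"
    "ClearSetDepth=ClearDepth;"
    "Clear=Color;"
    "Clear=Depth;"
    "ScriptExternal=Color;"      // pass 1: MMD renders the scene into ScnMap
    "RenderColorTarget0=;"
    "RenderDepthStencilTarget=;"
    "Pass=DrawBuffer;"           // pass 2: the post-processing itself
; > {
    pass DrawBuffer < string Script = "Draw=Buffer;"; > {
        VertexShader = compile vs_2_0 VS_Draw();
        PixelShader  = compile ps_2_0 PS_Draw();
    }
}
```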
Implementation ideas for some common effects
Shadow
We all know that the GPU cannot directly trace light paths (otherwise ray tracing would long since have been standard in games), yet dynamic shadows are everywhere in games. The usual way to compute dynamic shadows is to render a depth map from the light source. Readers can think of it as emitting rays from the light in every direction and recording, for each ray, the distance to the first object it hits. When rendering, by comparing the distance from the current point to the light with the value stored in the light’s depth map for that direction, we can tell whether the current point is the first thing the light hits, and therefore whether it lies in shadow (a sketch of this comparison follows below).
Obviously, this approach does not work for translucent occluders. In addition, limited by the resolution and precision of the depth map, shadows far from the light source are prone to jagged edges.
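The comparison step might look roughly like the following sketch; LightViewProjMatrix, ShadowMapSamp and the bias value are assumptions, and rendering the depth map itself requires a separate pass from the light’s point of view:

```hlsl
// Sketch of the shadow-map comparison step.
// Assumes the depth map has already been rendered from the light;
// LightViewProjMatrix, ShadowMapSamp and the bias are placeholders.
float4x4 LightViewProjMatrix;   // world -> light clip space
sampler  ShadowMapSamp;         // depth map rendered from the light

float ShadowFactor(float3 worldPos) {
    // Project the current point into the light's clip space.
    float4 lightPos = mul(float4(worldPos, 1.0), LightViewProjMatrix);
    lightPos.xyz /= lightPos.w;

    // Convert from clip space [-1, 1] to texture coordinates [0, 1].
    float2 uv = lightPos.xy * float2(0.5, -0.5) + 0.5;

    // Depth of the closest occluder seen from the light in this direction.
    float occluderDepth = tex2D(ShadowMapSamp, uv).r;

    // If the current point is farther from the light than the recorded depth
    // (plus a small bias against self-shadowing), it is in shadow.
    const float bias = 0.002;
    return (lightPos.z - bias > occluderDepth) ? 0.0 : 1.0;
}
```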
Bloom
Bloom is the halo around a light source mentioned above. The usual implementation is to threshold the brightness of the rendered image during post-processing to find the brighter pixels, Gaussian-blur them to spread the halo, and finally add the blurred result back onto the original image.
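A sketch of the extraction and compositing steps is shown below; the Gaussian blur itself would be a separate pass, and ScnSamp, BlurSamp and Threshold are assumed inputs:

```hlsl
// Sketch of the bloom idea: extract bright pixels, blur them in a separate
// pass, then add the blurred result back onto the original image.
// ScnSamp, BlurSamp and Threshold are placeholder assumptions.
sampler ScnSamp;                 // original rendering
sampler BlurSamp;                // bright pixels after Gaussian blur
static const float Threshold = 0.8;

// Step 1: keep only pixels brighter than the threshold.
float4 PS_Extract(float2 uv : TEXCOORD0) : COLOR {
    float4 color = tex2D(ScnSamp, uv);
    float brightness = max(color.r, max(color.g, color.b));
    return (brightness > Threshold) ? color : float4(0, 0, 0, 0);
}

// Step 3 (after the Gaussian blur pass): composite the halo over the scene.
float4 PS_Composite(float2 uv : TEXCOORD0) : COLOR {
    return tex2D(ScnSamp, uv) + tex2D(BlurSamp, uv);
}
```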
Self-illumination
A self-illuminating material does not actually emit light; it merely appears to glow thanks to the bloom effect.
Line art
A common approach, sketched below, is to compute the first-order partial derivatives of the depth buffer and of the rendered image pixel by pixel, treat pixels with large changes as edge candidates, and then take a weighted combination of the two results to decide whether a pixel is an edge. This algorithm does not work particularly well, of course.
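A sketch of this simple test, applied to the depth buffer only, might look like the following; DepthSamp and EdgeThreshold are assumed inputs:

```hlsl
// Sketch of the simple edge test: approximate the first-order derivative
// of the depth buffer with neighboring samples and threshold it.
// DepthSamp and EdgeThreshold are placeholder assumptions.
sampler DepthSamp;                         // depth written in a previous pass
float2  ViewportSize : VIEWPORTPIXELSIZE;  // supplied by MME
static float2 TexelSize = 1.0 / ViewportSize;
static const float EdgeThreshold = 0.01;

float4 PS_Edge(float2 uv : TEXCOORD0) : COLOR {
    float d  = tex2D(DepthSamp, uv).r;
    float dx = tex2D(DepthSamp, uv + float2(TexelSize.x, 0)).r - d;
    float dy = tex2D(DepthSamp, uv + float2(0, TexelSize.y)).r - d;

    // A large depth change between neighbors is treated as an edge (black line).
    float edge = (abs(dx) + abs(dy) > EdgeThreshold) ? 0.0 : 1.0;
    return float4(edge, edge, edge, 1.0);
}
```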
A slightly more complicated method is to first convert the rendered image to grayscale, then compute the gradient magnitude and direction of the grayscale image pixel by pixel and filter the gradient. The result also needs non-maximum suppression: along the gradient direction, keep only the locally largest gradient value and discard the suboptimal ones, which yields precise edges.
Ripple
A ripple can generally be treated as a sine wave, which can be written as A*sin(2π/L*d − ω*t + φ0): A is the amplitude, L the wavelength, d the distance from the wave source, ω the rate at which the phase advances, and φ0 the initial phase. As t changes, the value at every point on the plane changes as well. Simply applying this value as a uv offset produces a reasonably convincing ripple. To go further, you would also need to account for energy attenuation in the amplitude and use the Fresnel equations to compute the uv offset properly.
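A sketch of such a uv offset is shown below; RippleCenter, Amplitude, Wavelength and Omega are made-up parameters, ScnSamp is an assumed input, and the elapsed time is taken from MME’s TIME semantic:

```hlsl
// Sketch of a simple ripple: offset the uv with a sine wave spreading out
// from RippleCenter. All wave parameters are placeholders; 'time' comes
// from MME's TIME semantic (elapsed time in seconds).
float   time : TIME;
sampler ScnSamp;                          // image to be distorted (assumed)

static const float2 RippleCenter = float2(0.5, 0.5);
static const float  Amplitude    = 0.01;  // A
static const float  Wavelength   = 0.1;   // L
static const float  Omega        = 4.0;   // phase advance per second

float4 PS_Ripple(float2 uv : TEXCOORD0) : COLOR {
    float2 dir    = uv - RippleCenter;
    float  d      = length(dir);
    float2 radial = (d > 1e-5) ? dir / d : float2(0.0, 0.0);

    // A * sin(2*pi/L * d - omega*t): the wave travels away from the center.
    float offset = Amplitude * sin(6.2831853 / Wavelength * d - Omega * time);

    // Push the uv along the radial direction by the wave value.
    return tex2D(ScnSamp, uv + radial * offset);
}
```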
Ocean surface simulation
A simple ocean surface can be made by randomly superimposing multiple such water waves. If you want it to look truly realistic, however, go learn fluid mechanics and the Fourier transform.
Circle of confusion
The circle of confusion is the circular blur that appears when an object is not focused exactly onto the film. To simulate it, the distance of the point rendered by each pixel can be used to compute the size of that pixel’s circle of confusion, and its color is then distributed over all the pixels the circle covers. When every pixel has been rendered, the accumulated result is quite realistic.
Of course, implementing it that way is very slow. A cheaper trick is to Gaussian-blur the normal rendering and then use the depth buffer to interpolate linearly between the sharp and blurred images, as sketched below.
You can also call this effect depth of field.
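A sketch of the cheap method might look like the following; ScnSamp, BlurSamp, DepthSamp, FocusDepth and FocusRange are assumed inputs:

```hlsl
// Sketch of the cheap depth-of-field: blend between the sharp image and a
// pre-blurred copy according to how far each pixel is from the focus depth.
// All samplers and the focus parameters are placeholder assumptions.
sampler ScnSamp;     // sharp rendering
sampler BlurSamp;    // Gaussian-blurred copy of the same image
sampler DepthSamp;   // linear depth of each pixel

static const float FocusDepth = 0.3;   // depth that stays sharp
static const float FocusRange = 0.2;   // how quickly the blur ramps up

float4 PS_DoF(float2 uv : TEXCOORD0) : COLOR {
    float depth = tex2D(DepthSamp, uv).r;

    // 0 at the focus plane, 1 when far enough away to be fully blurred.
    float blurAmount = saturate(abs(depth - FocusDepth) / FocusRange);

    return lerp(tex2D(ScnSamp, uv), tex2D(BlurSamp, uv), blurAmount);
}
```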
Mirror
Re-render the scene from the camera position reflected across the mirror plane.