    Shader Distortion Real-Time Translucency Rendering & Post-Process Effects

    Hi there,

    So I was watching this video here:
    http://www.youtube.com/watch?v=yA0LVnQ4ls8&hd=1

    And after watching it, I have a few questions:
    1) How does the developer accomplish that bottom-to-top "rendering" effect from 0:08 to 0:11? And how does he manage to make the object's opacity trail so closely behind the rendering effect?
    2) How does the developer accomplish the distortion and the real-time "material transition" from 0:13 to 0:22? Especially as it gets closer to the force field?

    Secondly, I was watching this other movie created by the same guy:
    http://www.youtube.com/watch?v=2X1vFVGAlwk&hd=1

    Now, the guy explains in very vague terms how he does this, but I'm looking for a more concrete explanation, and/or perhaps a snapshot of an example shader network that would roughly accomplish what he's doing, if possible?

    And lastly, if you look at this video:
    http://www.youtube.com/watch?v=juB-PHKRG9s&hd=1

    How did this artist do:
    1) The Parallax Occlusion shader? (Linear + Binary raymarching) -- Inline HLSL + material nodes? First off, what is "Inline HLSL" and what does it mean in the context of UDK's material editor? And what is Linear & Binary Raymarching? Anyone understand these concepts or how to implement them, or know of a good website that explains the concepts in great detail?
    2) Post-Process: Electrical Interference (FrameBuffer Distortion) -- What is "frame buffer distortion" and how is that controlled through the material editor?
    3) Highlight Effect -- It says here that it uses material and post-process nodes. Is there a "kismet" editor for post-process chains as well? And if so, how would he create the material to get applied to a post-processing effect?
    4) Electrical Field vision -- Same here... how does he accomplish this sort of desaturated, inverted color? It seems like in the material editor he'd use a desaturation node, and then some kind of blue color filter. But what else goes into making something like this? Especially with that blooming halo only around select objects, as opposed to everything? (Besides, wouldn't a desaturation node affect the whites as well? Or does he negate some of this with a lerp? I've sketched what I mean right after this list.)
    5) Post-process effect -- Thermal Vision: How would one accomplish this effect? I'm at a loss for how to create the posterized over-exposed vision here.
    6) Disoriented -- How would a multi-vision effect like this be created?
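
    For 4) above, to show what I mean about the desaturation + lerp idea, here's the kind of thing I'm picturing in a custom HLSL node. This is just my guess at it, and all the names here are made up:

    Code:
    // My rough guess for 4): desaturate the scene, tint it blue, then
    // lerp back toward the original color for bright pixels so the
    // whites don't get crushed. All names here are invented.
    float3 ElectricalFieldColor(float3 SceneColor, float3 BlueTint, float DesatAmount)
    {
        // Standard luminance weights
        float Luma = dot(SceneColor, float3(0.3, 0.59, 0.11));
        float3 Desaturated = lerp(SceneColor, Luma.xxx, DesatAmount);
        float3 Tinted = Desaturated * BlueTint;
        // Preserve highlights: brighter pixels keep more of the original
        float HighlightMask = saturate(Luma * 2.0 - 1.0);
        return lerp(Tinted, SceneColor, HighlightMask);
    }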

    The Youtube video gives vague indications for everything, but does not supply anything concrete. Does anyone know how he would have set up the shader node networks for each? And does anyone understand what he's talking about when he refers to post-process nodes? (Again, referring to my earlier question of there being a kismet editor for post-process chains.) And then how does he go about setting up each of the effects in a post-process volume?

    I would greatly appreciate people's insight into how stuff like this gets made.

    Thank you.

    #2
    Originally posted by ADayInForever
    Secondly, I was watching this other movie created by the same guy:
    http://www.youtube.com/watch?v=2X1vFVGAlwk&hd=1
    That flowing water effect is awesome, isn't it? I recognized the flowing water from a tutorial that I bookmarked last month...
    Animating Water Using Flow Maps - http://graphicsrunner.blogspot.com/2...flow-maps.html
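
    The core trick from that tutorial, as I understand it, boils down to something like this in HLSL. Just a rough sketch of the technique, not the author's actual code:

    Code:
    // Flow-map water: a flow texture stores a 2D direction per texel,
    // and the normal map UVs get pushed along that direction over time.
    // Two samples half a cycle apart are cross-faded so you never see
    // the stretching reset.
    float3 FlowWaterNormal(sampler2D FlowMap, sampler2D NormalMap,
                           float2 UV, float Time, float FlowStrength)
    {
        // Flow vector is stored in [0,1], remap to [-1,1]
        float2 Flow = tex2D(FlowMap, UV).rg * 2.0 - 1.0;

        float Phase0 = frac(Time);
        float Phase1 = frac(Time + 0.5);

        float3 N0 = tex2D(NormalMap, UV + Flow * Phase0 * FlowStrength).rgb;
        float3 N1 = tex2D(NormalMap, UV + Flow * Phase1 * FlowStrength).rgb;

        // Weight each sample by how far it is from its own reset point
        float Blend = abs(Phase0 - 0.5) / 0.5;
        return lerp(N0, N1, Blend);
    }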

      #3
      Originally posted by Godling
      That flowing water effect is awesome, isn't it? I recognized the flowing water from a tutorial that I bookmarked last month...
      Animating Water Using Flow Maps - http://graphicsrunner.blogspot.com/2...flow-maps.html
      Thanks so much for the insight! Yes, this is very helpful.

        #4
        I think for the 3rd part of my query, micahpharoh's tutorial on post-process chains was more than helpful in pointing me in the right direction for electrical interference and for thermal vision. (Although if you are lurking on these forums and seeing this thread, micahpharoh, could you shed some light and maybe answer some questions or provide some tutorials on how stuff like this might be accomplished? The tutorials I've seen thus far are pretty awesome.)

        I just wish I knew how this guy created the disoriented, highlight, and electrical interference post-processing effects...

        Oh yeah, and how the first artist I mentioned in my prior post managed to "stretch" the ball mesh as it came closer to the force field. I know the mesh itself isn't being physically altered or anything; it's the material surface. So I think of something like destdepth... but I wouldn't even begin to know how to set it up to do anything close to what this guy is doing. And even then, the warping behavior is very specific and localized, which makes everything all the more curious.

          #5
          I guarantee he is using world position offset to do a large chunk of those effects. If you need some pointers, I may be able to help you... it's been a while since I messed with them, though.
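
          The simplest version is just pushing the verts out along their normals, with a mask and a strength you can animate. In a custom node it would look roughly like this (a sketch from memory, the names are made up):

          Code:
          // World position offset sketch: push each vertex along its
          // world-space normal, scaled by a mask (texture, distance,
          // whatever) and a strength parameter you could drive from
          // Kismet through a material instance. Plug the result into
          // the material's WorldPositionOffset input.
          float3 ExpandOffset(float3 WorldNormal, float Mask, float Strength)
          {
              return WorldNormal * Mask * Strength;
          }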

            #6
            managed to "stretch" the ball mesh
            Vertex shaders can do this. I think, like the above post says, via offsets.

            The UDK comes with a lava-style vertex shader/material. While playing with this I got it to do some crazy things, like expand. I assigned it to the player pawn for fun, and as I moved about, depending on the amount of light I was in, it changed colour, animated, expanded, contracted. Quite insane, tbh.

              #7
              3dimentia -- I'd like that very much. What pointers do you recommend? I'm familiar with most of the material editor, but worldpositionoffset, cameraoffset, depthbiasblend, destdepth and destcolor are among the few things I don't know that well.

              In the holographic room demo, I want to know how he might start deforming the geometry SPECIFICALLY when it's close to touching a wall. I know part of the equation is triggers and kismet. But then... what's he doing? Is he using a lerp to switch worldpositionoffset on and off between states? And what specifically might he be doing to deform the mesh in this very specific way... on THAT particular side of the sphere only... and then show 2 different materials blending into each other at very context-specific values, i.e. depending on how close the ball gets to the force field? How can he get kismet to determine how far away the ball is from the forcefield, and then set the constant variable on the lerp node(s) accordingly? Or is he doing something else? Is he using depthbiasedalpha, by any chance, to make the transitions very specific?

              And then there's the question of where the ball first spawns when everything is activated. The ball goes from opaque to transparent (probably using a black and white mask) with a panning node. But the thing is, it wouldn't make sense to use a straightforward vertical panning node. I know that if I try a panning node across your average sphere's diffuse/normal/specular map (either vertical or horizontal), the deformations appear at random spots on the sphere because of how the UVs are set up. So the next question becomes how to pan uniformly across the mesh like that; my rough guess is sketched below.
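
              To show what I mean, here's my rough guess, with every name invented: skip the UVs entirely and build the mask from world position instead.

              Code:
              // My guess at the bottom-to-top reveal: build the opacity
              // mask from world-space Z instead of the UVs, so the sweep
              // moves uniformly up the mesh no matter how the sphere is
              // unwrapped. RevealHeight would be animated over time (e.g.
              // from Kismet); EdgeWidth softens the transition band.
              float RevealMask(float3 WorldPosition, float RevealHeight, float EdgeWidth)
              {
                  // 0 below the sweep line, 1 above it, soft edge between;
                  // plug into opacity (and maybe an emissive edge glow)
                  return saturate((WorldPosition.z - RevealHeight) / EdgeWidth);
              }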

              kg777: If you still have your modified network... could I please see a snapshot of it, by any chance? I'd like to see how you changed it to do the odd deformations. Thank you kindly...

              Anyways, thanks to the both of you for the assistance and the suggestions!

                #8
                Hi,

                I don't know if I'm looking at this from the wrong direction (Beginner Alert), but as said before... World Position might play a big role in this. There is a good video tutorial on youtube and udk-scriptures that shows you how to build a snow material that always aligns with the world's Z-axis (meaning: snow always stays on top of the rock). Now I imagine it could basically work like this:

                Each Holodeck wall is linked through Kismet to the material. The ball hits wall (0,1,0,0) and the node setup creates an effect that interpolates along the Y-axis. The rest is good use of effects.
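
                Maybe something like this in a custom node. Just a sketch of the idea, assuming Kismet feeds the wall's Y position in as a scalar parameter:

                Code:
                // Blend strength grows as the surface gets close to the
                // wall plane. WallY comes from Kismet via a parameter;
                // the result could drive the lerp between the two
                // materials and the world position offset amount.
                float WallProximity(float3 WorldPosition, float WallY, float Range)
                {
                    float Distance = abs(WorldPosition.y - WallY);
                    // 1 right at the wall, fading to 0 at Range units away
                    return saturate(1.0 - Distance / Range);
                }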

                I just don't know how to make the wall detect how far the ball has moved into it.

                Don't know if I made sense here.

                  #9
                  Originally posted by ADayInForever
                  (Although if you are lurking on these forums and seeing this thread, micahpharoh, could you shed some light and maybe answer some questions or provide some tutorials on how stuff like this might be accomplished? The tutorials I've seen thus far are pretty awesome.)
                  What exactly do you want to know?

                  The distortion shown at the 0:11 or so mark on the UDK Game Shaders video would actually be really easy to make by the look of it, but past that it gets a lot more complicated. A lot of those effects could be done in any number of ways. It could be a pure post process setup, or it could be a combination of any number of other things.

                    #10
                    Originally posted by SethNemo
                    Hi,

                    I don't know if I'm looking at this from the wrong direction (Beginner Alert), but as said before... World Position might play a big role in this. There is a good video tutorial on youtube and udk-scriptures that shows you how to build a snow material that always aligns with the world's Z-axis (meaning: snow always stays on top of the rock). Now I imagine it could basically work like this:

                    Each Holodeck wall is linked through Kismet to the material. The ball hits wall (0,1,0,0) and the node setup creates an effect that interpolates along the Y-axis. The rest is good use of effects.

                    I just don't know how to make the wall detect how far the ball has moved into it.

                    Don't know if I made sense here.
                    You did. And that actually makes a lot of sense. It never occurred to me to think of using worldpositionoffset as a sort of coordinate rotation or translation function.

                    I'll have to play around with that. Thanks a lot for the explanation!

                      #11
                      Originally posted by ADayInForever
                      You did. And that actually makes a lot of sense. It never occurred to me to think of using worldpositionoffset as a sort of coordinate rotation or translation function.

                      I'll have to play around with that. Thanks a lot for the explanation!
                      No problem. I'm surprised it helped. Let us know if you create something similar.

                        #12
                        Originally posted by micahpharoh
                        What exactly do you want to know?

                        The distortion shown at the 0:11 or so mark on the UDK Game Shaders video would actually be really easy to make by the look of it, but past that it gets a lot more complicated. A lot of those effects could be done in any number of ways. It could be a pure post process setup, or it could be a combination of any number of other things.
                        Thanks for replying. I'll tell you specifically everything I want to know.

                        1) Parallax Occlusion Shader. I imagine he's using a bumpoffset node with his normal map, right? Is he using a worldpositionoffset to make the normalmap + bumpoffset appear more prominent based on cameraworldposition? But I'm sure there has to be more to it than that. I'd like to know some shader network details about how he might set something like this up. For example, if I wanted to set a custom attribute in the material instance editor to determine how much bump shows up in a normal map, how might something like that work?

                        Also, I need a vocabulary explanation of what this guy's talking about here. What does "Parallax Occlusion" mean? Occlusion suggests something is being "hidden" from view. In this particular case, parallax... the background moving slower than the foreground... is somehow being occluded from view? And then there's this technique of "Linear + Binary Raymarching". I have no idea what that is, and a Google search isn't turning up much of substance either. Who invented the concept of raymarching? What sort of mathematical concepts does this sort of raymarching employ?

                        Also, what does the "inline" part of "Inline HLSL" mean? That this guy created a custom node full of HLSL code that works alongside the regular mat nodes that UDK supplies by default?

                        2) Electrical Field Distortion:
                        How is he offsetting the object's world space and still maintaining complete translucency? What would a shader network for something like that look like? You showed something very briefly in your 2nd post-processing chain tutorial, around 15:20 or so. You called it a "depth-based distortion." Was that a pre-made UDK material, or a custom material that you used to begin with? What I'd like to know is what was in the shader network you used to get the depth-based distortion in that video. I think from there I can deduce how to do something like an Electrical Field Distortion. Note: I don't want to copy these effects; I'm just curious how they work so that I can brainstorm and come up with my own stuff involving effects like this for my own projects.

                        And then there's the issue of "Framebuffer Distortion" -- what does "Framebuffer" mean? Is that some kind of a queue of frames that have not yet been rendered? I'm confused about what this term means, as well as the context in which it's used. (Not to mention any relevant math applied here.)

                        3) Surface Shader: River
                        I know how the Fresnel effect factors into the intensity of the reflective surface, but I'm a little confused about what this person means by "absorption" -- is he referring to a depthbiasedalpha node? What really has me confused is how he's able to make a translucent material work hand-in-hand with a normal map, given that the two often don't get along very well. Unless it's an opaque material, and he's using destcolor to give the illusion of color in the river instead of an actual diffuse map?

                        4) Highlight Effect
                        Seems rather straightforward: you could probably adjust the shadow, mid-tone and highlight settings to get the values he's used... but then... that wouldn't explain how he blows out the shapes of specific objects with an emissive white light, or how he sets up the gradient effect. (I know this can be done with a red dominantdirectional light and a fog emitter... but can it be done with just material and post-process chains?) Does the material work such that it's given a white emissive if the object exists beyond a clamped value range? Or is there more to it than that? Or if I'm not thinking about this correctly, how might this material network be set up and then applied to a PP chain?

                        5) Electrical Field Vision
                        Based on your tutorial, he's using some kind of posterization effect... but only on specific objects below a certain light value. Really dark objects become more posterized, while lighter objects become less so. And then there's the issue of halos over extremely bright objects. If I were writing a shader network, I'd be inclined to put in an "if" node that checks whether the light value is above a certain range and, if so, multiplies that value by some arbitrary value X to get the halo. (I've sketched my rough guess at the posterize part after this list.) But then he's also got a vertical grain applied over the objects above that light value, which sort of complicates my understanding of the effect. Especially since the striping stays vertical relative to the viewer despite camera translation and rotation. How would he get vertical striping only on the objects?

                        6) Thermal Vision
                        Again the posterization effect is applied here. But the highlight values are magnified, and the posterization is preserved even at the higher values... although not nearly as much. And then there's the purple halo effect surrounding each of the figures that gives off a heat signature. How does the shader tell the figures apart from the rest of the scene when called in post-processing? And how is this posterization effect created?

                        7) PostProcess Effect: Disoriented
                        This is the kicker. How is he creating the illusion of 3 people at once? I know he's probably using a sine wave to alternate between values of "more visually coherent" and "less visually coherent". But I'm confused as to how he's able to make multiple instances of the same object appear in the scene. How would this "blur" effect be created? And if worldpositionoffset is the relevant plug, then what value (or values) are changed and with what nodes to create the illusion of multiple object instances?

                        That's what I'd like to know...
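
                        And since I keep going on about posterization in 5) and 6), here's the kind of node I have in mind, so you can see what I'm picturing. Pure guesswork on my part, with made-up names:

                        Code:
                        // My guess at the posterize part of 5) and 6):
                        // quantize the scene color into a fixed number of
                        // bands, then mask halo/grain layers on top by
                        // luminance.
                        float3 Posterize(float3 SceneColor, float Levels)
                        {
                            // floor() snaps each channel down to its band
                            return floor(SceneColor * Levels) / Levels;
                        }

                        // Halo mask idea: anything brighter than a threshold
                        // ramps up quickly toward 1
                        float HaloMask(float3 SceneColor, float Threshold)
                        {
                            float Luma = dot(SceneColor, float3(0.3, 0.59, 0.11));
                            return saturate((Luma - Threshold) * 4.0);
                        }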

                        Think you can help with that? Or do you need more information? Or...?

                          #13
                          What if the Disoriented effect is created with several SceneRenderTarget nodes that are made to shift around somehow, and the magic actually happens in the way they are combined / the way they interpolate? I've got no idea how to swirl the images around except for standard panning and rotation. Maybe it's possible to link the Panning node to some params that define the node's directional values? If yes, then you could link this to Kismet and make it go in circles. If it works, it would work for size and stuff like that too. Might be that this is just a workaround.

                          I've never heard of Raymarching, but I know what Parallax Occlusion is. UDN has a very helpful tutorial in its "Engine Gems". PO materials are quite nasty for performance, but you can make those stone walls from the vid.
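
                          The basic parallax trick, at least the simple non-raymarched version I know, is just offsetting the UVs along the view direction by the heightmap. Roughly like this (my own sketch, not the gem's code):

                          Code:
                          // Simple parallax offset: shift the UVs along the
                          // tangent-space view direction by the sampled
                          // height. The full PO version refines this with
                          // many ray steps, which is where the performance
                          // cost comes from.
                          float2 ParallaxUV(sampler2D HeightMap, float2 UV,
                                            float3 ViewDirTangent, float HeightScale)
                          {
                              float Height = tex2D(HeightMap, UV).r;
                              // xy/z gives the slope of the view ray across the surface
                              return UV + (ViewDirTangent.xy / ViewDirTangent.z) * Height * HeightScale;
                          }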

                            #14
                            Originally posted by SethNemo
                            What if the Disoriented effect is created with several SceneRenderTarget nodes that are made to shift around somehow, and the magic actually happens in the way they are combined / the way they interpolate? I've got no idea how to swirl the images around except for standard panning and rotation. Maybe it's possible to link the Panning node to some params that define the node's directional values? If yes, then you could link this to Kismet and make it go in circles. If it works, it would work for size and stuff like that too. Might be that this is just a workaround.

                            I've never heard of Raymarching, but I know what Parallax Occlusion is. UDN has a very helpful tutorial in its "Engine Gems". PO materials are quite nasty for performance, but you can make those stone walls from the vid.
                            Checked out PO... I've seen most of the gems, but I can't believe I missed that!

                            As for the Distortion: SceneRenderTarget? I'm not familiar with that node... not finding it in the material editor or the actor classes list. If you're talking about a scenecapturereflectactor or something, then hmmm... maybe. I have seen a scenecapturereflectactor in action on a RenderTexture2D surface, but when I've tried it, there's always been an incredible amount of aliasing in the reflection. Just doesn't seem likely. (Unless there's an option to use AA in scene capture actors that I don't know about?)

                            Other than that, I'm looking at these scenetexture and scenedepth nodes I found in the material editor, and doing a bit of background research on them on UDN. It looks like this might answer a few of my questions...

                              #15
                              http://udn.epicgames.com/Three/MaterialsCompendium.html
                              This is a fantastic resource when you're trying something new in the material editor.

                              Check SceneTexture for the double vision. Also take a look at the Gem on Sobel Edge Detection to see how you can take the scene texture and mess with it a bit. I think it will do the trick.
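
                              For the double vision, the gist is sampling the scene texture more than once with shifted UVs and blending the results. Something roughly like this sketch (not pulled from that video, and the names are mine):

                              Code:
                              // Multi-vision sketch: sample the scene a few
                              // times with small UV offsets that drift over
                              // time, then average. In UDK this would sit in
                              // a Custom node fed by a SceneTexture sample.
                              float3 MultiVision(sampler2D SceneTex, float2 UV, float Time, float Spread)
                              {
                                  // Offsets wobble on sines so the ghost images drift
                                  float2 Offset1 = float2(sin(Time * 1.3), cos(Time * 0.9)) * Spread;
                                  float2 Offset2 = float2(sin(Time * 0.7 + 2.0), cos(Time * 1.1 + 1.0)) * Spread;

                                  float3 C0 = tex2D(SceneTex, UV).rgb;
                                  float3 C1 = tex2D(SceneTex, UV + Offset1).rgb;
                                  float3 C2 = tex2D(SceneTex, UV + Offset2).rgb;

                                  return (C0 + C1 + C2) / 3.0;
                              }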

                              I'm still working on some of the other things for you. You hurled a lot at me, and I can't answer it all, but I'll do what I can.
