
Polybumping, outline shadows, and Ultra Detail Textures


    #1
    Polybumping, outline shadows, and Ultra Detail Textures

    Mods, delete this thread.

    #2
    Re: Polybumping, outline shadows, and Ultra Detail Textures

    Originally posted by Axxron_HelmDeep
    1) I have this uber cool idea to make almost every other color polygon a static mesh; that way I can emulate damage modeling (by swapping out X for Y) and increase FPS by who knows how much! I guess you could call it polybumping (almost).
    That could be interesting for wall pieces and such. It would really bloat the size of your mesh pack, though.
    2) Outline shadows. The computer only renders passes from an outline of a shadow, and proceeds to fill in the stencil. Big improvement in FPS.
    Are you referring to Carmack's Reverse or something like it?
    3) And then, SuperDetail Textures! As things get farther away, they drop in detail, resolution, and AF & AA. You know, I had developed these ideas all for my mod (AssaultFlightSimulator), which got hit by a virus.
    MIP-maps? Macro textures? What are you talking about?

    Comment


      #3
      Re: Polybumping, outline shadows, and Ultra Detail Textures

      I have this uber cool idea to make almost every other color polygon a static mesh; that way I can emulate damage modeling (by swapping out X for Y) and increase FPS by who knows how much! I guess you could call it polybumping (almost).
      Doing that would entirely defeat the technical principle that makes static meshes so fast: they're one large batch of vertices (or polygons, if you will) cached in video memory.

      It takes only a single command from the CPU to the video card to draw an entire static mesh comprising hundreds or thousands of individual polygons at a given location; that's what makes them so fast to render. Splitting models into individual polygons and having the CPU send a drawing instruction for each and every one of them to the video card is basically what game engines did all the time before static meshes were introduced, so what you're suggesting is a huge step back, not forward.
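
      To make the point concrete, here's a toy sketch (not engine code; the function names are made up for illustration) that just counts the CPU-to-GPU draw commands a frame would need under each approach:

```python
# Illustrative sketch: counting hypothetical CPU->GPU draw commands.
# A cached static mesh costs one command no matter how many polygons
# it contains; per-polygon submission costs one command per polygon.

def draw_calls_static_mesh(num_meshes):
    # One cached mesh = one draw command, regardless of polygon count.
    return num_meshes

def draw_calls_per_polygon(num_meshes, polys_per_mesh):
    # Splitting each mesh into individual polygons multiplies the
    # number of commands the CPU must issue every single frame.
    return num_meshes * polys_per_mesh

print(draw_calls_static_mesh(50))        # 50 commands per frame
print(draw_calls_per_polygon(50, 2000))  # 100000 commands per frame
```

      The per-frame CPU cost scales with the second number, which is exactly why splitting meshes into polygons kills the static-mesh advantage.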

      Outline shadows. The computer only renders passes from an outline of a shadow, and proceeds to fill in the stencil. Big improvement in FPS.
      And how would the outline be drawn without processing every single polygon from a 3D object?

      Comment


        #4
        Mods, delete this thread.

        Comment


          #5
          Originally posted by Axxron_HelmDeep
          Ok, think of real shadows. Think of how the light on your hand doesn't go through your hand (read: cells) and changes and makes it dark. This is what computers do (WAAAYYYY too much processing). This is what I was thinking: it would draw an outline of your hand, paste that behind your hand (in the direction of the light), and fill it in with a shadow that doesn't render off the polys.
          That's exactly how stencil shadows work. They find the "edge" of the object from the perspective of the light and project that edge into the space behind the object. This technique has been shown off in OpenGL demos and other places for years, but it wasn't applicable to games because of a couple of issues. Carmack's Reverse resolves those issues and makes stencil shadows usable in real dynamic applications, most importantly games.
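
          The first step, finding that "edge", can be sketched in a few lines. This is a toy illustration (not Carmack's code, and the mesh is a made-up tetrahedron): an edge is on the silhouette exactly when one of its two adjacent triangles faces the light and the other faces away.

```python
# Toy sketch of silhouette-edge detection for stencil shadows.
# A closed mesh with consistently wound (outward-facing) triangles is assumed.

def sub(a, b): return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
def cross(a, b): return (a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0])
def dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def silhouette_edges(verts, faces, light):
    # Classify every triangle as light-facing or not.
    facing = []
    for i, j, k in faces:
        n = cross(sub(verts[j], verts[i]), sub(verts[k], verts[i]))
        facing.append(dot(n, sub(light, verts[i])) > 0)
    # Record, for each edge, the facing flag of each adjacent triangle.
    edge_faces = {}
    for f, (i, j, k) in enumerate(faces):
        for a, b in ((i, j), (j, k), (k, i)):
            edge_faces.setdefault(frozenset((a, b)), []).append(facing[f])
    # Silhouette = edges whose two triangles disagree.
    return [tuple(sorted(e)) for e, flags in edge_faces.items()
            if len(flags) == 2 and flags[0] != flags[1]]

# Tetrahedron with outward-wound faces, light off to one side:
verts = [(0,0,0), (1,0,0), (0,1,0), (0,0,1)]
faces = [(0,2,1), (0,1,3), (0,3,2), (1,2,3)]
print(sorted(silhouette_edges(verts, faces, light=(3,3,3))))
# -> [(1, 2), (1, 3), (2, 3)]
```

          Note that classifying the faces still touches every polygon; the win of stencil shadows is in the fill, not in skipping the geometry, which is the answer to the question above.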

          Comment


            #6
            Mods, delete this thread.

            Comment


              #7
              I'm sorry, I don't see the purpose of your post. Are you asking for help or are you just boasting?

              Comment


                #8
                ...

                Wow; my old login works! Yay for me...

                Now, the reason for logging in (I read this forum often, but usually post and contribute in the Chimeric coding forum over at BU) is that I came across this post here, and frankly, I really have to say something about it:

                #1 Static meshes are _static_. I have a very HL2-style system for my own game-in-progress, but swapping out meshes is limited, especially with no skin control on these meshes. Thus your uber damage modeling is either a) possible, with swapping, if you have an exponentially large set of meshes planned for all eventual damage cases, or b) as mentioned above, the opposite of what a static mesh needs to be. Also, static meshes are inherently STATIC; you can make 'em dynamic, but having every single mesh in the game update itself per tick is slightly slower than the normal, high-detail, level geometry additions.

                As for swapping into karma textures, you lost me there... you can have a karma mesh, e.g. my barrel, sitting there not using any extra CPU power until it's shot, at which point it goes flying with cool physics, lands, settles, and goes back to a dormant-like state. This works; it just uses a 'dynamic' static mesh with a karma setup and some key variables checked (and code support for the dynamic qualities, but that's another story). I have no idea what you mean to add otherwise.
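
                The barrel's life cycle is really just a small state machine. Here's an illustrative sketch of that idea (the class and names are made up for this post, not actual UnrealScript or Karma API): the prop costs nothing while dormant, simulates only after being hit, and goes dormant again once it settles.

```python
# Toy state machine for a 'dynamic' static mesh prop:
# dormant -> simulating (on hit) -> dormant (once settled).

class Barrel:
    def __init__(self):
        self.state = "dormant"      # no per-tick cost in this state
        self.speed = 0.0

    def take_hit(self, impulse):
        self.state = "simulating"   # hand the mesh over to physics
        self.speed = impulse

    def tick(self):
        if self.state != "simulating":
            return                  # dormant meshes skip their tick entirely
        self.speed *= 0.5           # crude stand-in for friction/damping
        if self.speed < 0.1:
            self.state = "dormant"  # settled: back to zero per-tick cost

b = Barrel()
b.take_hit(3.0)
while b.state == "simulating":
    b.tick()
print(b.state)  # dormant
```

                The point of the design is the early return: props only pay a per-tick cost while they are actually moving.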

                (Also, if you did break apart a mesh on the per-poly level, it would look like a glass frame of the mesh shattering, as there is nothing inside: no code to make the polys that form the back side of the damage/crater. And though something like this is theoretically possible (*cough* Red Faction 2), it's not supported by the inherently static nature of static meshes: there is nothing in the way of dynamic vertex creation, movement, and poly creation.)

                [I have removed the last half of this post, as the technological standpoint is better explained in my second post, with a less insulting attitude. I am not censoring anything (all relevant topics are reiterated below); I am just rewording them in such a way that they give the benefit of the doubt rather than starting a flame war. And I still feel that unless something is added to this thread to give credit to the author (even a careful explanation of what each topic means), the 'feel like a dumbass' statement _looks_ decidedly like an understatement.]

                Comment


                  #9
                  Mods, delete this thread.

                  Comment


                    #10
                    Clarity

                    This is not a flame, nor quite an apology, but what I would consider a valid (and more level-headed) addition to this discussion. Please read the whole post in order to better understand the reasoning behind the former and current posts in this thread.

                    Perhaps if you had explained the material better, and more clearly, I might have given you more credit; however, I do not suffer fools gladly, and without any other outlet of yours to prove otherwise, that is how this was understood.

                    Had you perhaps posted the initial question with the same tone as your reply here, my response would have been different, but as it stands, this is how I view what has been written:

                    PolyBumping: "Swapping out every other color polygon" implies everything from static meshes to BSP, though I assume not players and the like, for the sake of animation. I maintain that the engine does not, and could not, support a damage-model system for static meshes in the method you have put forward, and my post above states the reasons why. As for replacing BSP, i.e. having the level made out of static mesh elements, you have a few issues to consider. Again, the damage modeling: you could swap out chunks, but it would either be very indistinct, require many hundreds of different models, or demand very modular level design. As for the general idea of using static meshes instead of BSP, you have the 'ease of design' issues of setting everything up out of individual meshes, and you have the larger issue of dead lighting (static mesh lighting blows). However, even if lighting were not an issue and you had all the meshes required, breaking a level of 1200 BSP polys (not visible, but total) into 600 (every 'other') 'dynamic' static meshes would not be an FPS boost.

                    (You would also lose control over zones.)

                    Also, with antiportals and zones (and just plain not looking) you gain a fair bonus in speed from objects that are not visible not being rendered. This also leads me to think that the system might lose efficiency rather than gain it. This is just speculation, but you don't offer support one way or the other. _And_ how would you break apart a static mesh dynamically (assuming it's all _one_ mesh), unless you have rewritten the fundamental nature of the static mesh in the first place? Though you claim to have PolyBumping working, I feel that such an accomplishment should be stated as such, not left to the reader's assumption.

                    Using 'dynamic' static meshes is quite possible, works quite well, and is a valid tactic; but replacing every other color polygon with them seems grossly illogical. Explain this in better detail if you will, but as it reads now, I feel the scale of the tech is going to result in a massive loss of quality for very little gain in either gameplay or game speed.

                    Shadows: This is inexcusable. Noobhood is nothing to be ashamed of (we were all there), and there is always some topic we know nothing about or are the local noob in (I have absolutely no skill or knowledge in the production of music, for example). However, such a case is usually where one stays quiet and perhaps does a little research: if it were possible to render a dynamic outline of a mesh without any relative performance hit, and then stencil it onto polygons dynamically (in some method not currently used in any engine), you would really have to wonder WHY Doom 3 spends so much clock time rendering those visuals.

                    SuperDetail Textures: Rendering levels at different resolutions is a bit far-fetched, so I will assume you do _not_ mean that, and thus say that I went too far in my post above. However, for the _rest_ of this system, you get:

                    Textures rendered at a lower resolution with distance. This is called mipmapping; it is very standard and has _many_ different levels of use. It's another one of those things that millions of people don't know about, but you should really look into the sheer basics of rendering (even texture importing) before you go about presenting the standard system as a revolutionary new idea. As for its use on 'detail textures', that changes nothing, or even less, as the detail texture itself is a secondary addition to the polygon, already optimized (though still optional) for the cost of using such a feature.
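
                    For anyone following along, here is the whole idea of mipmapping in a few lines (a sketch, not driver code): each level halves the previous one's width and height down to 1x1, and the entire chain costs only about a third more memory than the base texture.

```python
# Sketch of a MIP-map chain: level sizes and total memory overhead.

def mip_chain(width, height):
    levels = [(width, height)]
    while width > 1 or height > 1:
        # Each level halves both dimensions (never below 1 texel).
        width, height = max(width // 2, 1), max(height // 2, 1)
        levels.append((width, height))
    return levels

chain = mip_chain(256, 256)
print(len(chain))              # 9 levels: 256x256 down to 1x1
base = 256 * 256
total = sum(w * h for w, h in chain)
print(round(total / base, 3))  # ~1.333: only ~1/3 extra memory
```

                    The hardware picks the appropriate level per pixel for free, which is why the feature is standard rather than revolutionary.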

                    As for dynamic levels of AA and AF: I have no interest in joining the NVidia design team, or in working with the low-level programming used to create the fundamentals of AA and AF. However, a quick search on Google will explain what both are and how both work. AF might be made to use different levels based on distance and occlusion, but I would assume the basic principle of mipmapping creates as good a boost as any dynamic filter, which would probably slow the system down more than speed it up. AA, on the other hand, is basically a way of blurring harsh contrast lines; these typically happen where something overlaps something else (a concrete edge against sky has more contrast than inch one of concrete next to inch two). Thus, if relative distance breaks provide the basis for most of the contrast AA works on, I would assume it's impossible to run AA through a distance filter in the first place. ALSO, AA is not a polygon tool but a pixel-level filter run by the video card. Though AF is related to poly rendering (and is 100% inaccessible from UScript, and 99.9% inaccessible from the C++ headers unless you are _really really_ good [I wouldn't have the skill to mess there myself, so no offense meant in saying the same about you]), AA is not, and thus the above system is both impossible and would have been remedied with a very quick search on Google, or perhaps a read of the layman's explanation of NVidia's "Quincunx".

                    If you perhaps mean using lower-detail textures on objects that are farther away, that might work nicely, but I would rather trust the rendering improvement of standard mipmapping and not bother with distance-determining code that would likely cost more than the relative size of the texel saves in the first place.
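
                    In fact, distance is already baked into standard mipmapping. As a rough sketch (simplified; real hardware works per pixel with derivatives, and this function is made up for illustration), the MIP level is about log2 of how many texels land on one screen pixel, a number that grows as the surface recedes, so no extra distance-checking game code is needed:

```python
# Sketch of MIP level selection: roughly log2(texels per screen pixel),
# clamped to the smallest level in the chain.

import math

def mip_level(texels_per_pixel, max_level):
    if texels_per_pixel <= 1:
        return 0                  # texture is magnified: use full detail
    return min(int(math.log2(texels_per_pixel)), max_level)

print(mip_level(1, 8))     # 0 (close up)
print(mip_level(4, 8))     # 2 (farther away)
print(mip_level(1024, 8))  # 8 (very far: clamped to the 1x1 level)
```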

                    Moral of the story: I don't suffer fools gladly, and by, for lack of a better term, boasting about systems which are either already in existence or flat-out impossible (both determinable with a tiny little bit of effort), you looked very much the fool. Thus, I will not apologize to you for making my post, but I will admit that my tone within said post could have been more appropriate [I will edit it if you wish]. As for apologizing to the forum readers: I do read this forum a lot, but silently, and I hate to look the public antagonist.

                    Next time, think very clearly about what you mean, post it with as close to that level of clarity as possible, and do some background research. The PolyBumping tactic on its own would probably sound a lot better if you were clear on which objects would use it, and what your solutions for the lighting and zone issues are, if you have considered those issues (a cel-shaded game need not really consider lightmap lighting a priority, for example). The other topics, though, posted in a forum such as this (you can't maintain credibility long talking about advanced topics, to others who know them, without any of the knowledge to back up the words), incited me to want to go through the pain of registering in order to respond (though with a working old login, that wasn't necessary).

                    Just so you know, I would not have posted in this thread (though I would probably feel much the same) except for the one line which sealed the deal: "ROLF inio, you make me fell like a dumbass". Not only does such spelling in such a line make your text-based projection of yourself look _really_ bad, but such an 'oops' statement proves that there is a lot of ignorance behind the words, and very little substance.

                    I don't know you, and thus neither like nor dislike you; I am just reacting to what I feel is a disgrace (in its current form) to an otherwise clean and friendly source of knowledge. I thank you for not turning this into a flame war, and instead ask you to fix this post! There is a glimmer of potential in the PolyBumping idea, though I really don't think we know what you mean in the least. As for the other points: _do_ research, find out why something may or may not have worked, and explain in detail what you meant with the textures.

                    On that note, read this sticky; it provides a little insight into my sentiment regarding your initial post:

                    http://forums.beyondunreal.com/showthread.php?t=109192

                    Comment


                      #11
                      Thanks vlosk, but I think I want the mods to delete this thread until I can further test these ideas and really write what I mean.

                      Comment
