    [Request] New Mapper Tips

    Hey guys,

    I work in the 3D industry (film & television) and ever since UT99 I've wanted to give this mapping thing a go. The capabilities of the U3 engine have got me pumped to dive right in, so I've been saturating myself with as much technical mapping info as possible, but I've been unable to find information on a few things.

    I'm keen to just start playing around, but I'd like at least a rough guideline as to engine recommended limits, so I'd like to throw a few questions to the experienced mappers.

    Hopefully my questions aren't too stupid!


    TECHNICAL QUESTIONS

    1. What is the scale, in unreal units, for a character (height and width)?

    2. Taking 'average' to mean roughly as big as a manta, what is the maximum recommended poly count for the 'average' static mesh? Obviously it depends heavily on the topology, but this question relates more to #3...

    3. Given the above estimate, what is the maximum recommended number of static meshes visible (still trying to understand zones and antiportals) at any one time?

    4. What is the recommended maximum texture size per mesh, per map type? How many map types are supported? (e.g. spec, diff, amb, etc.)

    5. How important is striping (not sure what term is used in gaming - triangulation order) to performance within U3?

    6. Is it both more memory efficient and more performance efficient to build pattern-repetitive objects - say, a picket fence, for example - with the one static mesh repeated many times rather than one big mesh?

    7. Should I build redundant tessellation into large flat surfaces for better vertex lighting? Is there an alternative approach?

    8. Any other technical tips or potential pitfalls I should know about? HUGE topic, I know, and I'm not asking for a step-by-step, I'm just after some pointers to get me started.

    GAMEPLAY QUESTIONS

    Any general tips for developing a map that plays well? Right now I'm more interested in just learning the editor, and seeing how well my skills translate across to gaming, but it would be nice if all the work put in results in a playable map!

    Apologies for the wall of text, and I hope some of you can kindly offer some of your time to answer a question or two!

    xox

    #2
    One thing: I made something 128 units high, and with a double jump while holding crouch it was possible to get up there. It took three or so tries, so it is hard, but possible at 128 units.
    Also, I tried to make a 2048 x 2048 texture and it crashed while importing.

      #3
      This belongs in a different subforum. Anyway, I've read that the character height is 160.

        #4
        Originally posted by Simeon View Post
        This belongs in a different subforum. Anyway, I've read that the character height is 160.
        Yeah apologies, I didn't spot the correct forum until I'd posted. Maybe an admin can move this thread?

          #5
          You are jumping in at a more difficult time; mapping for UT3 is more involved than it was for UT2004.
          No such thing as a stupid question. A lot of the questions have no simple answer though.

          1. It depends on the exact character, since what really matters to the engine is the collision and not so much the actual model mesh, which varies a bit for each character type. But the engine scale is 1 unreal unit = 2cm, and UT3 characters (assuming there has been no change before the release, since I am using UE3 and not UT3 here) are about 6' (~184cm), so around 92 units tall in-game. Best is to insert a similar UT3 character into a map and measure it.
          Note that GoW is using a different scale than UT3 (1 uu = ~1.2cm), so their assets are of a different size. GoW characters are about 156 units tall.
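          A quick sanity check of that conversion (nothing here is official character data; the 2cm and ~1.2cm figures are just the scales quoted above):
[CODE]
# Rough cm-to-unreal-unit conversion using the scales quoted above.
# The character heights are illustrative, not measured UT3 assets.

def cm_to_uu(cm, cm_per_uu=2.0):
    """Convert real-world centimetres to Unreal units."""
    return cm / cm_per_uu

print(cm_to_uu(184))        # UT3 scale (1 uu = 2 cm)   -> ~92 uu
print(cm_to_uu(184, 1.2))   # GoW scale (1 uu ~ 1.2 cm) -> ~153 uu
[/CODE]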

          2. For triangle count, as examples (in the UE3 I have), Jethro (ironguard) is 6646, RedTeamMale2 is 6884, Goliath is 5902, Scorpion is 6194, etc., while the various staticmesh objects for floors, doors, walls, supports, etc. average around 300 triangles each with the larger pieces around 1000 triangles.
          This is how you should be designing your UE3 assets.
          Unlike UE2.5, UE3 staticmeshes should be split by section (texture) and optimized as much as possible. Fine detail can and should be done through Normal Maps.

          3. UE3 uses automatic occlusion. There are no more Zone Portals and AntiPortals; those are UE2/UE2.5. See my post in this thread.
          The number of "visible" staticmeshes depends on a lot of factors. When mapping you usually develop the layout first and then decorate it, so it is usually pretty easy to tell when to stop adding more mesh detail. Most current game systems can push about 250,000 triangles of staticmesh in the frustum (on average, alongside other geometry such as some BSP, terrain, effects, etc.). Whether your map design can do this depends on a lot of things such as texture size, material design, use of effects, etc.
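          As a back-of-the-envelope check (the 250k budget and per-mesh counts are just the rough figures above, not hard limits):
[CODE]
# Rough "how many meshes can be in view" estimate from the numbers above.
FRUSTUM_TRI_BUDGET = 250000    # approximate staticmesh triangles in the frustum

def meshes_in_budget(avg_tris_per_mesh, budget=FRUSTUM_TRI_BUDGET):
    return budget // avg_tris_per_mesh

print(meshes_in_budget(300))    # ~833 average decoration meshes
print(meshes_in_budget(1000))   # ~250 of the larger pieces
[/CODE]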

          4. Texture size should be chosen specifically for each staticmesh and its place in the world. There is no set guideline that states "doors get this, walls get that", other than you wouldn't do things like create a 2048x2048 skin texture for a small mesh that is never seen up close. A lot will depend on your map design in general and how many unique objects are in it, as there is also a limited amount of texture memory on current video adapter hardware. So if your map had only 10 unique mesh objects in its entirety, you could get by with 2048x2048 textures for all of them. If your map had 300 unique mesh objects, you would have to size the textures appropriately.
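          To see why, a very rough memory estimate (assuming DXT1 compression at 0.5 bytes per texel and a full mip chain; these are generic GPU figures, not UT3-specific numbers):
[CODE]
# Rough texture memory estimate: DXT1 (~0.5 bytes/texel) plus ~33% for mips.
def texture_mb(size, bytes_per_texel=0.5, mips=True):
    total = size * size * bytes_per_texel
    if mips:
        total *= 4.0 / 3.0
    return total / (1024 * 1024)

print(10 * texture_mb(2048))    # 10 unique meshes at 2048x2048 -> ~27 MB
print(300 * texture_mb(2048))   # 300 at 2048x2048 -> ~800 MB, far too much
print(300 * texture_mb(512))    # 300 at 512x512  -> ~50 MB, workable
[/CODE]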

          5. Clarification please.
          Staticmeshes should preferably be created as specific single objects with one skin texture and welded vertices, and with a simplified collision model.

          6. You usually don't want to create a massive single object, such as a long fence, as one big mesh. It would more likely be a 128, 256 or 512 unit long chunk, e.g. two large posts and 8 picket boards. The reasons are several (a rough memory comparison is sketched after this list):
          - As a single object it must cull as a single object, so if any poly is in view it all renders. By splitting it into pieces it can be culled by the pieces.
          - The quality of the texture and lightmap on each face will be less if it is a massive object, as the total number of faces must all fit into a single NxN texture such as 1024x1024 or 2048x2048.
          - Using a single bite-size mesh piece multiple times has the advantage of instancing, so it requires less memory to place a fence piece 50 times than to have one single fence object that is 50 times larger.
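          For the instancing point, an illustrative comparison (the vertex counts and per-placement overhead are made-up round numbers, just to show the shape of the saving):
[CODE]
# Why 50 placements of one small fence piece beat one giant fence mesh.
VERTEX_BYTES = 32      # position + normal + tangent + UVs, roughly
PIECE_VERTS  = 120     # one fence section (2 posts + 8 pickets), assumed
SECTIONS     = 50

# Instanced: mesh data stored once, plus a small per-placement cost.
instanced  = PIECE_VERTS * VERTEX_BYTES + SECTIONS * 64
# Monolithic: every section's vertices live in one huge object.
monolithic = SECTIONS * PIECE_VERTS * VERTEX_BYTES

print(instanced, monolithic)   # ~7 KB vs ~188 KB of vertex data
[/CODE]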

          7. Use lightmaps for objects that can utilize the better lighting system. Use vertex lighting on objects that are smaller, less important in the scene, or already so triangle-intensive that the difference between lightmap and vertex lighting isn't as obvious. Creating meshes with heavy tessellation simply to provide better vertex lighting may offset the gains of not using a lightmap texture, and switching to a low-poly mesh with a lightmap may be a better choice.
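          Roughly, the trade-off looks like this (the lightmap size, vertex size and grid density are all assumptions for illustration only):
[CODE]
# A flat floor: a modest lightmap vs tessellating it for vertex lighting.
LIGHTMAP_SIZE = 128
lightmap_bytes = LIGHTMAP_SIZE ** 2 * 0.5 * 4 / 3     # DXT1 + mips, ~11 KB

VERTEX_BYTES = 32
grid_verts   = 33 * 33                                # floor split into a 32x32 grid
vertex_bytes = grid_verts * VERTEX_BYTES              # ~34 KB of extra vertex data

print(lightmap_bytes, vertex_bytes)
# Here the low-poly mesh plus lightmap is cheaper and lights better.
[/CODE]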

          8. Tips: Dissect the Epic stock maps to see what they are doing. Read UDN3 when it is all online. Practice and experiment and see what results you get.

          Gameplay: It matters if you want a map that actually gets played. FPS game modes have specific "rules" or design guidelines, and there is a lot of material on this on UDN2 and the UnrealWiki. Also see Hourences' book on the hows and whys of game design.

            #6
            Thanks for all the answers, DGUnreal! Very helpful! I'm looking forward to playing around as this is the first time that realtime engines have really started to come close to the level of detail we're producing here at work.

            Re: Question 6

            We call it striping here (not sure what it is called elsewhere), but it refers to the way you tessellate geometry into tris.

            For us:

            |\|\|\|\|\|\| is better than |\|/|\|/|\|/|\|

            Not in any major way, but enough to help with memory issues if we're pushing the limits of the hardware.

            I assume something similar applies in realtime engines, and if so, does it apply here?

              #7
              bookmarking thread.. some very good questions and answers here. Thanks guys!

                #8
                Originally posted by faultymoose View Post
                Thanks for all the answers, DGUnreal!
                You're welcome.

                5. OK, that's what I figured you meant (triangle strips); I just wanted to be sure before giving an answer that might otherwise be the wrong information.

                UE3 is using TriLists instead of TriStrips since that simplifies the data sharing with the collision system, etc. Some of the other Epic games/engines used TriStrips in some engine render areas, but AFAIK it is not implemented in UE3 for any assets at this time.

                However, meshes should be welded/shared vertices between adjacent triangles for performance with the vertex cache.
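                A toy illustration of the welded-vertex point (not engine data structures, just the indexed tri-list idea):
[CODE]
# Two adjacent triangles forming a quad, as an indexed triangle list.

# Unwelded: 6 vertices, each triangle carries its own copies.
unwelded_verts = [(0, 0), (1, 0), (0, 1),   (1, 0), (1, 1), (0, 1)]

# Welded: 4 unique vertices; the index buffer reuses the shared edge.
welded_verts = [(0, 0), (1, 0), (1, 1), (0, 1)]
indices      = [0, 1, 3,   1, 2, 3]     # the two tris share vertices 1 and 3

# Fewer unique vertices means fewer transforms and better hits in the
# GPU's post-transform vertex cache when drawing the tri list.
print(len(unwelded_verts), len(welded_verts))
[/CODE]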

                That being said, depending on where the face is located on the mesh, the edge should be oriented appropriately so that the quad reads as convex or concave according to the shape. The orientation should also be correct for backface culling, since you don't want to use two-sided materials on everything, and so that smoothing looks right if the triangles are in the same smoothing group.

                They should preferably be one texture per mesh, as otherwise the engine splits them based on texture into individual objects for render. For some meshes this can be difficult to manage, if it doesn't look good skinned or is awkward to work with as multiple single-texture mesh pieces in the editor, so the Max-style Multi/Sub-Object mapping is supported. I performed benchmarks with UE2.5 regarding single versus multi-texture meshes and can expound on that if you need. But in most cases, since you are often using lightmaps, it is easier to utilize the same UVW unwrap for the baked textures (diffuse, ambient occlusion and normal map) and the lightmap channel.

                Originally posted by kisk View Post
                bookmarking thread.. some very good questions and answers here. Thanks guys!
                I plan on putting a StaticMesh Workflow page onto UDN3 which will have a lot of this info in it.

                  #9
                  .......................(no comprehende)...........don't worry about me, i've only mapped for 2 years on unrealed 2.5

                    #10
                    Wow, way more info than I was expecting, and certainly incredibly helpful! Thanks again DGUnreal!

                    It's nice to know I don't need to focus too much on parallel striping and can instead tessellate according to curvature. Striping annoys the **** out of me ><

                    Utilising one UV set per object shouldn't be a major issue. Do you know what map types are supported? Is the shading engine node-based, and if so, does it give you full control over custom map types?

                    Also, what kind of lighting capabilities are built into UT? (Very broad question, so again just after a rough guideline!) If I wished to generate a full-level AO pass (and perhaps a surface-to-surface GI pass), is this calculated automatically by UTEd or should I be prebaking lightmaps in Maya (Renderman)? Also, should I be aiming to build a map with predominantly static lighting, or is there good support for dynamic lighting? I imagine dynamic lighting limits the use of shadows from static geometry?

                    I know all this information will slowly become available as the developer sites come online and the game is distributed across the globe, but I'm an obsessive pre-planner.

                      #11
                      awesome info..

                      btw, for DGUnreal: are those poly counts taken from the UT3 models? There have got to be variations of them if so, because that's a really low poly count. Take, for example, Silent Hill 2, which was released ages ago for PS2; its main character has over 12,000 polys. Perhaps the game has different mesh versions based on settings?

                        #12
                        It depends on overall visibility, I guess. A game like Silent Hill, with relatively low-detail geometry and few on-screen characters, can get away with higher-res geo for certain elements.

                          #13
                          Originally posted by faultymoose View Post
                          Utilising one UV set per object shouldn't be a major issue. Do you know what map types are supported? Is the shading engine node-based, and if so, does it give you full control over custom map types?

                          Also, what kind of lighting capabilities are built into UT? (Very broad question, so again just after a rough guideline!) If I wished to generate a full-level AO pass (and perhaps a surface-to-surface GI pass), is this calculated automatically by UTEd or should I be prebaking lightmaps in Maya (Renderman)? Also, should I be aiming to build a map with predominantly static lighting, or is there good support for dynamic lighting? I imagine dynamic lighting limits the use of shadows from static geometry?
                          You can have multiple UV Channels per staticmesh.
                          Depending on the staticmesh I am creating, it may use UV1 for the texture material(s) and UV2 for the Lightmap. The Lightmap channel can be set in the editor.

                          For Multi/Sub-Object material mapped staticmeshes, I usually use the Box, Cylinder, Planar, Face, Sphere and/or Shrinkwrap mapping types on the various elements, collapse and combine them and export.
                          For single texture skinned staticmeshes, I create a flattened unwrap.

                          There are a number of available lights that are similar to most 3D applications. DirectionalLight, PointLight, SkyLight, SpotLight. And the lighting system has a pile of properties, such as shadow type, use of Volumes, light falloff, LightFunction which uses any Emissive material as an effect or projection, light exclusions, etc.
                          Lighting is normally created by a traced build in the editor that creates Lightmaps for the CSG Surfaces (Constructive Solid Geometry), Terrain, and those StaticMeshes that have been set to have Lightmaps. Lightmap size can be specified for all of these.
                          Lighting is extremely flexible and powerful, and it would be too long to cover it all here, so I recommend waiting for UDN3.

                          I don't pre-bake shadows/lighting into mesh textures simply because it is easier and more flexible to allow the engine to do that, since you will often use the same mesh in various locations in the map so the actual lightmap will vary. Back in UT2004 I created lightmaps that were scene-matched for meshes, but that is a lot of extra work, and UT2004 only supported vertex lighting on meshes so baking looked way better.
                          However, for many meshes it does look better to also create an Ambient Occlusion texture along with the Diffuse, and then burn that into the Diffuse in PhotoShop/PhotoPaint, and use that final texture as the skin in-game. Then rely on the lighting build to create the actual lightmap.
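                          If you do bake the AO into the diffuse offline, it is just a multiply; a hypothetical sketch (the file names are placeholders, and a Photoshop "Multiply" layer does the same thing):
[CODE]
# Multiply a baked ambient-occlusion map into the diffuse before import.
from PIL import Image
import numpy as np

diffuse = np.asarray(Image.open("mesh_diffuse.png").convert("RGB"), dtype=np.float32)
ao      = np.asarray(Image.open("mesh_ao.png").convert("L"), dtype=np.float32) / 255.0

baked = (diffuse * ao[..., None]).clip(0, 255).astype(np.uint8)
Image.fromarray(baked).save("mesh_diffuse_ao.png")
[/CODE]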

                          You should use mostly static lighting. CSG and Terrain default to having lightmaps created on build and you can specify the size. StaticMeshes require setting a map channel that points to the unwrap you created, and the lightmap size.
                          Dynamic lighting uses stencil buffers, is more expensive performance-wise, and often doesn't look quite as good.


                          Originally posted by Elis View Post
                          awesome info..

                          btw, for DGUnreal: are those poly counts taken from the UT3 models? There have got to be variations of them if so, because that's a really low poly count. Take, for example, Silent Hill 2, which was released ages ago for PS2; its main character has over 12,000 polys. Perhaps the game has different mesh versions based on settings?
                          Those poly counts are from the Licensee Example Game which includes a bunch of Development/GoW/UT3 content. The shipping version of UT3 may be different. However, with the use of NormalMaps, you can get by with lower polycounts and have a rendered mesh that looks significantly better than polys alone.
                          For an example, look at the Unreal Tech page about 1/4 down under the Distributed Normal Map heading, where they show a 2M poly source detail model and the final in-game 5287 poly render model.

                          One of the things that most people don't understand yet is that the next gen systems actually allow, and often require, the use of lower poly objects and textures. Next gen doesn't mean that objects can be 100k polys each and textures can be 4096x4096 each. That size of asset will never be pushed in-game with any reasonable performance. The use of NormalMaps and Shaders allows smaller textures to actually provide better visual quality, and the use of detail models for NormalMap creation allows low-poly objects of 5k to 10k to look as if they were up to 2M polys.

                          Since the overall level of detail is increasing, the actual per-object values are still about the same as UT2004 regarding mesh polycount and texture size. The next gen engines just let us push more of it at once for greater detail, and the use of things like NormalMaps allows the low poly objects to look significantly more detailed.

                            #14
                            Quick question: what is the unit of measure in UnrealEd?
                            i.e. meters, centimeters, etc.?

                              #15
                              It is in Unreal Units.

                              In UT3, 1 UU = 2cm.
