  1. #1
    MSgt. Shooter Person
    Join Date
    Sep 2010
    Location
    Bristol/ UK
    Posts
    344
    Gamer IDs

    Gamertag: modfringatoon

Default Nvidia graphics cards: which is best for Unreal?

So I've been looking around online at new hardware and my eye was drawn towards three different cards from Nvidia, so I thought I would share what I have learnt from my research.

    The Graphics Cards in question:
Nvidia GTX 690 (professional gaming graphics card, 3072 CUDA cores)
Nvidia Quadro 6000 (professional workstation graphics card, 448 CUDA cores)
Nvidia Tesla C2075 (professional workstation graphics card, 448 CUDA cores)

What's the difference?
The main difference between these GPUs is their underlying architecture.
For example, the GTX 690 uses Kepler while the Quadro uses Fermi.
These two Nvidia graphics architectures utilise their CUDA cores differently; the details are extremely complicated, but it mostly comes down to parallel vs adaptive processing.
The third GPU, the Tesla, is a dark horse: rather than specialising in overall graphical power, it takes strain away from the CPU and renders/processes everything.
Using a single Tesla GPU is the equivalent of running a 120-core CPU; this is down to the way it optimises its 448 CUDA cores to process everything from simple tasks all the way up to the heavy computation used to simulate weather systems.
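The general idea behind this kind of offload is data parallelism: many simple cores each handle their own slice of the work instead of one fast core doing everything in sequence. As a rough, hedged illustration only (plain Python standing in for CUDA; the function names and core count are made up for the example, and the real hardware runs the chunks concurrently rather than in a loop):

```python
# Illustrative sketch of a GPU-style data-parallel map.
# A "core" here is just a function applied to one chunk of the input.

def split_work(data, num_cores):
    """Partition `data` into up to num_cores roughly equal chunks."""
    chunk = max(1, (len(data) + num_cores - 1) // num_cores)
    return [data[i:i + chunk] for i in range(0, len(data), chunk)]

def process_chunk(chunk):
    # Stand-in for per-core work, e.g. shading one tile of pixels.
    return [x * x for x in chunk]

def gpu_style_map(data, num_cores=448):
    # Each "CUDA core" processes its own chunk independently; on real
    # hardware these run in parallel, here we simply loop over them.
    results = []
    for chunk in split_work(data, num_cores):
        results.extend(process_chunk(chunk))
    return results
```

The point of the sketch is only the shape of the workload: if the work splits cleanly into independent chunks, hundreds of slow cores can beat a handful of fast ones.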

What does this mean for Unreal?
Unreal software can utilise Nvidia features such as APEX and PhysX, making games feel richer and more realistic by taking advantage of CUDA technology.
Here is a small demo of Nvidia's APEX cloth in action with the Unreal engine:


The Conclusion:
Although every graphics card has its strong and weak points, all of Nvidia's modern technology is proving more than useful in the gaming, industrial and medical worlds, creating real-world applications that save lives, save money and entertain.

Overall, my opinion and advice depend strictly on your budget for a graphics card. Although a gaming graphics card will work best overall for creating games, a Tesla will let you create content a lot faster and give you more performance for testing, while the Quadro GPUs will let you create content a lot faster still compared to both the Tesla and a standard gaming graphics card.
However, the Quadro falls down when testing games due to its architecture: the in-game optimisation at high graphics settings causes poor frame rates and gives a lower feel to the end result. The Tesla has the same issue to a lesser extent due to its Kepler GPU, but allows for better creation times and Lightmass rendering thanks to its dedicated GPU computing system, which uses the graphics processor for CPU-based work.
A gaming graphics card will be the better bet overall because of its much larger number of CUDA cores and gaming-oriented architecture, though you will see a slight increase in production time for content creation, Lightmass calculation and overall render times.

However, for the best overall performance from an Nvidia Tesla GPU it is advised to "sister" it with another graphics card; this can be either a GTX 600-series GPU or a Quadro-series GPU. Here is an example of the Quadro-series graphics cards sistered with a Tesla GPU: http://www.nvidia.com/object/quadro-3ds-max.html

The Results
In simple terms, the easiest guide for choosing a graphics card falls under budget margins:
$0-$1000 = gaming graphics card (GTX 640-690)
$1000-$2000 = Tesla graphics card twinned with a gaming graphics card (Tesla C2075 + GTX 690), or dual GTX 690s
$2000+ = Quadro graphics cards (Quadro 5000+), or a Quadro graphics card twinned with a Tesla graphics card

Thank you for reading; please feel free to add your opinions.
    Last edited by pixxie_payne; 08-06-2012 at 11:37 PM.

  2. #2

    Default

It seems you're misunderstanding the use of those cards. The Tesla and Quadro cards aren't meant for gaming at all; they will suck hardcore for games. They're meant for use with programs like 3ds Max, Maya, ZBrush, Mudbox, etc.

    Even if you're talking about development, like the creation of content, most likely a gaming card will do as much as you need, especially for games. The Quadro and Tesla cards are more for the really high end, where you have super-high-poly scenes to work with and you need more graphics memory than what's available in gaming cards. Neither of those is the case in game development.

    Also, the GPU has absolutely nothing to do with Lightmass rendering. And again, in games they will perform much worse than a gaming card. As for GPU rendering in something like iray or V-Ray RT, you still actually render much faster with gaming cards, but the catch there is that gaming cards are limited on memory, and an average user could easily max out the memory for GPU rendering, and then it wouldn't help in rendering at all.

  3. #3
    MSgt. Shooter Person
    Join Date
    Sep 2010
    Location
    Bristol/ UK
    Posts
    344
    Gamer IDs

    Gamertag: modfringatoon

    Default

    Quote Originally Posted by darthviper107 View Post
It seems you're misunderstanding the use of those cards. The Tesla and Quadro cards aren't meant for gaming at all; they will suck hardcore for games. They're meant for use with programs like 3ds Max, Maya, ZBrush, Mudbox, etc.

    Even if you're talking about development, like the creation of content, most likely a gaming card will do as much as you need, especially for games. The Quadro and Tesla cards are more for the really high end, where you have super-high-poly scenes to work with and you need more graphics memory than what's available in gaming cards. Neither of those is the case in game development.

    Also, the GPU has absolutely nothing to do with Lightmass rendering. And again, in games they will perform much worse than a gaming card. As for GPU rendering in something like iray or V-Ray RT, you still actually render much faster with gaming cards, but the catch there is that gaming cards are limited on memory, and an average user could easily max out the memory for GPU rendering, and then it wouldn't help in rendering at all.
Did you read what I wrote, or did you skip to the end?
    I stated that the Tesla and the Quadro are really poorly set up for gaming in general but excel overall at game content creation.
    Also, the Tesla works by offloading work from the CPU to the GPU cores, allowing the CUDA cores to take all the processes and work faster; that includes rendering Lightmass.

  4. #4
    MSgt. Shooter Person
    Join Date
    Apr 2011
    Location
    Vancouver, Canada
    Posts
    84

    Default

When it comes to rendering Lightmass, though, the faster/better option is to send it to other PCs that aren't as occupied with other tasks via Swarm. If you're already using the workstation cards for other 3D work, then sure, but just for Lightmass you'd get better bang for your buck setting up a render farm.
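The core idea of farming work out this way is a simple scheduler: hand each independent lighting job to whichever machine is least busy. A minimal sketch of that idea (hypothetical names and costs, not the real Swarm API or its protocol):

```python
def assign_jobs(jobs, machines):
    """Greedy scheduler: give each job to the machine with the least
    total assigned work. `machines` maps machine name -> current load;
    `jobs` is a list of (job_name, cost) pairs. Returns a mapping of
    machine name -> list of job names assigned to it."""
    load = dict(machines)
    assignment = {name: [] for name in machines}
    # Place the biggest jobs first so they land on the idlest machines.
    for job, cost in sorted(jobs, key=lambda j: -j[1]):
        target = min(load, key=load.get)
        assignment[target].append(job)
        load[target] += cost
    return assignment
```

The behaviour matches the advice above: a machine that is already occupied (high starting load) simply receives fewer jobs, or none at all.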

    It is a little confusing whether you're talking about a development configuration or a gaming PC, because APEX performance on your development PC isn't going to matter much.
    Last edited by Rogdor; 08-06-2012 at 11:35 PM.

  5. #5
    MSgt. Shooter Person
    Join Date
    Sep 2010
    Location
    Bristol/ UK
    Posts
    344
    Gamer IDs

    Gamertag: modfringatoon

    Default

    Quote Originally Posted by Rogdor View Post
When it comes to rendering Lightmass, though, the faster/better option is to send it to other PCs that aren't as occupied with other tasks via Swarm. If you're already using the workstation cards for other 3D work, then sure, but just for Lightmass you'd get better bang for your buck setting up a render farm.

    It is a little confusing whether you're talking about a development configuration or a gaming PC, because APEX performance on your development PC isn't going to matter much.
It's just meant to be a small overview for people interested in getting into using Unreal professionally. I've refined it a tiny bit more as I looked into what the Tesla GPU is best for, and added a small section towards the end to expand a little on what the Tesla GPU does.

  6. #6

    Default

    Quote Originally Posted by pixxie_payne View Post
It's just meant to be a small overview for people interested in getting into using Unreal professionally. I've refined it a tiny bit more as I looked into what the Tesla GPU is best for, and added a small section towards the end to expand a little on what the Tesla GPU does.
I read what you said, and a Tesla can't be used for Lightmass unless Epic codes for it, just like any GPU renderer. But it'd be a waste considering that Lightmass needs a lot of memory to run; even a 6GB Tesla might not be enough. There have even been a few threads recently with people rendering and needing 10GB to do Lightmass; that wouldn't work on a Tesla. Certainly not worth the money at all: you're better off with a better CPU (cheaper too), and if you need to you could build a render farm.

And again, it won't allow you to make content faster (or "a lot faster", as you said); rather, if you have high-poly-count, high-object-count scenes (in your 3D program) then it could help viewport performance, but game content creators won't have that issue.

Even then, the only ones that are actually useful are the top-end cards; it's not worth spending $4,000 on a Quadro 6000 for the small viewport performance increase.

  7. #7
    Marrow Fiend

    Join Date
    Jul 2006
    Location
    WorldInfo_61
    Posts
    4,198
    Gamer IDs

    Gamertag: KickedWhoCares

    Default

I've heard that the GTX 670 is the best bang for buck at the moment; it might be worth getting a couple of those, but it does entirely depend: you might need the 4GB of texture memory some GTX 680s offer.

  8. #8
    MSgt. Shooter Person
    Join Date
    Sep 2010
    Location
    Bristol/ UK
    Posts
    344
    Gamer IDs

    Gamertag: modfringatoon

    Default

    Quote Originally Posted by MonsOlympus View Post
I've heard that the GTX 670 is the best bang for buck at the moment; it might be worth getting a couple of those, but it does entirely depend: you might need the 4GB of texture memory some GTX 680s offer.
Dependent on budget. I'm tempted to see if dual GTX 690s and a Tesla C2075 will be worth trying; I'll ask at Novatech to test it out for me.

  9. #9
    God King
    Join Date
    Jan 2010
    Location
    Germany
    Posts
    4,134

    Default

Retrospectively, if we weren't on the UDK forums, I'd say that modern Nvidia cards are all bad for Unreal, because time and again they manage to release drivers that massively mess up the editors of older engine generations, and it takes ages (or close to never) until they actually fix it.
    One driver somehow made a click in the 3D viewport of the editor take five seconds to actually register a hit in the 3D world. The latest one just gives me a nice lightshow on every viewport update, consisting of all the volumes, path connections and grid lines in the map, which appear everywhere except where they actually should be.

    That's just a side note. I've been an Nvidia fanboy for a long time, but all those troubles really make me consider whether my next video card purchase should not rather be a Radeon…
    Our Loop, which art in source code, hallowed be thy keyword.
    Thy condition come, thy instruction be done, in RAM as it is in cache.
    Increment us this day our daily counter,
    and forgive us our typos, as we also have forgiven our compilers.
    And lead us not to the nullpointer but deliver us from bugs.
    For thine is the API, the GUI, and the CLI while(true).
    Semicolon;
    Please don't send me questions about how to do something in the UDK via PM. That is better discussed in the forums and we only have limited PM storage.

  10. #10
    MSgt. Shooter Person
    Join Date
    Sep 2010
    Location
    Bristol/ UK
    Posts
    344
    Gamer IDs

    Gamertag: modfringatoon

    Default

    Quote Originally Posted by Crusha K. Rool View Post
Retrospectively, if we weren't on the UDK forums, I'd say that modern Nvidia cards are all bad for Unreal, because time and again they manage to release drivers that massively mess up the editors of older engine generations, and it takes ages (or close to never) until they actually fix it.
    One driver somehow made a click in the 3D viewport of the editor take five seconds to actually register a hit in the 3D world. The latest one just gives me a nice lightshow on every viewport update, consisting of all the volumes, path connections and grid lines in the map, which appear everywhere except where they actually should be.

    That's just a side note. I've been an Nvidia fanboy for a long time, but all those troubles really make me consider whether my next video card purchase should not rather be a Radeon…
I've noticed a lot fewer graphics issues with AMD graphics cards; however, the lack of PhysX and other games-design features makes me lean more towards Nvidia. That said, AMD GPUs are amazing at multi-monitor displays compared to Nvidia.

  11. #11
    Veteran
    Join Date
    Sep 2006
    Location
    Unreal Nomad
    Posts
    7,682
    Gamer IDs

    Gamertag: ambershee

    Default

    I use Nvidia cards with a cheap throwaway FirePro rather than use Radeons. It's cheaper and better.
    - Please do not send me questions regarding programming or implementing things in UDK via Private Message. I do not have time to respond and they are much better answered in the forums. -

  12. #12
    Boomshot
    Join Date
    May 2011
    Location
    Chicago
    Posts
    2,215

    Default

    If you're going with a newer GTX, go for the 670 or 680. The 690 is essentially two cards in one but not as good as the 680.

    Right now I'm using a GTX 580 and it's smooth as butter even now, so anything from that point on would be good.

  13. #13
    Marrow Fiend

    Join Date
    Jul 2006
    Location
    WorldInfo_61
    Posts
    4,198
    Gamer IDs

    Gamertag: KickedWhoCares

    Default

I've just upgraded from an ATI 5770 to an Nvidia 560 Ti: a definite improvement, and I actually had driver issues with the ATI card. With the huge jump the 670 is offering over the 570 (which doesn't happen often), you'd be kind of crazy not to go with Nvidia; drivers will be fixed eventually, they've only just been released.

  14. #14

    Default

    Quote Originally Posted by pixxie_payne View Post
Overall, my opinion and advice depend strictly on your budget for a graphics card. Although a gaming graphics card will work best overall for creating games, a Tesla will let you create content a lot faster and give you more performance for testing, while the Quadro GPUs will let you create content a lot faster still compared to both the Tesla and a standard gaming graphics card.
    However, the Quadro falls down when testing games due to its architecture: the in-game optimisation at high graphics settings causes poor frame rates and gives a lower feel to the end result. The Tesla has the same issue to a lesser extent due to its Kepler GPU, but allows for better creation times and Lightmass rendering thanks to its dedicated GPU computing system, which uses the graphics processor for CPU-based work.
    A gaming graphics card will be the better bet overall because of its much larger number of CUDA cores and gaming-oriented architecture, though you will see a slight increase in production time for content creation, Lightmass calculation and overall render times.
    Lightmass is not GPU-accelerated.

The only Tesla based on Kepler is the Tesla K10; the rest are based on Fermi (and older). The K10 is ~$4,000. It also will not accelerate anything in Unreal Editor that I know of. PhysX is also fast enough for your needs on a decent GeForce GPU.

GeForce cards will always be slightly faster than Quadro cards (albeit less tuned to professional applications). Unless you really need something like quad-buffered stereoscopic 3D in your modelling applications, or you're doing a lot of other things with your 3D content creation applications, just get a really good GeForce card.

    Even the GeForce GTX 690 might be a little overkill. I mean, what hardware are you really targeting with your game?
    Last edited by Phopojijo; 08-08-2012 at 02:47 PM.
    Core i7 920, Foxconn Renaissance, 6GB OCZ DDR3, GeForce GTX460 + GTX 260
    X-Fi Platinum, Bose QC15 Headphones, Windows 7 x86-64
    Rosewill RK-9000 (PS/2 Adapt. NKRO), Adesso Cybertablet 12000, LG Wide 20" LCD + Samsung 23" 1080p LED + LG Wide 22" Glossy LCD

  15. #15

    Default

I use a 4GB GTX 680 from EVGA. I would say that it runs better than the 2GB counterpart, and there also isn't a huge difference compared to the 670.

    Keep in mind that not many people have high-end graphics cards, and by the time a large, content-rich game is finished your graphics card will be within the standard. Keeping up to date through development is good, but making sure that the consumer can play the game efficiently helps as well.

    I had a look at what technologies are being developed over the next four years and settled for a 6-series Nvidia; I would also be happy simply upgrading by adding a card for SLI to keep up to date.

    There aren't many games that will max out the GTX 680 now that the drivers are starting to mature, which is something else to keep in mind.

  16. #16
    MSgt. Shooter Person
    Join Date
    Sep 2010
    Location
    Bristol/ UK
    Posts
    344
    Gamer IDs

    Gamertag: modfringatoon

    Default

    Quote Originally Posted by AussieBacom View Post
I use a 4GB GTX 680 from EVGA. I would say that it runs better than the 2GB counterpart, and there also isn't a huge difference compared to the 670.

    Keep in mind that not many people have high-end graphics cards, and by the time a large, content-rich game is finished your graphics card will be within the standard. Keeping up to date through development is good, but making sure that the consumer can play the game efficiently helps as well.

    I had a look at what technologies are being developed over the next four years and settled for a 6-series Nvidia; I would also be happy simply upgrading by adding a card for SLI to keep up to date.

    There aren't many games that will max out the GTX 680 now that the drivers are starting to mature, which is something else to keep in mind.
Agreed. That's why I use a GTX 670 and have a 4830 as well, to see how it runs on DX9.

  17. #17
    Marrow Fiend

    Join Date
    Jul 2006
    Location
    WorldInfo_61
    Posts
    4,198
    Gamer IDs

    Gamertag: KickedWhoCares

    Default

4GB is a heap of memory for a GPU; it'd be interesting to see the stats on usage vs the 2GB card in a variety of apps and games.

  18. #18
    MSgt. Shooter Person
    Join Date
    Sep 2010
    Location
    Bristol/ UK
    Posts
    344
    Gamer IDs

    Gamertag: modfringatoon

    Default

I use a 2GB GTX 670 for games production, gaming, graphics design and video editing.
    I also have an ATI 4830 for testing my work, to see how lower-end graphics hardware handles everything; then I go back over to the 670 to tweak things if the 4830 can't handle them.
    It's also nice to see how everything looks without tessellation and PhysX enabled. It allows me to keep everything looking nice for everyone when I hit the closed beta stages of my work.

  19. #19

    Default

    Quote Originally Posted by MonsOlympus View Post
4GB is a heap of memory for a GPU; it'd be interesting to see the stats on usage vs the 2GB card in a variety of apps and games.
It depends on what you're doing: multi-display or 3D. 4GB would also be good for GPU renderers that have to load all data into memory, but it turns out the 600-series GPUs aren't as good as the 500-series GPUs for GPU rendering.

  20. #20
    MSgt. Shooter Person
    Join Date
    Sep 2010
    Location
    Bristol/ UK
    Posts
    344
    Gamer IDs

    Gamertag: modfringatoon

    Default

    Quote Originally Posted by darthviper107 View Post
It depends on what you're doing: multi-display or 3D. 4GB would also be good for GPU renderers that have to load all data into memory, but it turns out the 600-series GPUs aren't as good as the 500-series GPUs for GPU rendering.
Multiple displays only get hit hard at extreme graphics settings, and 3D doesn't really affect it that much. I played in an L4D2 tournament a few years back with multiple displays and 3D Vision; it was more annoying than anything, and the GPU being used wasn't as overkill as expected, but it still managed to achieve a decent frame rate in the end.

  21. #21

    Default

    Well, the increased resolution adds up in the GPU memory, which is why multi-display or 3D would need more memory.
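A quick back-of-the-envelope sketch of why resolution and display count drive memory use. The figures are illustrative only: `bytes_per_pixel=4` assumes a 32-bit RGBA colour buffer and `buffers=2` assumes double buffering, and real drivers allocate much more on top (depth buffers, anti-aliasing samples, textures):

```python
def framebuffer_bytes(width, height, bytes_per_pixel=4, buffers=2):
    """Rough size of the colour buffers alone for one display.

    This counts only the swap-chain colour buffers; actual GPU
    memory use is far higher once textures, depth and AA buffers
    are included.
    """
    return width * height * bytes_per_pixel * buffers

single = framebuffer_bytes(1920, 1080)             # one 1080p display
triple = 3 * framebuffer_bytes(1920, 1080)         # three-screen surround
stereo = framebuffer_bytes(1920, 1080, buffers=4)  # 3D: two images, double buffered

print(single, triple, stereo)  # 16588800 49766400 33177600
```

So even for the framebuffers alone, surround gaming triples the footprint and stereoscopic 3D doubles it, which is the effect being described here.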

  22. #22
    Marrow Fiend

    Join Date
    Jul 2006
    Location
    WorldInfo_61
    Posts
    4,198
    Gamer IDs

    Gamertag: KickedWhoCares

    Default

This is true: the amount of RAM will affect performance at higher resolutions. 4GB is a whole lot, especially when you have the option of SLI to do things like 3D and higher resolutions. I'd actually guess, though, that a single 4GB card might outperform 2x2GB in SLI, assuming the GPU fill rate isn't a bottleneck.

  23. #23
    MSgt. Shooter Person
    Join Date
    Sep 2010
    Location
    Bristol/ UK
    Posts
    344
    Gamer IDs

    Gamertag: modfringatoon

    Default

    Quote Originally Posted by MonsOlympus View Post
This is true: the amount of RAM will affect performance at higher resolutions. 4GB is a whole lot, especially when you have the option of SLI to do things like 3D and higher resolutions. I'd actually guess, though, that a single 4GB card might outperform 2x2GB in SLI, assuming the GPU fill rate isn't a bottleneck.
All of the texture data gets put onto both the first and second card; SLI does nothing but increase the speed at which it is processed. Two 2GB GPUs don't equal a 4GB graphics card: with 2x2GB you still only have 2GB of usable graphics RAM.

  24. #24

    Default

    I don't think he was saying that--I think what he's saying is that the added memory might allow a 4GB GPU to perform better than 2 2GB GPUs because it doesn't have to manage the memory so much.

  25. #25

    Default

    Quote Originally Posted by darthviper107 View Post
    I don't think he was saying that--I think what he's saying is that the added memory might allow a 4GB GPU to perform better than 2 2GB GPUs because it doesn't have to manage the memory so much.
That's not how it works, though. Each card does exactly the same thing together as it would have done apart. It will perform *exactly the same* in terms of memory.

    NVIDIA would have had to have changed something VERY recently for that to be not true... and I'm pretty sure they can't.
    Core i7 920, Foxconn Renaissance, 6GB OCZ DDR3, GeForce GTX460 + GTX 260
    X-Fi Platinum, Bose QC15 Headphones, Windows 7 x86-64
    Rosewill RK-9000 (PS/2 Adapt. NKRO), Adesso Cybertablet 12000, LG Wide 20" LCD + Samsung 23" 1080p LED + LG Wide 22" Glossy LCD

  26. #26
    Marrow Fiend

    Join Date
    Jul 2006
    Location
    WorldInfo_61
    Posts
    4,198
    Gamer IDs

    Gamertag: KickedWhoCares

    Default

Yeah, I was saying that 2x2GB cards only have a total of 2GB of usable memory, so despite the fact that they have more fill rate at higher resolutions (since each only does part of the image), they might not perform as well as a single 4GB card for these purposes. Let's face it, though: who is going to be running above 1080p, or 1080p 3D, for gaming? We have options for AA that go far beyond the simple sampling methods we used to have and that can be applied full screen; I know my 3D TV supports upscaling natively, so on that end some filtering and minor enhancements can be done using the processor in modern hardware.

    In a dual-graphics-processor situation, though, you can have one GPU dedicated to rendering each frame required for a 3D picture, including doing the AA on each card. In that type of situation I do think that an SLI setup of 2GB GTX 670s would outperform a single 4GB GTX 680.
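The memory point being made across these posts is that SLI-era multi-GPU rendering mirrors data rather than pooling it: each card holds a full copy of the textures and render targets, so adding cards adds fill rate but not usable memory. A small sketch of that accounting (a hypothetical helper for illustration, not an Nvidia API; the `pooled` branch is shown only for contrast):

```python
def effective_vram_gb(per_card_gb, num_cards, pooled=False):
    """Usable graphics memory for a multi-GPU setup.

    With alternate-frame rendering every card mirrors the same
    data, so the usable pool is one card's memory no matter how
    many cards are installed (pooled=False). A single big card
    is just the one-card case.
    """
    if pooled:
        # Not how SLI of this era worked; included to show
        # what naive "2 + 2 = 4" accounting would claim.
        return per_card_gb * num_cards
    return per_card_gb

print(effective_vram_gb(2, 2))  # two 2GB cards in SLI -> 2GB usable
print(effective_vram_gb(4, 1))  # one 4GB card -> 4GB usable
```

This is why a single 4GB card can win in memory-bound scenarios even against an SLI pair with more combined fill rate.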

  27. #27

    Default

    Quote Originally Posted by MonsOlympus View Post
4GB is a heap of memory for a GPU; it'd be interesting to see the stats on usage vs the 2GB card in a variety of apps and games.
I went for 4GB because I use all three screens to multitask; it also lets me use all of my 3D editors at once without impacting too heavily on the UDK. Photoshop also likes to take large chunks of video memory when using some filters.

    Sometimes the applications use next to nothing; other times I'm stretching things to the limit.

    It also means that if content I develop runs on a 4GB card now, people will be able to run it on their machines when it's time to release (provided technology passes 4GB sometime).

    It depends on the job, but as an all-rounder that will stand the test of time I think the current 6-series 4GB models, or two in SLI, are perfect.

  28. #28
    Marrow Fiend

    Join Date
    Jul 2006
    Location
    WorldInfo_61
    Posts
    4,198
    Gamer IDs

    Gamertag: KickedWhoCares

    Default

Well, I know current consumer-level technology has been hovering at the 1GB mark for over 5 years now, so with Nvidia's PhysX and the newer 22nm process that's enough for most consumers to upgrade. I know as a developer it can be handy to have the extra resources, with a little always on hand, but realistically consumers don't need that and would be looking for games that use all their (lesser) resources as a foreground app.


Copyright ©2009-2011 Epic Games, Inc. All Rights Reserved.