Recommended Computer Requirements


paulchoate

3 hours ago, HumbleChief said:

Today's video cards are so powerful that Chief barely strains them, even with large models. In my opinion, $900 is a ridiculous amount of money to upgrade to a card from which you will most likely not see noticeably better performance. Yes, you will have a better, faster video card and you won't have to upgrade for a while, but the lesser 1070 will also last you years before you need to upgrade, and when you do, you will most likely upgrade the entire machine, and every component will be better, faster, and a better value.

 

To Chop's point above, I upgraded from an older 780 (I think) to the 1080 and noticed no increase in performance other than the placebo effect of knowing I had the latest and greatest video card.

Well, the 1080 tests so much better than the 780 on paper. Your problem is probably something else, like your motherboard or your Xeons, so you don't see it.


6 minutes ago, DRAWZILLA said:

Well, the 1080 tests so much better than the 780 on paper. Your problem is probably something else, like your motherboard or your Xeons, so you don't see it.

 

That's the challenge: getting one's system properly balanced so you don't have some bottleneck that deprives you of the potential performance you're paying for. Many of those tests, especially on gaming graphics cards, demonstrate performance functions that CA does not utilize, so they don't hold much relevance as to how CA will actually perform or what degree of benefit you may realize when considering an upgrade.

 

I believe the Chandelier stress test I submitted a short while ago demonstrated this fact. All users who ran it encountered similar results regardless of their graphics cards' performance specs. The bottleneck appears to be that CA does a lot of CPU calculations, especially for a rebuild, before handing things over to the GPU; as such, the GPU just sits idle while it waits for something to do.

 


10 minutes ago, TheKitchenAbode said:

 

That's the challenge: getting one's system properly balanced so you don't have some bottleneck that deprives you of the potential performance you're paying for. Many of those tests, especially on gaming graphics cards, demonstrate performance functions that CA does not utilize, so they don't hold much relevance as to how CA will actually perform or what degree of benefit you may realize when considering an upgrade.

 

I believe the Chandelier stress test I submitted a short while ago demonstrated this fact. All users who ran it encountered similar results regardless of their graphics cards' performance specs. The bottleneck appears to be that CA does a lot of CPU calculations, especially for a rebuild, before handing things over to the GPU; as such, the GPU just sits idle while it waits for something to do.

 

I believe your chandelier test had too many variables to be totally accurate, like all the different computer systems used for the test; it's very hard to say for sure unless you use the same systems. Just my opinion, but thanks for all your hard work on that, it was very interesting for sure.


6 minutes ago, DRAWZILLA said:

I believe your chandelier test had too many variables to be totally accurate, like all the different computer systems used for the test; it's very hard to say for sure unless you use the same systems. Just my opinion, but thanks for all your hard work on that, it was very interesting for sure.

 

I was truly surprised by the results; I was expecting to see a much greater spread from the differing systems. But after contemplating the results, I believe what it was demonstrating was that the CPU-based operations were where we ran into a wall. As such, our GPUs were not being unleashed to show what they could really do. From what I can recall, though our systems varied, we all had reasonably good CPUs and, more importantly, most of our CPUs had very similar single-threaded performance specs. I'm reasonably convinced that this is why there was so little difference between our results. What was actually being tested was CA's 3D model rebuild operation, which is a CPU operation, not a GPU one.


17 minutes ago, TheKitchenAbode said:

 

I was truly surprised by the results; I was expecting to see a much greater spread from the differing systems. But after contemplating the results, I believe what it was demonstrating was that the CPU-based operations were where we ran into a wall. As such, our GPUs were not being unleashed to show what they could really do. From what I can recall, though our systems varied, we all had reasonably good CPUs and, more importantly, most of our CPUs had very similar single-threaded performance specs. I'm reasonably convinced that this is why there was so little difference between our results. What was actually being tested was CA's 3D model rebuild operation, which is a CPU operation, not a GPU one.

To me, your test confirmed what I had already come to assume about Chief Architect and hardware implementation: that the problem is in the coding, not in the hardware; there is some refinement yet to be made. Ever notice how Apple can squeeze new snappiness into an old phone with a software upgrade, just from trimming the fat and better coding?

 

My systems are both powerful, but my desktop should really stand out; yet both systems had the same results in your test, which were within the margin of everyone else's tests. When I have a high-poly scene bogging down in Chief, the same scene imported into Thea will work beautifully... that says something for sure!


2 hours ago, DRAWZILLA said:

Well, the 1080 tests so much better than the 780 on paper. Your problem is probably something else, like your motherboard or your Xeons, so you don't see it.

I actually never had a problem with 3D performance; I just thought I did, and thought the 1080 would speed things up, but it had no effect (again, on 3D performance). And no doubt the 1080 tests much better than the 780, but the 780 simply moves enough data fast enough that I don't see a difference in everyday use on my system.

 

It's most likely the single-threaded performance of my Xeons that is causing a bottleneck in Chief, but the 3D performance has been very good to excellent, even with the older 780. One thing that owning a 1080 will do is give someone peace of mind knowing they have the fastest video card around, and that can be valuable. One just has to weigh that peace of mind against the actual cost/performance/value equation, and consider whether peace of mind is an objective measure of video card performance.

 

For me it's actually a toss-up. I'm really happy I have the 1080 because it means I don't have to look to my video card for any performance boost. Would I take my own advice and buy a 1070 or another lesser card next time? Dunno.

 


2 hours ago, Renerabbitt said:

To me, your test confirmed what I had already come to assume about Chief Architect and hardware implementation: that the problem is in the coding, not in the hardware; there is some refinement yet to be made. Ever notice how Apple can squeeze new snappiness into an old phone with a software upgrade, just from trimming the fat and better coding?

 

My systems are both powerful, but my desktop should really stand out; yet both systems had the same results in your test, which were within the margin of everyone else's tests. When I have a high-poly scene bogging down in Chief, the same scene imported into Thea will work beautifully... that says something for sure!

So sorry to agree with you. Not sorry about the agreement, but sorry that Chief will, in essence, never really be fast no matter how much hardware you are willing to throw at it. I have recently used some high-end software from other vendors, in this case Autodesk Inventor and SolidWorks, and you can feel the investment the programmers, and by extension the company, have put into the software. It's something I think you also feel when using Chief, but in a not-so-good way.


  • 2 weeks later...
On 6/8/2017 at 6:31 PM, paulchoate said:

I like the 15" 4K screen on my Lenovo Yoga, BUT I also like the 17" no-glare screen on my Asus G752. I want to merge them together so that I have a 17" 4K anti-glare screen. Ain't gonna happen, lol. Seriously, the anti-glare is a bit "dull," but it's nice; I believe it's easier on the eyes when looking at the screen for a long time. On the flip side, the 4K image on a quality ray trace looks awesome... customers like that. So, I do most of my work on the large, no-glare 17" screen, then bring the smaller 4K screen to show my customers. Plus, my Lenovo is a touch screen, which I don't use often, but it's nice to have.

Follow-up... going to return the 15.6" 4K laptop. The resolution is nice, but there are just too many times I need to adjust the screen scaling because the text is too small but the icons too large, or vice versa. It seems 4K really isn't optimal until the monitor is at least 30". So, back to the standard HD screen!

 


5 hours ago, paulchoate said:

Follow-up... going to return the 15.6" 4K laptop. The resolution is nice, but there are just too many times I need to adjust the screen scaling because the text is too small but the icons too large, or vice versa. It seems 4K really isn't optimal until the monitor is at least 30". So, back to the standard HD screen!

 

There are global and per-application scaling options built into Windows. I needed to adjust all of them for my 4K screen, and now I'm set.
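For the curious, the per-application side of that comes down to whether an app declares itself DPI-aware. A minimal Python sketch of the underlying API, assuming Windows 8.1 or later (purely an illustration of the mechanism, not something you need for adjusting the settings above):

```python
# Windows-only: opt the current process into per-monitor DPI awareness
# so its UI renders crisply on a 4K display instead of being
# bitmap-stretched (and blurred) by Windows.
import ctypes

PROCESS_PER_MONITOR_DPI_AWARE = 2  # constant defined by the Windows SDK
ctypes.windll.shcore.SetProcessDpiAwareness(PROCESS_PER_MONITOR_DPI_AWARE)
```

Apps that never make a call like this are the ones Windows has to scale for you, which is where the too-small text and too-large icons come from.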


Upgrading my desktop. I'm okay spending more (option 3) if it significantly improves speed on "normal" drawing tasks and cuts down significantly on ray trace time. But if a faster 4-core CPU (7700K) is just as fast overall as a slower 8-core CPU (AMD Ryzen 1800X or Intel 7820X), going by clock speed, then why pay more for the 8-core? I was told by a CA rep that "the more cores the better" for ray tracing, that the Nvidia GeForce cards are a better choice than Quadros, and that the i7 CPUs are a better choice than Xeons (as another CA member also pointed out). Other than the CPU and GPU, the three computers are all on par with each other (as far as I can see). The hard drive sizes are all plenty big enough for me. Don't know if the 1080 Ti is way overkill, as I'm not a gamer, but I will have a two- or three-monitor setup.

 

Choices are three computer setups as follows:

 

Option 1 specs:

Intel 7700K CPU (4-core @ 4.2 GHz)

Nvidia 1080 8GB  graphics card

240 GB SSD

1 TB hard drive

$1,500

 

Option 2 specs:

AMD Ryzen 7 1800X CPU (8-core, 3.6 GHz)

Nvidia 1080 8GB graphics card

240 GB SSD

2 TB HD

$1,900

 

Option 3 specs:

Intel 7820x CPU (8-core, 3.6 GHz)

Nvidia 1080 Ti 11 GB graphics card

480 GB SSD

3 TB HD (7200 RPM)

$2,650 

 

Any input will be very much appreciated.

 

7 minutes ago, paulchoate said:

Upgrading my desktop. I'm okay spending more (option 3) if it significantly improves speed on "normal" drawing tasks and cuts down significantly on ray trace time. But if a faster 4-core CPU (7700K) is just as fast overall as a slower 8-core CPU (AMD Ryzen 1800X or Intel 7820X), going by clock speed, then why pay more for the 8-core? I was told by a CA rep that "the more cores the better" for ray tracing, that the Nvidia GeForce cards are a better choice than Quadros, and that the i7 CPUs are a better choice than Xeons (as another CA member also pointed out). Other than the CPU and GPU, the three computers are all on par with each other (as far as I can see). The hard drive sizes are all plenty big enough for me. Don't know if the 1080 Ti is way overkill, as I'm not a gamer, but I will have a two- or three-monitor setup.

 


Anyone willing to do a speed test against my Xeon, rendering the same plan with the same settings? I'm just as curious, but my hunch is that I beat out the 7700 by a decent spread... could be totally wrong, though.

 

ALSO, and this is a BIG also... get a motherboard that supports M.2; the Samsung 960 is SOOOOOOO much faster than any comparable SATA drive.
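If you want to see the gap yourself, here's a minimal sequential-read sketch in Python (the file path is a placeholder; point it at a multi-GB file on the drive under test, and note the OS file cache can inflate a second run):

```python
# Rough sequential-read throughput check. buffering=0 disables Python's
# own buffering, but the OS file cache can still skew repeat runs.
import time

TEST_FILE = "testfile.bin"   # hypothetical multi-GB file on the drive under test
CHUNK = 8 * 1024 * 1024      # read in 8 MiB chunks

def sequential_read_mb_per_s(path: str) -> float:
    total = 0
    start = time.perf_counter()
    with open(path, "rb", buffering=0) as f:
        while chunk := f.read(CHUNK):
            total += len(chunk)
    elapsed = time.perf_counter() - start
    return total / elapsed / 1_000_000  # bytes/s -> MB/s

print(f"{sequential_read_mb_per_s(TEST_FILE):.0f} MB/s")
```

An NVMe M.2 drive like the 960 should report several times the number a SATA SSD does on a test like this.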


It's fairly easy to estimate the performance one could expect when comparing frequency and cores; the relationship is fairly linear. If one system runs at twice the frequency of another, it can have half the cores and still be about equal to the lower-frequency system with twice the core count.
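A minimal sketch of that back-of-the-envelope estimate, assuming ray trace throughput scales linearly with both logical core count and clock speed (real renders fall somewhat short of perfect scaling):

```python
# Naive relative ray trace throughput: logical cores x clock (GHz).
def relative_throughput(cores: int, ghz: float) -> float:
    return cores * ghz

# Half the cores at twice the frequency comes out about equal:
print(relative_throughput(8, 4.0))   # 32.0
print(relative_throughput(16, 2.0))  # 32.0
```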

 

That 14-core Xeon 2695 v3 has a lot of cores, so even though a 4-core (8-thread) system might run at a higher frequency, that sheer number of cores would likely result in at least a 50% improvement in ray trace throughput over, say, my i7 6700K.

 

I know I'm a bit repetitive on this issue, but there are other ways to speed up your ray traces without having to spend a ton of money on your CPU.

 

This scene ran in 3 minutes at 1200 x 600 px. I just can't justify spending another $1,000 to save maybe 90 seconds.

[Attached image: ray trace of the scene]


16 minutes ago, TheKitchenAbode said:

It's fairly easy to estimate the performance one could expect when comparing frequency and cores; the relationship is fairly linear. If one system runs at twice the frequency of another, it can have half the cores and still be about equal to the lower-frequency system with twice the core count.

 

That 14-core Xeon 2695 v3 has a lot of cores, so even though a 4-core (8-thread) system might run at a higher frequency, that sheer number of cores would likely result in at least a 50% improvement in ray trace throughput over, say, my i7 6700K.

 

I know I'm a bit repetitive on this issue, but there are other ways to speed up your ray traces without having to spend a ton of money on your CPU.

 

This scene ran in 3 minutes at 1200 x 600 px. I just can't justify spending another $1,000 to save maybe 90 seconds.

[Attached image: ray trace of the scene]

Thanks for your input. This ray trace ran for about 6 hours on a 7700HQ (4-core, 2.8 GHz) CPU. And yes, I know I can shut off layers, objects, etc. when ray tracing to reduce the time it takes, but I'd prefer to be able to leave everything on. It's kind of like how you can reduce a car's weight and change gearing to improve speed/performance, OR just build more horsepower. In this case I'd like to simply build more horsepower. I'd like to reduce my ray tracing from 2-4 hours to 15-30 minutes.

 

[Attached image: Ray Trace 2.jpg]


1 minute ago, paulchoate said:

Thanks for your input. This ray trace ran for about 6 hours on a 7700HQ (4-core, 2.8 GHz) CPU. And yes, I know I can shut off layers, objects, etc. when ray tracing to reduce the time it takes, but I'd prefer to be able to leave everything on. It's kind of like how you can reduce a car's weight and change gearing to improve speed/performance, OR just build more horsepower. In this case I'd like to simply build more horsepower. I'd like to reduce my ray tracing from 2-4 hours to 15-30 minutes.

 

[Attached image: Ray Trace 2.jpg]

 

If that scene took 6 hours, then there is no way you can reduce the time to 30 minutes by increasing the horsepower. Even reducing the time from 2-4 hours to 15 minutes is not really going to be possible. You can do the calculation based on your current processor, which has 8 logical cores: going from 4 hours to 2 hours will require 16 logical cores; getting that 2 hours down to 1 hour will require 32 cores; getting that 1 hour to 30 minutes will require 64 cores; and getting that 30 minutes to 15 minutes will require 128 cores.
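The same doubling arithmetic as a one-liner, assuming render time scales perfectly inversely with logical core count (real scaling is somewhat worse, so these are best-case numbers):

```python
# Logical cores needed to hit a target render time, assuming ideal
# inverse scaling of time with core count.
def cores_needed(current_cores: int, current_hours: float, target_hours: float) -> float:
    return current_cores * (current_hours / target_hours)

print(cores_needed(8, 4.0, 0.25))  # 4 h -> 15 min on 8 cores: 128.0 cores
```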

 

Also, the scene I posted does not have anything turned off; there were 64 active lights on when it was run.


I'm aware that I won't be able to get a high-quality 6-hour ray trace down to 15 minutes... I simply want to cut the time down significantly. That 15-minute sentence of mine wasn't really meant to be taken literally... though if it can be done, I'd love to make it happen.


2 minutes ago, paulchoate said:

I'm aware that I won't be able to get a high-quality 6-hour ray trace down to 15 minutes... I simply want to cut the time down significantly. That 15-minute sentence of mine wasn't really meant to be taken literally... though if it can be done, I'd love to make it happen.

 

You could likely make it happen by using a different lighting technique. The biggest culprits are those point lights, then environmental lighting and caustics. If you could find ways to eliminate those items, you would likely reduce those times by a factor of 3 or 4. I never use them; all of my lighting is done with spot lights.


29 minutes ago, TheKitchenAbode said:

 

You could likely make it happen by using a different lighting technique. The biggest culprits are those point lights, then environmental lighting and caustics. If you could find ways to eliminate those items, you would likely reduce those times by a factor of 3 or 4. I never use them; all of my lighting is done with spot lights.

 

27 minutes ago, paulchoate said:

Thanks, good to know. I still want more horsepower though, lol.

Just did a test scene for those who are curious; see the computers in my sig.

Xeon: 28 passes in 33 min

6700HQ: 28 passes in 1 hour 17 min

[Attached images: test render (28 passes, 33 min) and two screenshots of the timing results]


7 minutes ago, Renerabbitt said:

 

Just did a test scene for those who are curious; see the computers in my sig.

Xeon: 28 passes in 33 min

6700HQ: 28 passes in 1 hour 17 min

[Attached images: test render (28 passes, 33 min) and two screenshots of the timing results]

 

That seems to be in line with what one would expect when projecting the time benefit from the specs of those two processors.

 

What would be interesting is to see what the times would be if you turned off those point lights.
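As a rough sanity check on those numbers (the clock speeds below are the published base clocks, an assumption on my part, since all-core turbo behavior varies):

```python
# Observed speedup from the posted results vs. the naive cores-x-clock estimate.
observed = 77 / 33                  # 1 h 17 min vs 33 min -> ~2.33x
# Assumed specs: Xeon E5-2695 v3 = 28 threads @ 2.3 GHz base;
# i7-6700HQ = 8 threads @ 2.6 GHz base.
estimated = (28 * 2.3) / (8 * 2.6)  # ~3.1x
print(f"observed {observed:.2f}x, naive estimate {estimated:.2f}x")
```

The observed gap lands a bit under the naive estimate, which is about what you'd expect once turbo clocks and imperfect scaling enter the picture.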

