Ryan-M

Chief Architect
Posts posted by Ryan-M

  1. 4 hours ago, stevenyhof said:

    How is this for materials and texture mapping? I knew Chief had it in it. Was a nice way to spend a rainy Saturday afternoon.

     

    My posts and beams are solids with solids subtracted on the edges to split the wood. The roof and a few other textures are using all the map fields. The stones are terrain sidewalk polys broken up to allow the grass through. I used my grunge images on the brick walls and the deck floor. Chief's grass is very nice, but their plants pull the realistic look out of the picture.

     

    My background is not perfect yet, but my granddaughter is bugging me to play with her. So I will fix this up later.

     

     

    This is cool, thanks for sharing.  Adding more variety to the grass tool (and expanding it to other types of similar foliage) is something we've talked about a lot, and these are good examples of what it could do.

  2. It causes the view to stop rendering when it reaches the specified number of Max Export Samples.  In X15 the view will denoise at this point.  If it's left unchecked the view will continue to render indefinitely.
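    A toy sketch of that stopping behavior (all names here are made up for illustration; this is not Chief's actual code):

```python
# Illustrative sketch of a progressive render loop with a sample cap.
def progressive_render(max_export_samples=None, hard_stop=10_000):
    accum, samples = 0.0, 0
    while samples < hard_stop:          # unchecked: keeps rendering "forever"
        accum += 1.0                    # stand-in for one more sample pass
        samples += 1
        if max_export_samples is not None and samples >= max_export_samples:
            return denoise(accum / samples), samples  # X15 denoises here
    return accum / samples, samples

def denoise(image):
    return image                        # placeholder for a real denoiser
```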

  3. So, this is basically a bug.

     

    The way that we apply lighting to surfaces viewed through a glass (or any refractive) material is different than the way we apply lighting to surfaces that are directly visible.  In X13 - X15 there is a fixed limit on the number of lights that we apply to these kinds of surfaces.  This means that some of those lights inside of the cabinet may not actually produce light on the surfaces visible through the cabinet glass.  I can certainly understand that this is confusing and not the ideal behavior.
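    To make the limit concrete, here is a toy restatement (the constant and names are invented for illustration; they are not Chief's real values):

```python
# Illustrative only: surfaces seen through a refractive material get at
# most a fixed number of lights applied, so lights beyond the cap
# contribute nothing to those surfaces.
MAX_REFRACTED_LIGHTS = 4  # hypothetical fixed limit, not Chief's real value

def apply_lighting(light_intensities, seen_through_refraction):
    if seen_through_refraction:
        light_intensities = light_intensities[:MAX_REFRACTED_LIGHTS]
    return sum(light_intensities)  # stand-in for a real shading sum
```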

     

    This is unlikely to be a problem in newer versions of Chief.

  4. 2 hours ago, CarolinaDecks said:

    I'm looking for advice on how to best render an addition where the addition is in one file, and the original house is in another file but visible as a reference file. This was my first time trying this approach (the purpose was to be able to split out new framing from old framing, etc.) and it works great - until I went to Ray Trace the whole project. Is there a way to make the reference file visible in the ray trace, or is there a better way to combine the files at the end for the final rendering? Thanks! 

    If you're talking about CPU ray tracing, then reference plan geometry won't display.  If you're talking about ray traced PBR then it should display.

     

    To get it to show up in a CPU ray trace you could export the reference plan as a symbol (e.g. via 3DS export) and import it into the plan you want to ray trace from.  This assumes you want your reference geometry to show up with materials.  If you want it rendered in a different technique (like Glass House) then I'm not sure you can do that with CPU ray tracing.

  5. 3 minutes ago, VHampton said:

    This was a Staff recommendation: (screenshot attached)

    This is in regards to CPU Ray Tracing.  It doesn't sound to me like that is what the original poster is trying to do, but they can correct me if I'm wrong.

     

    I will update my post, though, as you're right that there is a valid reason in X14 to use Rosetta (though I still wouldn't recommend it unless you're doing CPU Ray Traces).  Thanks for pointing this out.  This doesn't apply to X15, where we have fixed the problem that this was working around in X14.

  6. It should not be necessary to use Rosetta for X15, and I would encourage you to contact tech support before trying to solve this in that way as running with Rosetta will negatively impact performance.  There was an issue with the CPU Ray Tracer executable in X14, but I believe that has been fixed in X15 and it wouldn't have anything to do with opening objects or creating camera views.

     

    We have an abundance of M1 laptops and a few M2 laptops in-house that we don't have issues with, so I'm not sure what to suggest at this point except that contacting tech support is your best bet.

     

    Edit: As @VHampton has pointed out, you may need to use Rosetta in X14 if you're using CPU Ray Tracing. This is fixed in X15.

  7. 7 hours ago, SNestor said:

    @LevisL - I don’t believe that RTRT is supported for the Mac. That could be the reason your renders don’t look great.
    Apple uses “Metal” as their graphics engine and I don’t believe Chief has optimized their software to use this. I could be mistaken… maybe someone from Chief could jump in here and set us all straight?

    As of X13 we do use Metal.  We also run natively on ARM as of X14.  Ray tracing is independent of both of these things.

     

    Mac ray tracing is something we regularly evaluate.  Here is why it hasn't happened yet:

    • When it comes to GPU code for non-ray tracing functionality, we're able to author it such that it "just works" on both platforms. This doesn't apply to ray tracing. Without going into too much detail, this is a considerable technical problem that doesn't have a great solution right now.
    • There is no hardware acceleration for ray tracing on any existing Apple hardware.  "Hardware acceleration" means dedicated hardware for tracing rays, e.g. the kind present in NVIDIA RTX and AMD RX 6xxx GPUs.  This doesn't mean it's impossible to perform ray tracing on Apple hardware, but it does mean ray tracing will be much slower than on other hardware.

    The two major takeaways here are:

    1. Right now, the performance we would be able to get on this hardware makes it very difficult to justify the implementation and maintenance cost (which is very high).
    2. Even if the hardware becomes available tomorrow, we wouldn’t be able to flip a switch and make it work on the Mac. There’s substantial effort involved to get Chief to the point that it can leverage said hardware, and this would involve the planning and budgeting of engineering time well in advance.

    Here is an article that compares M1 GPU ray tracing performance against various CPUs and GPUs.  The short of it is that the M1 is between 30x and 40x slower than an RTX 2070 (a mid-range first-generation NVIDIA RTX card).  Granted, this is the base M1 model.  If we assume the M1 Max is in fact 4x faster than the M1, as Apple seems to claim, then it's still upwards of 10x slower (at tracing rays) than a mid-range PC GPU from 2018.  Modern ray tracing-capable cards have improved in ray tracing performance comparably to the M1 Max's improvement over the M1, so we're still looking at a 30-40x performance difference between a modern Mac and a modern PC.
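    The arithmetic above, spelled out (rough factors taken from the article and Apple's claims, not measurements of ours):

```python
# Rough relative ray tracing throughput; a larger slowdown factor = slower.
m1_slowdown_vs_rtx2070 = 35      # base M1: roughly 30-40x slower, take ~35x
m1_max_speedup_over_m1 = 4       # Apple's claimed M1 Max advantage over M1
m1_max_slowdown = m1_slowdown_vs_rtx2070 / m1_max_speedup_over_m1
print(m1_max_slowdown)           # ~8.75, i.e. still roughly 10x slower
```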

     

    This is a topic that we internally discuss routinely. It's a complex business decision, not something we're withholding for arbitrary reasons.  The equation changes over time, and we’ll continue to evaluate where we’re at as new technology becomes available and our PBR implementation evolves.
     

  8. In the Adjust Lights Dialog, switch to using a Light Set and you can explicitly control which lights are on.  When using Automatic lighting, Chief will only turn on lights on the floor that the camera is on, starting in the room that the camera is in when a rebuild or refresh last occurred.
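    A rough sketch of that automatic selection rule (the data model and names are invented for illustration; this is not Chief's API):

```python
# Illustrative only: with Automatic lighting, only lights on the camera's
# floor are candidates, with the camera's room considered first.
def automatic_lights(lights, camera_floor, camera_room, max_lights=8):
    on_floor = [l for l in lights if l["floor"] == camera_floor]
    on_floor.sort(key=lambda l: l["room"] != camera_room)  # camera room first
    return on_floor[:max_lights]
```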

     

    (screenshot attached)

  9. When using automatic lighting, Chief will re-evaluate what lights are turned on each time the model is rebuilt.

     

    Chief will also turn off the sun light, if shadows are turned off, when your camera is inside a room.

     

    In this case the "room" is outside, and it would be reasonable to make the argument that we shouldn't turn off the sun in this case.
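    The rule, restated as a toy function (hypothetical, not Chief's code):

```python
# Illustrative restatement: the sun is turned off only when shadows are off
# AND the camera is inside a room. Since "outside" counts as a room here,
# the sun can switch off in an exterior view, which is the surprise above.
def sun_is_on(shadows_on, camera_in_room):
    return shadows_on or not camera_in_room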

  10. I can’t provide a roadmap as that is not my role, but I can weigh in at least on the GPU-oriented parts of the discussion as that is my role at Chief.

     

    I agree with @ghitchens’s statement that “integrated” vs. “discrete” is no longer an adequate way to describe these GPUs, particularly in the context of the newly announced hardware.  While it may be literally true (depending on how we define integrated, I guess), the integrated vs. discrete distinction connotes certain performance characteristics that no longer seem to apply in the case of modern Apple hardware.

     

    Personally, I am excited to see Apple drastically improving their graphics performance.  Better hardware translates directly to a better experience for our users, not to mention us as developers.  However, it’s important to recognize that “performance” is not a monolithic quantity.  The most frequent question we get from users regarding graphics and rendering on the Mac is when ray tracing will work and why it doesn’t right now.  Bear in mind that the frequency with which we hear this question is likely a factor in how Scott responded above.

    Looking at the graphs Apple presented during their announcement, the M1 Pro/Max GPUs appear to be on par with fairly recent discrete GPUs, but the graphs don’t provide a lot of information as far as what was being benchmarked.  M1 GPUs do not provide hardware support for ray tracing.  Their rasterization and compute performance appears to be on par with good discrete cards, which is fantastic, but the base M1 is 30-40x slower than entry-level RTX cards when it comes to ray tracing throughput, and the Pro/Max improvements are unlikely to bridge that gap.

    DirectX combined with ray tracing hardware has given us the tools that make it comparatively simple to support real-time ray tracing; Metal and Mac hardware have not yet done so.  We will, of course, continue to evaluate whether or not we’re able to satisfy our performance requirements on new hardware as it becomes available (including the M1 Pro/Max).

     

    Regarding compiling Chief for native arm64, @ghitchens's suspicion is accurate.  Chief leverages a large number of libraries that we need to be able to compile for arm64; Qt is one of these, as they have pointed out, but it is far from the only one.  Some apps have likely been able to flip a switch in Xcode and be in good shape to run natively on Apple Silicon, but this is very much not the case for Chief.  That said, it's certainly something we are aware of and it is being actively evaluated.

     

    As far as the overall question in this post, Chief does (to the best of my knowledge) work on M1, and I expect that the experience will only improve as the hardware gets better, as our support for the hardware gets better, and, from a graphics perspective, as we iterate on our Metal implementation, which was only introduced this version.

  11. 1 minute ago, Dmerrick said:

    They told me OpenGL 3.3 is processor based rather than a video card task. Processors support a given level of GL and cannot be upgraded with a new video card.

     

    This is not correct.  The version of OpenGL that your computer is capable of running is dictated entirely by your video card and the drivers for that video card.

     

    To clarify, Chief does not take advantage of workstation (Quadro) graphics cards.  It is likely that you will see significantly worse performance in Chief going from a GTX 980 to an older (or even a modern) Quadro card.  Also note that your graphics card has absolutely no impact on ray tracing in Chief.  Upgrades to your graphics card will only affect render views.

  13. 3 hours ago, Kduke11 said:

    This is a real issue as my cursor is jittery on my plugable 3.0 station but only after I open a 3D view. If I don't open a 3D view the cursor is fine. Chief does not know why I only show 1GB when I have 4GB video card. They think the jitter has to do with the video card showing only 1GB. X8 works fine no issues even with 1GB showing. This is an X9 issue.

    What Plugable 3.0 station are you referring to?  Is it something like this? http://plugable.com/products/ud-3900/  These kinds of docking stations that utilize USB graphics do not generally support high performance graphical applications very well (as is noted in the Gaming section in the link).  The core of our rendering technology has changed significantly in X9, which is likely why you see a difference in behavior from X8.

     

     

  14. 17 minutes ago, Kduke11 said:

    Just got a new Asus ROG laptop with GTX 1050Ti 4GB DDR5 and in Chief all it shows in the Render Video Card Status is 1GB. How do I change it to use the full 4GB??

     

    This is a bug only in how Chief displays the available amount of video memory.  It will not impact how much video memory is actually utilized by Chief.

  15. 2 hours ago, michaelgia said:

    Ok so here is a vector view in X9. 

    It looked fine in X8. 

    I merely opened the plan in X9. 

    Anyone know what's going on?

     

    (screenshot from 2017-01-20 attached)

     

    We are aware of issues affecting a small subset of Mac hardware.  Can you tell me what graphics card your Mac has?  You can find this information in Preferences -> Render -> Video Card Status.

    Chief is not optimized in any way for workstation cards, and we don't put any additional emphasis on developing or testing for them relative to what we do for gaming cards.  Generally speaking, the hardware between workstation cards and gaming cards is very similar, but the drivers are optimized for very specific operations, oftentimes explicitly for certain applications.  Chief stands to gain very little, if anything, from these optimizations.  The raw throughput that a gaming card is capable of is likely to be superior in the context of Chief.  Both gaming cards and workstation cards implement OpenGL well.

     

    There may be differences in the expected lifetime of the GPU and the support provided by the vendor for the workstation level cards, but I have little knowledge in this area.

    We have reproduced the graphics-related crash in-house and believe we have it fixed.

     

    The fixed issue affects graphics cards that support a maximum OpenGL version of 3.0 or 3.1.  This includes Intel HD Graphics 2000/3000 (with any driver), Intel HD Graphics 4000 (with outdated drivers), as well as a variety of older ATI/NVIDIA cards.

  18. You shouldn't use hardware edge smoothing and software edge smoothing at the same time.  Software edge smoothing is capable of achieving better results than hardware edge smoothing but will dramatically decrease performance as it basically involves rendering each frame several times (between 2 and 15 times, depending on the level the preference is at).  You can therefore expect performance to drop by a factor of roughly between 2 and 15 (again depending on the preference).
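    The cost math is easy to see in a sketch (illustrative only; `render_pass` stands in for one full render of the frame):

```python
# Why software edge smoothing costs 2x-15x: it renders the same frame N
# times with sub-pixel jitter and averages the results.
import random

def render_pass(jitter):
    return 100.0 + jitter            # stand-in for one full render pass

def software_smoothed(passes):
    total = sum(render_pass(random.uniform(-0.5, 0.5)) for _ in range(passes))
    return total / passes            # cost grew by a factor of `passes`
```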

     

    Unless you're using a very low-end GPU that either doesn't support mid-range MSAA or supports it but suffers dramatic performance loss, I would recommend using only hardware edge smoothing for previews and bumping up software edge smoothing for final views.