Ryan-M

Chief Architect

Everything posted by Ryan-M

  1. This is cool, thanks for sharing. Adding more variety to the grass tool (and expanding it to other types of similar foliage) is something we've talked about a lot, and these are good examples of what it could do.
  2. It causes the view to stop rendering when it reaches the specified number of Max Export Samples. In X15 the view will denoise at this point. If it's left unchecked, the view will continue to render indefinitely.
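     As a rough illustration of that behavior, here's a minimal sketch (the helper names are hypothetical stand-ins, not Chief's actual code):

     ```cpp
     #include <iostream>

     // Hypothetical stand-ins for the renderer's internals.
     void traceOneSamplePerPixel() { /* accumulate one more sample per pixel */ }
     void denoise() { std::cout << "denoising final image\n"; }

     struct RenderSettings {
         bool stopAtMaxSamples = true;  // the "Max Export Samples" checkbox
         int  maxSamples = 500;         // the sample count entered in the dialog
     };

     void progressiveRender(const RenderSettings& s) {
         // With the box unchecked this loop never exits; the view keeps refining.
         for (int samples = 0; !s.stopAtMaxSamples || samples < s.maxSamples; ++samples)
             traceOneSamplePerPixel();
         denoise();  // in X15, the view denoises once the cap is reached
     }
     ```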
  3. So, this is basically a bug. The way that we apply lighting to surfaces viewed through a glass (or any refractive) material is different than the way we apply lighting to surfaces that are directly visible. In X13 - X15 there is a fixed limit on the number of lights that we apply to these kinds of surfaces, which means that some of those lights inside of the cabinet may not actually produce light on the surfaces visible through the cabinet glass. I can certainly understand that this is confusing and not the ideal behavior. This is unlikely to be a problem in newer versions of Chief.
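     In pseudocode terms, the limitation looks something like this (the cap value and all names here are made up for illustration, not Chief's actual code):

     ```cpp
     #include <algorithm>
     #include <cstddef>
     #include <vector>

     struct Color {
         float r = 0, g = 0, b = 0;
         Color& operator+=(const Color& c) { r += c.r; g += c.g; b += c.b; return *this; }
     };
     struct Light { Color emitted; };

     Color contribution(const Light& l) { return l.emitted; }  // stand-in for real shading

     constexpr std::size_t kMaxLightsThroughGlass = 8;  // illustrative cap, not the real limit

     Color shade(const std::vector<Light>& lights, bool seenThroughRefraction) {
         // Directly visible surfaces consider every light; surfaces seen through
         // a refractive material only get the first N, so fixtures past the cap
         // contribute nothing through the cabinet glass.
         const std::size_t count = seenThroughRefraction
             ? std::min(lights.size(), kMaxLightsThroughGlass)
             : lights.size();
         Color out;
         for (std::size_t i = 0; i < count; ++i)
             out += contribution(lights[i]);
         return out;
     }
     ```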
  4. My best guess based on what you've described is that your machine is overheating. Chief really shouldn't be able to generate a BSOD even if it wanted to. It's unlikely we'll be able to diagnose a problem with Chief here.
  5. If you're talking about CPU ray tracing, then reference plan geometry won't display. If you're talking about ray traced PBR, then it should display. To get it to show up in a CPU ray trace, you could export the reference plan as a symbol (e.g. via 3DS export) and import it into the plan you want to ray trace from. This assumes you want your reference geometry to show up with materials. If you want it rendered with a different technique (like Glass House), then I'm not sure you can do that with CPU ray tracing.
  6. This is in regards to CPU Ray Tracing. It doesn't sound to me like that is what the original poster is trying to do, but they can correct me if I'm wrong. I will update my post, though, as you're right that there is a valid reason in X14 to use Rosetta (though I still wouldn't recommend it unless you're doing CPU Ray Traces). Thanks for pointing this out. This doesn't apply to X15, where we have fixed the problem that this was working around in X14.
  7. It should not be necessary to use Rosetta for X15, and I would encourage you to contact tech support before trying to solve this in that way, as running with Rosetta will negatively impact performance. There was an issue with the CPU Ray Tracer executable in X14, but I believe that has been fixed in X15, and it wouldn't have anything to do with opening objects or creating camera views. We have an abundance of M1 laptops and a few M2 laptops in-house that we don't have issues with, so I'm not sure what to suggest at this point except that contacting tech support is your best bet. Edit: As @VHamptom has pointed out, you may need to use Rosetta in X14 if you're using CPU Ray Tracing. This is fixed in X15.
  8. As of X13 we do use Metal. We also run natively on ARM as of X14. Ray tracing is independent of both of these things.

     Mac ray tracing is something we regularly evaluate. Here is why it hasn't happened yet: when it comes to GPU code for non-ray tracing functionality, we're able to author it such that it "just works" on both platforms. This doesn't apply to ray tracing. Without going into too much detail, this is a considerable technical problem that doesn't have a great solution right now.

     There is no hardware acceleration for ray tracing on any existing Apple hardware. "Hardware acceleration" means dedicated hardware for tracing rays, e.g. the kind of hardware present in NVIDIA RTX and AMD RX 6xxx GPUs. This doesn't mean it's impossible to perform ray tracing on Apple hardware, but it does mean it will do so much more slowly than hardware that has it.

     The two major takeaways here are:

     • Right now, the performance we would be able to get on this hardware makes it very difficult to justify the implementation and maintenance cost (which is very high).
     • Even if the hardware becomes available tomorrow, we wouldn't be able to flip a switch and make it work on the Mac. There's substantial effort involved to get Chief to the point that it can leverage said hardware, and this would involve the planning and budgeting of engineering time well in advance.

     Here is an article that compares M1 GPU ray tracing performance against various CPUs and GPUs. The short of it is that the M1 is between 30x and 40x slower than an RTX 2070 (a mid-range first-generation NVIDIA card). Granted, this is the base M1 model. If we assume the M1 Max is in fact 4x faster than an M1, as Apple seems to claim, then it's still upwards of 10x slower (at tracing rays) than a mid-range PC GPU from 2018 (see the quick arithmetic sketch at the end of this post). Modern ray tracing-capable cards have improved in ray tracing performance comparably to the M1 Max's improvement over the M1, so we're still looking at a 30-40x performance difference between a modern Mac and a modern PC.

     This is a topic that we discuss internally on a routine basis. It's a complex business decision, not something we're withholding for arbitrary reasons. The equation changes over time, and we'll continue to evaluate where we're at as new technology becomes available and our PBR implementation evolves.
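     To make that arithmetic concrete, here is a quick back-of-the-envelope calculation (the throughput figures are the rough numbers cited above, not measurements):

     ```cpp
     #include <cstdio>

     int main() {
         // Base M1 vs RTX 2070 ray-tracing throughput, per the linked article.
         const double slowdownLow = 30.0, slowdownHigh = 40.0;
         // Apple's claimed GPU scaling for the M1 Max over the base M1.
         const double maxSpeedup = 4.0;
         std::printf("M1 Max vs RTX 2070: ~%.1fx to ~%.1fx slower at tracing rays\n",
                     slowdownLow / maxSpeedup, slowdownHigh / maxSpeedup);  // ~7.5x to ~10.0x
         return 0;
     }
     ```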
  9. In the Adjust Lights dialog, switch to using a Light Set and you can explicitly control which lights are on. When using Automatic lighting, Chief will only turn on lights on the floor the camera is on, starting with the room the camera was in when the last rebuild or refresh occurred.
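     Conceptually, the automatic behavior works something like this (a sketch; the names and the explicit light budget are my own illustration, not Chief's actual logic):

     ```cpp
     #include <algorithm>
     #include <cstddef>
     #include <vector>

     struct Light { int floor = 0; int roomId = 0; bool on = false; };

     // With Automatic lighting, only lights on the camera's floor are candidates,
     // and the camera's room is filled first when a limit applies.
     void applyAutomaticLighting(std::vector<Light>& lights,
                                 int cameraFloor, int cameraRoom, std::size_t budget) {
         std::vector<Light*> candidates;
         for (Light& l : lights) {
             l.on = false;                                  // lights on other floors stay off
             if (l.floor == cameraFloor) candidates.push_back(&l);
         }
         // Put the camera's room at the front so it gets priority.
         std::stable_partition(candidates.begin(), candidates.end(),
                               [&](const Light* l) { return l->roomId == cameraRoom; });
         for (std::size_t i = 0; i < candidates.size() && i < budget; ++i)
             candidates[i]->on = true;
     }
     ```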
  10. If you turn shadows on, the sun will remain on and the change will be less drastic. There will still be some change because we place a default light in rooms that don't have any lights in them, and that will turn on when your camera is determined to be inside that room.
  11. When using automatic lighting, Chief will re-evaluate which lights are turned on each time the model is rebuilt. Chief will also turn off the sunlight when shadows are turned off and your camera is inside a room. In this case the "room" is outside, and it would be reasonable to make the argument that we shouldn't turn off the sun in this case.
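     The sun rule described in the last two posts boils down to something like this (a sketch, not Chief's code):

     ```cpp
     // With shadows off, the sun is switched off whenever the camera is
     // considered to be inside a room (which can include "outside" rooms).
     bool sunIsOn(bool shadowsEnabled, bool cameraInsideRoom) {
         return shadowsEnabled || !cameraInsideRoom;
     }
     ```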
  12. I can't provide a roadmap as that is not my role, but I can weigh in at least on the GPU-oriented parts of the discussion, as that is my role at Chief.

     I agree with @ghitchens's statement that "integrated" vs. "discrete" is no longer an adequate way to describe these GPUs, particularly in the context of the newly announced hardware. While it may be literally true (depending on how we define integrated, I guess), the integrated vs. discrete distinction connotes certain performance characteristics that no longer seem to apply in the case of modern Apple hardware.

     Personally, I am excited to see Apple drastically improving their graphics performance. Better hardware translates directly to a better experience for our users, not to mention us as developers. However, it's important to recognize that "performance" is not a monolithic quantity. The most frequent question we get from users regarding graphics and rendering on the Mac is when ray tracing will work and why it doesn't right now. Bear in mind that the frequency with which we hear this question is likely a factor in how Scott responded above.

     In looking at the graphs Apple presented during their announcement, it would appear that M1 Pro/Max GPUs are on par with fairly recent discrete GPUs, but the graphs don't provide a lot of information as far as what was being benchmarked. M1 GPUs do not provide hardware support for ray tracing. Their rasterization and compute performance appears to be on par with good discrete cards, which is fantastic, but the base M1 is 30-40x slower than entry-level RTX cards when it comes to ray tracing throughput, and the Pro/Max improvements are unlikely to bridge that gap. DirectX combined with ray tracing hardware has given us the tools that make it comparatively simple to support real-time ray tracing; Metal and Mac hardware have not yet done so. We will, of course, continue to evaluate whether or not we're able to satisfy our performance requirements on new hardware as it becomes available (including the M1 Pro/Max).

     Regarding compiling Chief for native arm64, @ghitchens's suspicion is accurate. Chief leverages a large number of libraries that we need to be able to compile for arm64; Qt is one of these, as they have pointed out, but it is far from the only one. Some apps have likely been able to flip a switch in Xcode and be in good shape to run natively on Apple Silicon, but this is very much not the case for Chief. That said, it's certainly something we are aware of and it is being actively evaluated.

     As far as the overall question in this post, Chief does (to the best of my knowledge) work on M1, and I expect that the experience will only improve as the hardware gets better, as our support for the hardware gets better, and, from a graphics perspective, as we are able to iterate on our Metal implementation that was only introduced this version.
  13. Thank you for bringing this to our attention, Dan. We will look into it.
  14. This is not correct. The version of OpenGL that your computer is capable of running is dictated entirely by your video card and its drivers.
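     If you're curious, you can see exactly what the driver reports once an OpenGL context is current, using the standard query calls (a minimal sketch):

     ```cpp
     #include <cstdio>
     #include <GL/gl.h>  // platform GL headers; an OpenGL context must already be current

     void printGLInfo() {
         // The video card's driver, not the OS or the application, decides these values.
         const char* version  = reinterpret_cast<const char*>(glGetString(GL_VERSION));
         const char* renderer = reinterpret_cast<const char*>(glGetString(GL_RENDERER));
         std::printf("OpenGL %s on %s\n", version, renderer);  // e.g. "4.6.0 ..." and the card name
     }
     ```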
  15. To clarify, Chief does not take advantage of workstation (Quadro) graphics cards. It is likely that you will see significantly worse performance in Chief going from a GTX 980 to an older (or even a modern) Quadro card. Also note that your graphics card has absolutely no impact on ray tracing in Chief; upgrades to your graphics card will only affect render views.
  16. What Plugable USB 3.0 station are you referring to? Is it something like this? http://plugable.com/products/ud-3900/ These kinds of docking stations that utilize USB graphics do not generally support high-performance graphical applications very well (as is noted in the Gaming section in the link). The core of our rendering technology changed significantly in X9, which is likely why you see a difference in behavior from X8.
  17. This is a bug only in how Chief displays the available amount of video memory. It will not impact how much video memory is actually utilized by Chief.
  18. Roy, I believe that this is a bug we identified after the public beta was released. We are looking into it for the official release.
  19. We are aware of issues affecting a small subset of Mac hardware. Can you tell me what graphics card your Mac has? You can find this information in Preferences -> Render -> Video Card Status.
  20. Chief is not optimized in any way for workstation cards, and we don't put any additional emphasis on developing or testing for them relative to what we do for gaming cards. Generally speaking, the hardware between workstation cards and gaming cards is very similar, but the drivers are optimized for very specific operations, often explicitly for certain applications. Chief stands to gain very little, if anything, from these optimizations, and the raw throughput that a gaming card is capable of is likely to be superior in the context of Chief. Both gaming cards and workstation cards implement OpenGL well. There may be differences in the expected lifetime of the GPU and the support provided by the vendor for workstation-level cards, but I have little knowledge in this area.
  21. We have addressed several bugs related to printing live layout views for the next update. Thank you to those who have been patient and have reported issues.
  22. Alan, we have been working to improve the quality of line weights on printed live views and expect it to be better with the next update.
  23. We have reproduced the graphics related crash in house and believe we have it fixed. The fixed issue affects graphics cards that support a maximum OpenGL version of 3.0 or 3.1. This includes Intel HD Graphics 2000/3000 (with any driver), Intel HD Graphics 4000 (with outdated drivers), as well as a variety of older ATI/NVIDIA cards.
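     For reference, with a current context the maximum supported version can be queried directly; GL_MAJOR_VERSION/GL_MINOR_VERSION are available on 3.0+ drivers, which covers the affected cards. This is just a sketch of such a check, not how Chief detects it:

     ```cpp
     #include <GL/gl.h>  // plus extension headers on platforms where GL_MAJOR_VERSION isn't defined

     // True for drivers whose maximum OpenGL version is 3.0 or 3.1, the range
     // affected by the crash described above. Requires a current GL context.
     bool maxVersionIs30or31() {
         GLint major = 0, minor = 0;
         glGetIntegerv(GL_MAJOR_VERSION, &major);
         glGetIntegerv(GL_MINOR_VERSION, &minor);
         return major == 3 && (minor == 0 || minor == 1);
     }
     ```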
  24. I would guess that you have an extremely high density pattern somewhere, in which case your GPU is not actually the bottleneck. Is taking a standard overview significantly faster? It's very unlikely that upgrading your GPU would improve performance here.
  25. You shouldn't use hardware edge smoothing and software edge smoothing at the same time. Software edge smoothing is capable of achieving better results than hardware edge smoothing, but it will dramatically decrease performance, as it basically involves rendering each frame several times (between 2 and 15 times, depending on the level the preference is set to). You can therefore expect performance to drop by a factor of roughly 2 to 15 (again, depending on the preference). Unless you're using a very low-end GPU that either doesn't support mid-range MSAA or supports it but suffers a dramatic performance loss, I would recommend using only hardware edge smoothing for previews and bumping up software edge smoothing for final views.
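     To illustrate why the cost scales that way, here is a sketch of accumulation-style smoothing (everything here is a hypothetical illustration; the real sample pattern and pipeline will differ):

     ```cpp
     struct Frame { float pixels; };  // stand-in for a full pixel buffer

     // Hypothetical single render pass with a sub-pixel camera jitter.
     Frame renderWithJitter(float dx, float dy) { return Frame{dx + dy}; }

     void accumulate(Frame& dst, const Frame& src, float weight) {
         dst.pixels += weight * src.pixels;
     }

     // Rendering N jittered passes and averaging them smooths edges, but each
     // frame takes roughly N times as long: hence the ~2x-15x slowdown.
     Frame softwareEdgeSmoothing(int passes /* 2..15, from the preference */) {
         Frame result{};
         for (int i = 0; i < passes; ++i) {
             const float dx = (i % 4) * 0.25f - 0.375f;      // simple jitter grid
             const float dy = ((i / 4) % 4) * 0.25f - 0.375f;
             accumulate(result, renderWithJitter(dx, dy), 1.0f / passes);
         }
         return result;
     }
     ```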