TheKitchenAbode

Everything posted by TheKitchenAbode

  1. Thanks Chris, that's what happens on my main system and my Spectra 360.
  2. I understand that aspect. But the purpose of my post is performance related. It's just that I'm seeing things that seem to contradict common belief, including mine, in particular what appears to be the underutilization of my GPU. I posted a small test plan hoping others would provide some feedback, but so far none has been received. I have no problem if it is something unique to my setup; it would just be nice to know.
  3. I'm not 100% sure that the PDF is actually being imported into the plan. I just imported a 4-page PDF, saved my plan, closed it and then went to the original referenced PDF and changed its name. When I re-opened the plan, CA said the referenced file was missing and nothing showed up in my plan. Regardless, the concern is how rapidly CA's performance can degrade. I can do a similar PDF procedure in Microsoft OneNote and everything is fast and smooth; pan, zoom, scroll, resize, all actions are instant. Edit - Ignore the first statement, I forgot to click Save in the plan.
  4. From the testing, everything you add increases complexity, and some items, such as high resolution textures, contribute more to complexity than others. I'm certainly not saying that better hardware will not result in improved performance, just attempting to gain a sense of the degree of improvement and where that improvement might be realized. A simple example: my main system has a GTX 1060 6GB, and my Spectra has an Intel HD 620 integrated graphics chip. Technically the GTX 1060 is 10 times more powerful than the HD 620, yet when I run my test models the difference is very little; also, as my Spectra's CPU is half of what my main system's is, it's possible that some of the slightly slower video performance is related to it. On the other hand, when I did my last main system upgrade the primary difference between the builds was the CPU, with the same memory (8GB), same type of hard drive (std HD) and only a minor graphics card difference, yet in this case the performance boost was very noticeable. Possibly the below post was missed; it would be interesting if other users would run it and report their experience. It's not a trick, just an example of how other things can impact performance.
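The pattern described above (a GPU that is 10 times more powerful yielding almost no gain) is what you would expect if the workload is CPU-bound. A minimal sketch of that reasoning, using hypothetical millisecond figures that are not measurements from Chief Architect:

```python
def frame_time_ms(cpu_ms, gpu_ms):
    """Simplified sequential pipeline: the CPU prepares/rebuilds
    the model, then the GPU draws it. Total time is the sum."""
    return cpu_ms + gpu_ms

# Hypothetical CPU-heavy workload: 90 ms of CPU work, 10 ms of GPU work.
slow_gpu = frame_time_ms(90, 10)   # 100 ms total
fast_gpu = frame_time_ms(90, 1)    # a 10x faster GPU: still 91 ms total
print(slow_gpu, fast_gpu)
```

With those numbers, a tenfold GPU upgrade shaves only about 9% off the total, which is consistent with the small difference observed between the GTX 1060 and the HD 620.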
  5. I assume you are referring to the ability to select pages within a PDF doc prior to importing. If so, yes, this is a convenience. However, from what I can see, once the pages are loaded they are essentially dumb from that point on. It's not really any different than importing a very high resolution pic, which can also result in similar lag. It looks to me like the issue relates to what CA has to do to scale these as one zooms and pans. For example, the text components are not fonts anymore, so traditional efficient font scaling algorithms can't be used; every single pixel must be calculated and extrapolated, which could be very time consuming.
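The per-pixel cost hinted at above grows quickly, because the pixel count of a rasterized page scales with the square of the resolution. A back-of-the-envelope sketch (the page size and DPI values are illustrative, not anything CA actually uses):

```python
def raster_pixels(width_in, height_in, dpi):
    """Pixels required to rasterize a page at a given DPI."""
    return int(width_in * dpi) * int(height_in * dpi)

# A letter-size page (8.5 x 11 in) at a few hypothetical resolutions.
for dpi in (72, 150, 300):
    px = raster_pixels(8.5, 11, dpi)
    print(f"{dpi} dpi -> {px:,} pixels")
```

Doubling the effective resolution roughly quadruples the pixels to compute, which is why zooming into a rasterized page can feel so much heavier than scaling real font outlines.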
  6. Here is a very simple plan. Just 1 p-solid, 12 surfaces, default concrete; the file size is only 2,845 KB. Let me know if you can work fluidly with this in plan view. While doing this, have Task Manager open on the Performance tab and observe your system's activity. P Solid Stress.plan
  7. Rene - Just did the same thing on my desktop: disabled the 1060 and switched to the HD 530 chip. I get the same message. If I check the video card status in CA preferences it shows nothing; it's as if it does not recognize the chip at all. I have X11 running on my Spectra, which has an Intel 620 chip; if I open up preferences there, it is shown, and the library works OK. What I have noticed is that if I go to the Windows Control Panel on my Spectra there is an Intel Graphics Settings option, but if I go to the Control Panel on my desktop there is no Intel Graphics Settings option, just my Nvidia one. Something's not right.
  8. It's an interesting circumstance. Since the notification was from CA, it might be worth sending it in to tech support.
  9. Unfortunately, I can't control whether or not one fully reads what's posted. I also have no control over whether the reader has sufficient knowledge to interpret what is said within its proper context. This, however, should not detract from nor undermine the findings. I do not believe that anything in the post was stated as definitive; in fact, I made an effort to use terms such as indicated, appeared to be, seems to be, etc. I'm also fully open to the experiences of others and to critical peer review. The purpose here was to help identify potential bottlenecks that could impede one's ability to work fluidly in CA as model complexity increases. For those inclined, understanding this could prove useful as they grapple with how to resolve an issue in the most efficient manner.
  10. Seems strange, as I have a 360 Spectra that only has an Intel 530 integrated chip and have never had the issue you are experiencing. What I do know is that CA does not like it if the graphics are changed while CA is active. From what I understand, CA only sets up for the graphics upon initial program loading, so if you change the graphics while CA is active it will still assume the other graphics card is running. Maybe this is the issue, as with the Surface Book, the moment you un-dock it, it switches over to the integrated chip automatically since the 1060 is in the base.
  11. When you say unlocked from its 1060 card, do you mean you tried running CA on the integrated chip?
  12. My Layout investigation was certainly limited, so any conclusions should be taken in the context of what was explored. I have not done a lot of PDF investigation, but from my experience they can definitely be very problematic. I have no way to know what CA does when it imports a PDF, as far as how it interprets the PDF content for display in CA. PDFs are composed of PostScript, vector content, raster images and other stuff; I'm not sure whether, say, the PostScript is the root of the problem or one of the others. The only thing is, I'm not sure why CA does not just automatically convert the PDF to a JPEG, especially when the imported PDF ends up being just a dumb object.
  13. Absolutely, anything that involves disk access will benefit from a higher performance drive. I did mention that my drive is an NVMe M.2 and that library and plan opening would be affected by drive performance. At this stage of testing, in respect to drives, I was looking to see what role, if any, the drive was playing when processing primary CA manipulations such as changing object attributes, 3D building, auto building roofs and a few more. Under these circumstances there was no evidence of any significant drive activity. Agreed, there are many other functions and activities that would be worthy of evaluation; there is only so much time available to study this. I did test a number of camera view types: Standard, Vector, Glass and Technical Illustration. There were differences, especially when the camera view is initially opened, but this was mostly related to CPU processing of the model before it could be sent to the GPU. I believe your comment concerning the number of faces, assuming that means or includes surfaces, supports my observations, which indicated that surface count alone is not the sole determining factor in CA performance.
  14. Just my opinion, but based upon the results, way too much emphasis is being placed on the video card, at least when it concerns CA. PBR'ing is a bit different and does rely more heavily on the video card, but even so I see no indication that one needs anything other than a mid-grade card. Mine is a GTX 1060 and it seems able to handle some very complex scenes without any real issues. Now, if one is PBR'ing all day and working all the time in a PBR scene, then that would likely warrant a better video card; however, PBR'ing is not 100% video card dependent, as there is a lot of CPU processing needed before the scene is sent to the video card for final processing. So again, just going the video card upgrade route will only take you a certain distance.
  15. Yes, 3-5 seconds is frustrating, especially when it occurs on every change. In general, I consider 1 second to be the threshold. Based on, say, 4 seconds, to get that down to 1 second would theoretically require your new processor to be 4 times faster than the one you are replacing. We all know that's not likely feasible. When it comes to the GPU, it seems that we tend to view CA in the same category as a video game, and based upon a card's video game performance we extrapolate to CA's performance. This seems to be very misleading, and as such we don't derive the benefit we were expecting.
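The 4-seconds-to-1-second arithmetic above generalizes to a one-liner. A trivial sketch; the linear-scaling assumption is the same simplification the post makes, and real upgrades rarely scale this cleanly:

```python
def required_speedup(current_s, target_s):
    """Factor by which a processor would need to be faster to bring
    an operation from current_s down to target_s, assuming the
    operation scales linearly with CPU speed."""
    return current_s / target_s

# The post's example: a 4 s change against a 1 s comfort threshold.
print(required_speedup(4.0, 1.0))  # -> 4.0
```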
  16. I have added the following to the main article; hope this provides a bit more clarification concerning surfaces. Edit 4/29/2019 - To add some further clarification regarding surface count. What appears to be most impactful about surfaces is the type of object that they are associated with. Everything in the 3D model has a surface: walls, ceilings, framing, cabinets, furniture, etc. However, dependent upon the type of object, surface processing varies considerably. Indications are that surfaces associated with structural elements such as walls take considerably longer to process than, say, the surfaces on a symbol.
  17. It's not ALDO's fault; ALDO, besides other important benefits, just provides a means to control how little or how much CA has to process when building the model. If one was to point a finger, it would be at the 3D rebuild; this appears to be where most of the heavy lifting is done. What's unfortunate is that during this process, likely due to the mix of threading operations, the CPU is underutilized. From what I observed, CPU utilization averaged less than 20% for approximately 80% of the overall processing time. The reality may be that these processes can't be coded to be more efficient; it is what it is. At least we have ALDO to help out when lag due to plan complexity starts to creep in.
  18. Though not tested thoroughly, texture complexity did increase processing time. This includes bump maps and other texture effects such as reflections, lighting, shadows, etc.; they all add to the number of computations needed, and there's a cumulative effect, as would be expected. Concerning standard versus vector view camera performance, it appeared to depend upon whether things such as textures and patterns were turned on. The most important point, I believe, is that all of these processes appeared to be highly CPU dependent, not GPU dependent. This surprised me, as I was expecting the GPU to play a much more important role. What tends to bog down the software appears to be a bit more complicated, and I don't wish the results to undermine the impact that textures have, as they do have a significant impact on performance. The stress tests were not designed to establish the point at which CA transitions from efficient to cumbersome; they were designed in an attempt to identify the specific processes that were most impactful on performance and the role played by each hardware component.
  19. I have completed a significant portion of my stress testing of CA; here are some of the most significant observations.

Context
It is not currently feasible to test every available function or every possible plan configuration, so please keep this in mind when evaluating the comments. It is also not possible to definitively determine the exact CA process being performed at a given time; when comments attempt to do so, it is in the sense that CA appears to be working on something related to that. This is not intended to be the definitive word on CA performance, just what was tested, what was observed and an attempt to provide a reasonable explanation.

General Test Conditions
Two different plans were created and used during the evaluation. One plan had 1,200 basic houses, each with 4 walls, 4 windows and 1 door. The other plan had only a symbol that was replicated 5,000 times. This was done because some earlier testing indicated that CA processes basic building components differently than symbol-type objects, which are essentially surface dominant. The house plan file size was 41,140 KB and the symbol plan file size was 14,074 KB. The house plan contained approx. 450,000 surfaces and the symbol plan contained approx. 19,000,000 surfaces. The plans were purposely designed to slow down CA so as to reveal actions that would go unnoticed in much smaller plans.

General Testing
Each plan was run through a battery of manipulations such as plan, elevation and 3D camera generation, panning, zooming and rotating. Within these views a range of tools/functions were tested, such as changing an object's size, opening object DBX dialogs, replacing objects, undo/redos, auto building roofs and foundations, etc. A layout file was also created with live plan, elevation and camera views linked to the house plan; it was evaluated for live view updating and plan access times via the layout views. Times for these actions to complete were logged, and CPU, GPU, disk and memory usage were also noted. All tests were repeated numerous times to ensure result consistency. X11 was used for all testing.

Summarized Results
At this time, I will just summarize the most significant findings.

Stability
X11 was able to complete all actions and never once did it crash. This includes having to auto-build roof planes for 1,200 homes; it took almost 45 minutes, but it did do it.

Memory Usage
At no time did CA exceed my 16GB of system memory.

GPU Dedicated Memory
At no time did CA exceed my 8GB of dedicated GPU memory, regardless of the number of tabbed windows. In fact, dedicated GPU memory usage was extremely low, typically under 2GB. The only exception to this is when PBR'ing; complex scenes with extensive lighting could consume up to 4GB per active view.

GPU Performance
At no time did my GPU usage max out at 100%. In fact, in just about every action GPU usage was extremely low, in most cases under 10% utilization. Note, very high-resolution monitors will likely be more demanding.

Disk Performance
I found no evidence that disk access times played any significant role in CA's performance other than initial plan loading and library access. Keep in mind that my drive is an NVMe M.2, so it's fast to begin with.

CPU Performance
All indicators point to CA being highly dependent upon the CPU. It became highly evident that what one might consider to be a GPU process was in fact being primarily processed by the CPU.

CPU Core Count
CA's processing involves a mix of single, lightly threaded and some highly threaded operations. This mixed bag can make it somewhat difficult to definitively define the full impact of very high core count CPUs. There were, however, strong indicators that some specific operations would benefit. Texture handling, which is also related to surface count, appeared to be highly threaded and would derive a benefit from higher core count CPUs. On the other hand, operations related to model manipulation, such as changing roofs, changing walls, moving objects and copy/paste, are mostly single or only lightly threaded, and as such a high core count CPU is not likely to provide much benefit there.

CPU Frequency
In general, the faster the CPU frequency, the faster the CPU processes operations. It is worth keeping in mind that with today's processors the actual CPU throughput is not determined solely by base frequency; core count is also a factor, which comes into play based upon how threaded an operational process is. For single threaded processes base clock frequency is most important, while for highly threaded operations core count becomes the more important factor. Obviously, the best is the highest base clock frequency and the most cores, providing you can afford it. As mentioned above, CA is a mix of differing threading levels, which appears to be tied to very specific CA functions. If you are ray tracing or working extensively with textures and high surface counts, then sacrificing some base clock frequency to obtain more cores is likely worth it. If your work is primarily structure building related, then base clock frequency is likely more important than an excessive number of cores.

What's Going on in CA
Throughout the specific tests and manipulations, it became highly evident that there is one particular process CA must do that overwhelmingly determines its performance. (There are some other issues/anomalies worth noting, but I will address those in future postings.) Other than panning, zooming and rotating, CA must, for display purposes, build/update the 3D model for each change made; if it does not do this, you won't see the change. On low complexity models you will not notice this; however, on high complexity models you may have noticed the pop-up "BUILD 3D MODEL" being displayed. This is where CA spends most of its processing time, and it is a CPU based operation. You have no direct control over its execution; in other words, there is no single on/off toggle like we have for building walls, roofs, floors and framing. Also, though the 3D rebuild is for display purposes, it is as mentioned a CPU computational process, and I would estimate that easily more than 90% of the time spent processing something is spent on this 3D rebuild. A rebuild is done for every change, though the degree of rebuilding appeared to vary according to the type of item being changed: major structure required a full rebuild, while changes to structure-related items such as doors and windows required only a partial rebuild and ran faster. From everything seen, this 3D rebuild is the core source of lag as model complexity increases.

Before providing a method to overcome this, it is worth some time to discuss model complexity. The most common way has been to quantify it in terms of the total number of surfaces; however, the stress testing done does not support this to the extent one may believe. For example, the model with 19 million surfaces ran significantly faster in every test than the house model that contained only about 450,000 surfaces. This suggests that surface count on its own does not fully answer the lag/slowness question. To gain further perspective on this, consideration must be given to how CA processes surfaces versus structure elements. Surface processing appears to be highly threaded, while structure processing seems to be lightly threaded. Testing appears to indicate that during the 3D rebuild structure is done first and then surfaces; in respect to total time, structure processing seems to account for about 90% and surface processing 10%. So ultimately the time something is going to take is related to the mix of elements and, of those elements, how many are lightly threaded versus heavily threaded.

Edit 4/29/2019 - To add some further clarification regarding surface count. What appears to be most impactful about surfaces is the type of object that they are associated with. Everything in the 3D model has a surface: walls, ceilings, framing, cabinets, furniture, etc. However, dependent upon the type of object, surface processing varies considerably. Indications are that surfaces associated with structural elements such as walls take considerably longer to process than, say, the surfaces on a symbol.

What can we do?
There is one key function in CA that ultimately determines what must be taken into consideration when it processes the 3D rebuild: the "Active Layer Display Options". All elements assigned to a display layer that is checked are taken into account; elements in unchecked display layers are essentially ignored. Therefore, through the Active Layer Display Options you can, by turning off unnecessary layers, give CA the ability to easily handle extremely complex plans.

Undo/Redo
No evidence was found that directly connects file size to undo/redo times. What was indicated is that when an undo/redo is performed, CA, as with any other change, has to perform a 3D rebuild, and it is this that predominantly dictates the undo/redo time.

Layout Performance
The house plan view, 1 elevation and 4 differing camera views were set up in a layout file. All were live views but were set to update on demand. I could not find any lag that was specific to the layout when accessing any of the views through it. There was lag, but this appeared to relate back to the plan; regardless of how a view is accessed, it still must be built for display purposes. This also applies when initiating a live update: the plan's associated camera view needs to be generated for the layout view to be updated. This is done behind the scenes, but it still needs to be done. No other testing was done, so there may be other layout functions that contribute to overall performance.

Other Issues
As mentioned, during testing there were indications that some processes appear to occur when one would not expect them to. These will be discussed in a separate post.
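The frequency-versus-cores tradeoff described above can be sketched with Amdahl's law. This is a generic textbook model, not something measured from Chief Architect; the 10% parallel fraction is borrowed from the post's rough 90/10 structure/surface split, and the function name is mine:

```python
def amdahl_speedup(parallel_fraction, cores):
    """Amdahl's law: overall speedup when a fraction of the work
    is parallel and the remainder is strictly serial."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# Treat the ~10% highly threaded surface processing as parallel and
# the ~90% lightly threaded structure processing as serial.
for cores in (2, 4, 8, 16):
    s = amdahl_speedup(0.10, cores)
    print(f"{cores:2d} cores -> {s:.2f}x overall speedup")
```

With only 10% of the work parallel, even 16 cores yield barely a 1.1x overall gain, which matches the observation that structure-heavy plans benefit more from base clock frequency than from core count.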
  20. Most likely a good habit to adopt; CA should really consider an auto update option and save us the time of manually updating the libraries. Personally, though I like lots of materials, how many hardwoods do we need? I suspect between the core catalogs and the manufacturer catalogs we already have 200 or more choices. Just to complain a bit more, many of them look almost identical anyway. As I have been doing more stress testing, it is becoming very evident that they really should put those people to work on some other, more important fixes and improvements.
  21. Select Library, top menu bar. There is an option to update the catalogs.
  22. We are all still searching through the reference manual looking for the answer.
  23. I believe, in general, that the forum should be viewed as an extension to, and not a substitute for, the official CA reference, help and knowledge base. Posters should also be respectful of the fact that forum members volunteer their time and are willing to share their expertise and skills; they are potentially helping you make money, all for free. Given this, it's not really much to ask that the poster make some effort to find the answer to their question in the CA documentation first, provide some pertinent information such as the CA version they are using and some system specs, and, when necessary, post the problematic plan. Also, the addition of a "please" and "thank you" would likely go a long way when soliciting help.
  24. Yes, I have found that when initially drawing the 3D molding polyline it is critical to decide the best view to start with, plan or a specific elevation. What seems to work best is to draw the first line in the same view that most of the other lines are going to be drawn/manipulated in. What I have also found interesting is that when trying to insert, say, a break to change direction, it is usually best to use the same view type as the one used when drawing the first line; you can then flip back to another view type to manipulate it if need be.
  25. From my experience there is no simple way to do this, but it can be done. This one was done almost entirely in Chief, with a few corrections in Photoshop. I took a Google Street View image and applied it to a very large P-solid on the other side of the road; by changing the size of the P-solid and the texture (pic) positioning I could get things looking reasonable. This one was done entirely in Photoshop: I rendered the model in Chief, placed it in Photoshop and then, on a separate layer, placed the background pic. It's tricky to get things to line up properly, as there is a limit to how much one can stretch and warp a pic before it starts to look distorted. To get it right would take a real Photoshop pro.