TheKitchenAbode Posted April 28, 2019

I have completed a significant portion of my stress testing of CA; here are some of the most significant observations.

Context

It is not currently feasible to test every available function or every possible plan configuration, so please keep this in mind when evaluating the comments. It is also not possible to definitively determine the exact CA process being performed at a given time; when comments attempt to do so, it is in the sense that CA appears to be working on something related to that. This is not intended to be the definitive word on CA performance, just what was tested, what was observed and an attempt to provide a reasonable explanation.

General Test Conditions

Two different plans were created and used during the evaluation. One plan had 1,200 basic houses; each had 4 walls, 4 windows and 1 door. The other plan had only a symbol that was replicated 5,000 times. This was done because some earlier testing indicated that CA processes basic building components differently than symbol-type objects, which are essentially surface dominant. The house plan file size was 41,140 KB and the symbol plan file size was 14,074 KB. The house plan contained approx. 450,000 surfaces and the symbol plan contained approx. 19,000,000 surfaces. The plans were purposely designed to slow down CA so as to reveal actions that would go unnoticed in much smaller plans.

General Testing

Each plan was run through a battery of manipulations such as plan, elevation and 3D camera generation, panning, zooming and rotating. Within these views a range of tools/functions were tested, such as changing an object's size, opening object DBX dialogs, replacing objects, undo/redos, auto-building roofs and foundations, etc. A layout file was also created with live plan, elevation and camera views linked to the house plan; it was evaluated for live view updating and plan access times via the layout views. Times for these actions to complete were logged, and CPU, GPU, disk and memory usages were also noted. All tests were repeated numerous times to ensure result consistency. X11 was used for all testing.

Summarized Results

At this time, I will just summarize the most significant findings.

Stability - X11 was able to complete all actions and never once did it crash. This includes having to auto-build roof planes for 1,200 homes; it took almost 45 minutes, but it did it.

Memory Usage - At no time did CA exceed my 16GB of system memory.

GPU Dedicated Memory - At no time did CA exceed my 8GB of dedicated GPU memory, regardless of the number of tabbed windows. In fact, dedicated GPU memory usage was extremely low, typically under 2GB. The only exception was when PBR'ing: complex scenes with extensive lighting could consume up to 4GB per active view.

GPU Performance - At no time did my GPU usage max out at 100%. In fact, in just about every action GPU usage was extremely low, in most cases under 10% utilization. Note, very high-resolution monitors will likely be more demanding.

Disk Performance - I found no evidence that disk access times played any significant role in CA's performance other than in initial plan loading and library access. Keep in mind that my drive is an NVMe M.2, so it's fast to begin with.

CPU Performance - All indicators point to CA being highly dependent upon the CPU. It became highly evident that what one might consider to be a GPU process was in fact being primarily processed by the CPU.
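For those who want to log the same counters themselves, the sketch below shows one way to do it in Python using the psutil and pynvml libraries. This is just an illustrative harness, not the exact tooling used for these tests; the sampling interval and CSV layout are arbitrary choices, and pynvml assumes an NVIDIA card.

```python
# Rough resource logger: sample CPU, RAM and GPU usage once a second
# while driving Chief Architect by hand, then inspect the log afterward.
# Requires: pip install psutil nvidia-ml-py   (pynvml is NVIDIA-only)
import csv
import time

import psutil
from pynvml import (nvmlInit, nvmlShutdown, nvmlDeviceGetHandleByIndex,
                    nvmlDeviceGetUtilizationRates, nvmlDeviceGetMemoryInfo)

def log_usage(outfile="ca_stress_log.csv", seconds=300):
    nvmlInit()
    gpu = nvmlDeviceGetHandleByIndex(0)            # first GPU in the system
    with open(outfile, "w", newline="") as f:
        w = csv.writer(f)
        w.writerow(["t_s", "cpu_pct", "ram_gb", "gpu_pct", "vram_gb"])
        start = time.time()
        while time.time() - start < seconds:
            cpu = psutil.cpu_percent(interval=1.0)  # average over last second
            ram = psutil.virtual_memory().used / 2**30
            util = nvmlDeviceGetUtilizationRates(gpu)
            vram = nvmlDeviceGetMemoryInfo(gpu).used / 2**30
            w.writerow([round(time.time() - start, 1), cpu,
                        round(ram, 2), util.gpu, round(vram, 2)])
    nvmlShutdown()

if __name__ == "__main__":
    log_usage()   # start this, then perform the CA action being timed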
CPU Core Count

CA's processing involves a mix of single-threaded, lightly threaded and some highly threaded operations. This mixed bag can make it somewhat difficult to definitively define the full impact of a very high core count CPU. There were, however, strong indicators that some specific operations would benefit. Texture handling, which is also related to surface count, appeared to be highly threaded and would derive a benefit from higher-core-count CPUs. On the other hand, operations related to model manipulation, such as changing roofs or walls, moving objects and copy/paste, are mostly single- or only lightly threaded, and as such a high core count CPU is not likely to provide much benefit.

CPU Frequency

In general, the faster the CPU frequency, the faster the CPU processes operations. It is, however, worth keeping in mind that with today's processors the actual CPU throughput is not determined solely by base frequency; core count is also a factor, which comes into play based upon how threaded an operational process is. For single-threaded processes base clock frequency is most important, while for highly threaded operations core count becomes the more important factor. Obviously, the best is the highest base clock frequency and the most cores, providing you can afford it. As mentioned above, CA is a mix of differing threading levels which appear to be tied to very specific CA functions. If you are ray tracing or working extensively with textures and high surface counts, then sacrificing some base clock frequency to obtain more cores is likely worth it. If your work is primarily structure-building related, then base clock frequency is likely more important than an excessive number of cores.

What's Going on in CA

Throughout the specific tests and manipulations, it became highly evident that there is one particular process CA must perform that overwhelmingly determines its performance. Just worth noting that there are some other issues/anomalies, but I will address those in future postings. Other than panning, zooming and rotating, CA must, for display purposes, build/update the 3D model for each change made; if it did not do this, you would not see the change. On low-complexity models you will not notice this, but in high-complexity models you may have noticed the pop-up "BUILD 3D MODEL" being displayed. This is where CA spends most of its processing time, and it is a CPU-based operation. You have no direct control over its execution; in other words, there is no single on/off toggle like we have for building walls, roofs, floors and framing. Also, though the 3D rebuild is for display purposes, it is, as mentioned, a CPU computational process, and I would estimate that easily more than 90% of the time spent processing something is spent on this 3D rebuild. It is worth noting that a rebuild is done for every change, but the degree of rebuilding appeared to vary with the type of item being changed: major structure required a full rebuild, while changes to structure-related items such as doors and windows required only a partial rebuild and ran faster. From everything seen, this 3D rebuild is the core source of lag as model complexity increases.
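To put some rough numbers behind the frequency-versus-cores trade-off described above, Amdahl's law gives a useful back-of-envelope estimate. The parallel fractions below are illustrative guesses, not measured values from CA:

```python
# Amdahl's-law sketch: expected speedup from core count for a workload
# that is only partly parallel. The fractions p are invented examples.

def speedup(parallel_fraction: float, cores: int) -> float:
    """Amdahl's law: 1 / ((1 - p) + p / n)."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# A mostly serial "structure" operation vs a highly threaded "texture" one.
for label, p in [("structure-like, p=0.2", 0.2), ("texture-like, p=0.9", 0.9)]:
    for cores in (4, 8, 16):
        print(f"{label}: {cores} cores -> {speedup(p, cores):.2f}x")

# The structure-like case tops out below 1.25x no matter how many cores
# you add, while the texture-like case reaches ~6.4x at 16 cores. That
# is the arithmetic behind "clocks for structure, cores for textures".
```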
Before providing a method to overcome this, it is worth taking some time to discuss model complexity. The most common way to quantify it has been the total number of surfaces; however, the stress testing done does not support this to the extent one may believe. For example, the model with 19 million surfaces ran significantly faster in every test than the house model that contained only approx. 450,000 surfaces. This suggests that surface count on its own does not fully answer the lag/slowness question. To gain further perspective, consideration must be given to how CA processes surfaces versus structure elements. Surface processing appears to be highly threaded, while structure processing seems to be lightly threaded. Testing appears to indicate that during the 3D rebuild structure is done first and surfaces next; in respect to total time, structure processing seems to account for about 90% and surface processing about 10% (a toy model at the end of this post illustrates this mix). So ultimately, the duration something takes is related to the mix of elements and, of those elements, how many are lightly threaded versus heavily threaded.

Edit 4/29/2019 - To add some further clarification regarding surface count. What appears to be most impactful about surfaces is the type of object they are associated with. Everything in the 3D model has surfaces - walls, ceilings, framing, cabinets, furniture, etc. - however, surface processing varies considerably depending on the type of object. Indications are that surfaces associated with structural elements such as walls take considerably longer to process than, say, the surfaces on a symbol.

What can we do?

There is one key function in CA that ultimately determines what it must take into consideration when processing the 3D rebuild. This is the "Active Layer Display Options". All elements assigned to a checked display layer are taken into account; elements in unchecked display layers are essentially ignored. Therefore, by turning off unnecessary layers through the Active Layer Display Options, you can give CA the ability to easily handle extremely complex plans.

Undo/Redo

No evidence was found that directly connects file size to undo/redo times. What was indicated is that when an undo/redo is performed, CA, as with any other change, has to perform a 3D rebuild, and it is this that predominantly dictates the time to undo/redo.

Layout Performance

The house plan view, one elevation and four differing camera views were set up in a layout file. All were live views but were set to update on demand. I could not find any lag specific to the layout when accessing any of the views through it. There was lag, but this appeared to relate back to the plan: regardless of how a view is accessed, it still must be built for display purposes. This is also applicable when initiating a live update; the plan's associated camera view needs to be generated for the layout view to be updated. This is done behind the scenes, but it still needs to be done. No other testing was done, so there may be other layout functions that contribute to overall performance.

Other Issues

As mentioned, during testing there were indications that some processes appear to occur when one would not expect them to. These will be discussed in a separate post.
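Finally, here is the toy model promised above, making the structure-versus-surface split concrete. Every constant in it (per-element costs, thread scaling) is invented purely for illustration; only the shape of the argument - structure work lightly threaded and expensive, surface work highly threaded and cheap - comes from the testing:

```python
# Toy model of 3D rebuild time: lightly threaded structure work plus
# highly threaded surface work. All constants are hypothetical; the
# point is the shape of the result, not the numbers.

CORES = 8
STRUCT_COST = 2e-3      # seconds per structural element (invented)
SURF_COST = 4e-7        # seconds per surface (invented)
STRUCT_THREADS = 1.5    # structure barely scales past one core
SURF_THREADS = CORES    # surface pass scales with the CPU

def rebuild_time(struct_elems: int, surfaces: int) -> float:
    return (struct_elems * STRUCT_COST / STRUCT_THREADS
            + surfaces * SURF_COST / SURF_THREADS)

# House plan: lots of structure (1,200 houses x 9 components each),
# modest surface count. Symbol plan: little structure, huge surface count.
house = rebuild_time(struct_elems=1200 * 9, surfaces=450_000)
symbols = rebuild_time(struct_elems=5_000, surfaces=19_000_000)
print(f"house-like plan:  {house:.1f} s")    # structure term dominates
print(f"symbol-like plan: {symbols:.1f} s")  # faster despite 40x surfaces

# Turning off a display layer removes its elements from both terms,
# which is why ALDO can make a very complex plan feel responsive again.
```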
Michael_Gia Posted April 28, 2019

I'm surprised that textures are not as taxing as I thought they would be. This is probably why the Standard View is even snappier than Vector View. Puzzling, since most other software of this type tends to bog down when you add bitmap textures to the model.
TheKitchenAbode Posted April 29, 2019 (Author)

2 hours ago, Michael_Gia said:
I'm surprised that textures are not as taxing as I thought they would be. This is probably why the Standard View is even snappier than Vector View. Puzzling, since most other software of this type tends to bog down when you add bitmap textures to the model.

Though not tested thoroughly, texture complexity did increase processing time. This includes bump maps and other texture effects such as reflections, lighting, shadows, etc.; they all add to the number of computations needed, so there is a cumulative effect, as would be expected. Concerning standard versus vector view camera performance, it appeared to depend upon whether or not things such as textures and patterns were turned on. The most important point, I believe, is that all of these processes appeared to be highly CPU dependent, not GPU dependent. This surprised me, as I was expecting the GPU to play a much more important role. As for what tends to bog down the software, that appears to be a bit more complicated, and I don't wish the results to undermine the impact of textures, as they do have a significant effect on performance. The stress tests were not designed or intended to establish the point at which CA transitions from efficient to cumbersome; they were designed to identify the specific processes most impactful on performance and the role played by each hardware component.
Rich_Winsor Posted April 29, 2019

I always knew it was ALDO's fault. Lots of good info Graham.
TheKitchenAbode Posted April 29, 2019 (Author)

4 hours ago, Rich_Winsor said:
always knew it was ALDO's fault.

It's not ALDO's fault. ALDO, besides other important benefits, just provides a means to control how little or how much CA has to process when building the model. If one was to point a finger at anything, it would be the 3D rebuild; this appears to be where most of the heavy lifting is done. What's unfortunate is that during this process, likely due to the mix of threading operations, the CPU is underutilized. From what I observed, CPU utilization averaged less than 20% for approximately 80% of the overall processing time. The reality may be that these processes can't be coded to be more efficient; it is what it is. At least we have ALDO to help out when lag due to plan complexity starts to creep in.
TheKitchenAbode Posted April 29, 2019 (Author)

I have added the edit dated 4/29/2019 to the main article above. Hope it provides a bit more clarification concerning surfaces.
HumbleChief Posted April 29, 2019

Very nice write-up Graham. On some level I knew that the 3D rebuild was the slowdown culprit, but only anecdotally, as my ability to perform any kind of meaningful testing is pretty much zero. Surprised at how little lifting the GPU is doing. I think the current trend is to offload some CPU operations to the GPU, since GPUs are so capable these days, and I wonder if that's a programming train that Chief is currently not on board with? My assumption could also be incorrect. I have a model that I'm no longer working on that took 3-5 seconds to rebuild after each change to a roof plane. I upgraded the GPU and both CPUs and saw absolutely no change in performance. I guess you see what I mean by anecdotally, and I'm not sure there's a point here, but it was then that I decided not to throw money at Chief's slow performance, at least on my machine. Still a bit surprised at the performance via CPU versus GPU, but very valuable information, and I appreciate your post.
TheKitchenAbode Posted April 29, 2019 (Author)

5 minutes ago, HumbleChief said:
took 3-5 seconds to rebuild after each change to a roof plane. I upgraded the GPU and both CPUs and saw absolutely no change in performance.

Yes, 3-5 seconds is frustrating, especially when it occurs on every change. In general, I consider 1 second to be the threshold. Based on, say, 4 seconds, getting that down to 1 second would theoretically require the new processor to be 4 times faster than the one being replaced. We all know that's not likely feasible (a quick sketch of this follows below).

When it comes to the GPU, it seems that we tend to view CA in the same category as a video game, and from a card's video game performance we extrapolate its CA performance. This appears to be very misleading, and as such we don't derive the benefit we were expecting.
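To put that 4x figure in perspective, here is a back-of-envelope sketch. It assumes, purely for illustration, that the rebuild is almost entirely single-threaded, and the 15%-per-generation single-thread gain is just a rough rule of thumb, not a measurement:

```python
# How many CPU generations would a 4 s -> 1 s rebuild take if the
# rebuild is (illustratively) single-threaded, so only single-thread
# performance matters - not core count, and not the GPU?

current_s, target_s = 4.0, 1.0
required = current_s / target_s        # 4x single-thread speedup needed
gen_gain = 1.15                        # rough ~15% gain per generation
gens, speed = 0, 1.0
while speed < required:
    speed *= gen_gain
    gens += 1
print(f"Need {required:.0f}x single-thread speed: ~{gens} generations")  # ~10
```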
HumbleChief Posted April 29, 2019

3 minutes ago, TheKitchenAbode said:
When it comes to the GPU, it seems that we tend to view CA in the same category as a video game, and from a card's video game performance we extrapolate its CA performance. This appears to be very misleading, and as such we don't derive the benefit we were expecting.

...and spend LOTS of money on the fastest video card while gaining little to no performance? Which is your point above, I believe, but how do we NOT buy the fastest video card we can afford? At what point does the speed of the card simply stop mattering to Chief's overall performance? Is a 1080 good enough, so that any upgrade from it will have no effect? A 1070? A 1060? How can we possibly know, as our model begins to slow down (which it will as it gets larger), whether we have enough video card? Better upgrade. Whoops, no performance gain; throw more money, no performance gain. Sigh... The GPU data you posted might be the most important piece of the puzzle, but it's still very confusing when it comes time to spend more video card money on performance that may never materialize.
Chrisb222 Posted April 29, 2019

12 minutes ago, TheKitchenAbode said:
Yes, 3-5 seconds is frustrating, especially when it occurs on every change. In general, I consider 1 second to be the threshold. Based on, say, 4 seconds, getting that down to 1 second would theoretically require the new processor to be 4 times faster than the one being replaced. We all know that's not likely feasible.
When it comes to the GPU, it seems that we tend to view CA in the same category as a video game, and from a card's video game performance we extrapolate its CA performance. This appears to be very misleading, and as such we don't derive the benefit we were expecting.

In response to the part I bolded, an alternative would be for CA to utilize more of the resources available, at least according to the findings of this study. In regards to anecdotal experiences, I can attest that onboard CPU graphics are not capable of producing PBRs. At all. I thought about upgrading my system just to produce PBRs, but I actually prefer ray tracing and have adapted to the new RT requirements fairly well. After seeing this study I'm glad I didn't, because my puny 2014 Mac Mini with Iris CPU graphics actually runs X11 at a very pleasant speed and drives two 24" monitors perfectly... as long as I don't try to PBR. It seems that for my purposes a more powerful system would be a waste of money.
Alaskan_Son Posted April 29, 2019

I don't personally have the time or inclination to test these things myself, but here are just a few notes based on my own personal experience.

16 hours ago, TheKitchenAbode said:
I found no evidence that disk access times played any significant role in CA's performance other than in initial plan loading and library access.

I would suggest not downplaying the library access issue, as some of us spend a lot of time sorting through and just generally accessing the libraries. In addition, a lot of people's workflows (mine included) involve A LOT of multitasking: opening and closing lots of plans, opening and closing layouts, etc. I'm constantly opening and closing plan and layout files, searching the library, copying files from one location to another, and so on. I can tell you that upgrading my disk to an SSD was the single most noticeable change I've ever made. No question about it.

Also, I'm curious about a few operations that your test didn't seem to make much mention of... the things that I personally find to be amongst the slowest and most taxing operations in Chief:

- Importing, displaying, and editing heavy amounts of line work (CAD)
- Creating CAD Details From View
- Generating complex terrains
- Using imported PDF and/or image files in plan and/or elevation views
- Using boolean operations on large or complex groups of solids
- NOT using Live Views but rather using Plot Lines. I personally basically only ever use Plot Lines, and typically with shadows.

I'm also a little curious how much time you spent in some of the "other" rendering techniques. I personally spend most of my time in Vector View and also make a fair amount of use of some of the other rendering techniques. The Watercolor with Line Drawing mode is one of my favorites, and it also tends to be the slowest.

Also, just a side note, but for whatever it's worth, it's really not too uncommon for me to deal with house plans that have double or even triple the number of faces you had in your "house" plan. Not sure if the effectual differences would be linear or compound.
Chrisb222 Posted April 29, 2019

4 minutes ago, HumbleChief said:
At what point does the speed of the card simply stop mattering to Chief's overall performance?

For myself, I'm very happy with my humble little $700 Mac Mini with onboard CPU graphics. PBR would be the only justification for anything more powerful, unless you need muscle for huge projects. I only design single-family homes and almost never experience any lag.
TheKitchenAbode Posted April 29, 2019 (Author)

4 minutes ago, HumbleChief said:
The GPU data you posted might be the most important piece of the puzzle, but it's still very confusing when it comes time to spend more video card money on performance that may never materialize.

Just my opinion, but based upon the results, way too much emphasis is being placed on the video card, at least where CA is concerned. PBR'ing is a bit different and does rely more heavily on the video card, but even so I see no indication that one needs anything other than a mid-grade card. Mine is a GTX 1060 and it seems able to handle some very complex scenes without any real issues. Now, if one is PBR'ing all day and working all the time in a PBR scene, then that would likely warrant a better video card. However, PBR'ing is not 100% video card dependent; there is a lot of CPU processing needed before the scene is sent to the video card for final processing. So again, the video card upgrade route will only take you a certain distance.
TheKitchenAbode Posted April 29, 2019 (Author)

11 minutes ago, Alaskan_Son said:
I can tell you that upgrading my disk to an SSD was the single most noticeable change I've ever made. No question about it.

Absolutely, anything that involves disk access will benefit from a higher-performance drive. I did mention that my drive is an NVMe M.2 and that library and plan opening would be affected by drive performance. At this stage of testing, in respect to drives, I was looking to see what role, if any, the drive played when processing primary CA manipulations such as changing object attributes, 3D building, auto-building roofs and a few more. Under these circumstances there was no evidence of any significant drive activity. Agreed, there are many other functions and activities that would be worthy of evaluation; there is only so much time available to study this. I did test a number of camera view types: Standard, Vector, Glass and Technical Illustration. There were differences, especially when a camera view is initially opened, but this was mostly related to CPU processing of the model before it could be sent to the GPU. I believe your comment concerning the number of faces, assuming that means or includes surfaces, supports my observation that surface count alone is not the sole determinant of CA performance.
Renerabbitt Posted April 29, 2019

Graham, first off, much praise and respect +1... You have, on several occasions, spent tens of hours of your own time for the benefit of the community. Thank you! I've done a lot of testing in CA, so I have a pretty good indicator of what's going on behind my own personal workflow. Something I learned from reading this: turning off layers. Interesting. Again, thanks. I would also guess that smoothing angle could affect builds. Just a hunch, but for someone who never goes into 3D, I wonder if turning all symbol smoothing angles to 0 would have any impact on rebuild times.

A few things I will add, as they are industry standard and well known... Texture resolution absolutely has a direct correlation with graphics card load. I can't use 4K textures in CA like I can in other programs that have optimized texture loading or texture baking; I had to convert all of my textures down to 1080 for CA to give me the snappy system that I need. CA's stock textures are typically 512. A slow hard drive can be a bottleneck for a blazing fast machine; so can a slow graphics card or a cheap FSB/bridge, etc. (mobo). CPU cache helps CA. +1 to @Alaskan_Son - I said similar things in the previous thread go-around Graham and I had about system setups. CA is constantly writing; it's easy for CA to write a gig in 5 minutes, and every action performed in CA writes a little preference/log file to your OS drive. Library access is a huge part of all this as well, as the library preview can load up your graphics card.

Last thing I'll note is in conflict with what Graham stated as a general note, though he didn't specifically test for it. I can, with certainty, say that Layout is slower than plan, depending on what is being sent to layout. I have a plan open right now that has a PDF imported to the page. That plan view has been sent to layout, and it is without a doubt slower to navigate/pan/zoom than the associated plan view.
Alaskan_Son Posted April 29, 2019

1 hour ago, TheKitchenAbode said:
So again, the video card upgrade route will only take you a certain distance.

True. If anyone is using this thread to help decide where to invest their money, though, I would still definitely advise using Chief's published guidelines (they know better than you or I what the average user is doing with the software, as well as what their development plans are) AND considering not the average use case but the worst case... It may be that the video card almost never sees any use, but if you skimp on it and then decide to upgrade to a couple of big 4K monitors and/or decide to spend an hour PBRing that 100KB model with 2,000,000 faces, then you're gonna be pretty disappointed when it either doesn't work at all or, worse, Chief crashes on you. In my experience, even the most basic, inexperienced DIY user can easily push the limits... I'd say they're actually the users most likely to create an overly complex model loaded with symbols and then spend a bunch of time inadvertently pushing the limits of the video card toying around with the various rendering techniques.

Anyway, what you've posted is all good info... I'm just advising people to take it all with a grain of salt. The average forum user can be pretty quickly and easily influenced, in my experience, and very commonly doesn't take the time to fully read and comprehend all that's being shared. I just wouldn't want them skimming through and recklessly deciding the video card doesn't really matter.
TheKitchenAbode Posted April 29, 2019 (Author)

2 minutes ago, Renerabbitt said:
Last thing I'll note is in conflict with what Graham stated as a general note, though he didn't specifically test for it. I can, with certainty, say that Layout is slower than plan, depending on what is being sent to layout. I have a plan open right now that has a PDF imported to the page. That plan view has been sent to layout, and it is without a doubt slower to navigate/pan/zoom than the associated plan view.

My layout investigation was certainly limited, so any conclusions should be taken only in the context of what was explored. I have not done a lot of PDF investigation, but from my experience they can definitely be very problematic. I have no way to know what CA does when it imports a PDF, as far as how it interprets the PDF content for display. PDFs are comprised of PostScript, vectors, raster images and other stuff; I'm not sure if, say, the PostScript is the root of the problem or one of the others. The only thing is, I'm not sure why CA does not just automatically convert the PDF to a JPEG, especially when the imported PDF ends up being just a dumb object.
Renerabbitt Posted April 29, 2019

3 minutes ago, Alaskan_Son said:
as well as what their development plans are

THIS!

1 minute ago, TheKitchenAbode said:
The only thing is, I'm not sure why CA does not just automatically convert the PDF to a JPEG, especially when the imported PDF ends up being just a dumb object.

and THISSSSSS!!!!! Haha. The Surface Book 2 I tried out prompted a built-in warning from CA that the graphics card wasn't sufficient for the library browser when it was undocked from its 1060 card. I had to set the library preview from Vector to Standard in order for the library to be accessible with previews.
TheKitchenAbode Posted April 29, 2019 (Author)

6 minutes ago, Renerabbitt said:
The Surface Book 2 I tried out prompted a built-in warning from CA that the graphics card wasn't sufficient for the library browser when it was undocked from its 1060 card. I had to set the library preview from Vector to Standard in order for the library to be accessible with previews.

When you say undocked from its 1060 card, do you mean you tried running CA on the integrated chip?
Renerabbitt Posted April 29, 2019

1 minute ago, TheKitchenAbode said:
When you say undocked from its 1060 card, do you mean you tried running CA on the integrated chip?

Yes. The Surface Book 2 base houses a 1060 card (probably set up this way so you can upgrade the keyboard base and graphics card without needing to buy a full system in the future). When the tablet portion is undocked, the integrated chip is the only active graphics processor, and the onboard RAM comes into play as graphics memory.
TheKitchenAbode Posted April 29, 2019 (Author)

Just now, Renerabbitt said:
Yes. The Surface Book 2 base houses a 1060 card (probably set up this way so you can upgrade the keyboard base and graphics card without needing to buy a full system in the future). When the tablet portion is undocked, the integrated chip is the only active graphics processor, and the onboard RAM comes into play as graphics memory.

Seems strange, as I have a Spectre x360 that only has an Intel 530 integrated chip and I have never had the issue you are experiencing. What I do know is that CA does not like the graphics being changed while it is active. From what I understand, CA only sets up for the graphics upon initial program loading, so if you change the graphics while CA is active it will still assume the other graphics card is running. Maybe this is the issue, as with the Surface Book the moment you undock, it switches over to the integrated chip automatically, since the 1060 is in the main base.
Renerabbitt Posted April 29, 2019

8 minutes ago, TheKitchenAbode said:
the integrated chip automatically, since the 1060 is in the main base

The Surface Book will not allow you to undock while CA is running; it prompts you to close the program. It most likely has more to do with my library, would be my guess. This is getting off topic, though it does point to a hardware-related task.
TheKitchenAbode Posted April 29, 2019 (Author)

39 minutes ago, Alaskan_Son said:
Anyway, what you've posted is all good info... I'm just advising people to take it all with a grain of salt. The average forum user can be pretty quickly and easily influenced, in my experience, and very commonly doesn't take the time to fully read and comprehend all that's being shared. I just wouldn't want them skimming through and recklessly deciding the video card doesn't really matter.

Unfortunately, I can't control whether or not one fully reads what's posted. I also have no control over whether the reader has sufficient knowledge to interpret what is said within its proper context. This, however, should not detract from nor undermine the findings. I do not believe anything in the post was stated as definitive; in fact, I made an effort to use terms such as "indicated", "appeared to be", "seems to be", etc. I'm also fully open to the experiences of others and to critical peer review. The purpose here was to help identify potential bottlenecks that could impede one's ability to work fluidly in CA as model complexity increases. For those inclined, understanding this could prove useful as they grapple with how to resolve an issue in the most efficient manner.
TheKitchenAbode Posted April 29, 2019 (Author)

30 minutes ago, Renerabbitt said:
The Surface Book will not allow you to undock while CA is running; it prompts you to close the program. It most likely has more to do with my library, would be my guess. This is getting off topic, though it does point to a hardware-related task.

It's an interesting circumstance. Given that the notification came from CA, it might be worth sending it in to tech support.
Alaskan_Son Posted April 29, 2019

1 hour ago, TheKitchenAbode said:
The only thing is, I'm not sure why CA does not just automatically convert the PDF to a JPEG, especially when the imported PDF ends up being just a dumb object.

Multi-page capability, for starters, maybe?