Why does CA get slower and slower the more I use it? [SOLVED]


StevenJ

I am not an architect; I am a software engineer.  I had to help someone work out why their Chief Architect (version X9) was getting slower and slower the more they used it.  They were convinced it was the speed of their PC.  It has an 8-core AMD Phenom, two 128 GB SSDs, a 500 GB HDD, and 16 GB of RAM; the processor is a bit old but certainly not slow.

 

I have read a number of reports on here about similar problems, none of them with real solutions.

 

This is what happens.  

As you use CA your design file grows.  Initially things seem fast, but over time they get slower and slower as your design becomes more complex.  The end result, for the plan I was looking at, was that it took in excess of 30 seconds just to turn a layer on or off.  So slow that the program was effectively unusable.

 

The curious thing was the CPU was idle the whole time.  So what is going on? This:

I limited my testing to a single case: changing layer visibility.  Every single layer visibility change was taking ~30 seconds.  Horrifically slow.

Every time you change the layer visibility CA-X9 saves the undo state, because you might need to undo that.  

The undo state in my case is 305,347 KB; yes, over 300 MB.  The data file itself is only 155,868 KB, so the undo state is roughly twice the size of the data file.

Every single time I do something that is "undoable", like changing layer visibility, a new 300+ MB file is created and saved to disk.

The CPU sits idle this whole time.  For a test run of four visibility changes, Perfmon reports CA issuing about 9 writes a second, each write about 267 KB, with CPU time for CA at a whole 1.9%.  CA is bogged down writing these undo files.
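To put the Perfmon numbers in perspective, here is the effective throughput they imply (a back-of-envelope sketch in Python; the figures are the ones measured above):

```python
# Figures from the Perfmon run above: ~9 writes/second, ~267 KB per write.
write_rate = 9
write_size_kb = 267

# Effective sequential throughput while writing undo files:
throughput_mb_s = write_rate * write_size_kb / 1024
print(round(throughput_mb_s, 1))  # → 2.3 MB/s
```

Roughly 2.3 MB/s is a tiny fraction of what even a tired SSD can sustain, which fits the picture of the drive struggling with huge allocations on a nearly full volume.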

 

Why is it so slow for my friend?  His SSDs aren't beasts, but they are OK.  However, his C: drive is about 70% full, and SSDs slow down the fuller they get.  A 305 MB file is a huge allocation, and as each undo state is a brand-new file, every three undo states cost him 1 GB of disk space.  Also, writing a 305 MB file for every edit action, deleting it once the undo limit is reached, and writing it again, for every single edit, is slowly but surely wearing out his SSD and making it slower in the process.

 

Worse, if CA runs out of disk space for undo files, which could easily happen for him because his SSDs are only 128 GB each, it crashes.  Hard crash.

 

So I did some more analysis: what is in these massive undo files?  Well, between each of them, for a layer visibility change, exactly four bytes change.  That's right: 32 bits.

 

fc /b undo_TestHouse_3512_14.undoplan undo_TestHouse_3512_15.undoplan
Comparing files undo_TestHouse_3512_14.undoplan and UNDO_TESTHOUSE_3512_15.UNDOPLAN
094F5227: 03 02
12A0D5D3: 02 03
12A308C8: 5E 6D
12A30A33: 6D 7C

Yes, CA records 305 MB of data to remember that 4 bytes changed.
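For anyone who wants to reproduce the comparison without `fc /b`, a few lines of Python do the same job (the sample buffers here are made up for illustration; point the function at the contents of two real `.undoplan` files to use it):

```python
def byte_diff(a: bytes, b: bytes):
    """Report (offset, old, new) for every differing byte, like `fc /b`."""
    return [(i, x, y) for i, (x, y) in enumerate(zip(a, b)) if x != y]

# Made-up 4-byte buffers standing in for two undo files:
old = bytes([0x00, 0x03, 0xFF, 0x5E])
new = bytes([0x00, 0x02, 0xFF, 0x6D])
print(byte_diff(old, new))  # → [(1, 3, 2), (3, 94, 109)]
```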

 

This seems like a waste of resources and a massive (and variable) speed penalty to inflict on users, not to mention the progressive and unnecessary wear such massive writes inflict on SSDs over time.

 

But there is a workaround:

1. Buy lots of RAM.  My friend has 16 GB; if you are doing lots of plans I would recommend at least 32 GB, and preferably 64 GB.

2. Download ImDisk and set up a RAM drive.  You need at least twice the size of your biggest plan file times the number of undo levels you want.  (Use any RAM disk software; I just found this one first.)

3. Go into CA and Preferences -> General -> Folders

Change BOTH "My Temporary Folder" and "My Undo Folder" to point to the new ram drive.

4. Go into Preferences -> General

Change Undo levels to a number small enough that you won't fill your RAM disk.  (I chose 10.)
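Step 2's sizing rule can be written down explicitly; this little helper just encodes the arithmetic (2 x biggest plan x undo levels), using the ~150 MB plan from this thread as the example:

```python
def ram_disk_size_mb(biggest_plan_mb, undo_levels):
    # Each undo state is roughly twice the plan size, so reserve
    # 2 x plan size x undo levels (add headroom on top of this).
    return 2 * biggest_plan_mb * undo_levels

print(ram_disk_size_mb(150, 10))  # → 3000, i.e. about a 3 GB RAM drive
```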

 

I would also turn off Auto Save and set Auto Archive Files to "Previous Save" in General -> File Management, but they are not necessary for this speed-up.

 

Doing this, layer changes went from 30 seconds to almost instantaneous.  The WHOLE PROGRAM is now phenomenally faster.

 

Alternatively, just turn off undo altogether.

 


I am a software developer as well as Chief user and did some analysis on file activity a while back, initially for library search performance issues.

 

I saw the same heavy writes for undo, but given I never see much of a delay (my plans are normally 75-100 MB) I just moved the temporary and undo folders to a dedicated small but very fast SSD I use for applications that write frequently enough to affect performance. I have enough memory for a RAM drive, and used to use a commercial one with persistence options, but with the reasonable cost of small fast SSDs it's no longer the best option for me.

 

There are many major applications that perform too much file I/O (Chrome, Norton, Office), and it's often overlooked that anti-virus software can scan some of these files on every write, so Chief is not alone here.  I do think it's unfair to suggest CA developers learn about linked lists; Chief is a complex application and clearly their developers have skills, but even the best companies don't always prioritise certain issues or realise the real-world impact. I specialise in highly scalable web applications and find such issues all the time.

 

There are many other areas of Chief where some profiling and general performance analysis appears to be in order: the needless full 2D redraw discussed in another thread recently, and some far-from-optimal file usage around the library, for a start.

 

I suggest you make a post in the suggestions section about this, or raise a ticket.


I appreciate the time, care and intelligence you both exhibit in this thread. I have been an end user of this software since 1994 as a professional designer/drafter/teacher, but I do not have any background in computer languages or internals. I know what I know through daily use of the application to produce professional-grade products. I have deadlines to meet. Commonly a plan set that will be acceptable to building permit authorities (.plan and .layout files) totals around 7 to 10 MB, whether the project is a custom home or a commercial project. The creation of plans requires very little embellishment (furniture, plants, people, animals, etc.); what is required to communicate to all fundamental trades is all that is needed. So in my years of using Chief, I have NEVER experienced such laggy behavior except on two or three extremely large projects (a commercial building of multiple stories totaling over one million square feet).

 

I stopped using auto-save back in 1999; rather, I manually save every few seconds as I work. Only for flashy display renderings do I add plants, people, bric-a-brac and other non-essential objects, as these tend to slow my workflow down while I wait for my PC's hardware to process the additional 3D faces of such unnecessary objects. Such creations are merely for fun or for promotional purposes, not for making money per se.

 

I have observed that amateurs, casual users, hobbyists and doodle-dadlers add this level of unnecessary detail by default, making their plans a bear for anything other than a supercomputer. I am not saying this should not be done; I am saying that such users commonly have minimally equipped PCs and so get swamped with sluggish performance. Professional users require completed products as opposed to "works of art". I do know a few users who specialize in this kind of product, with no complaints about slowness from them.

Basically, you have confirmed what I already observed from my experiences. Thank you for your efforts, I just wanted to also add that not everyone uses Chief Architect software for Lucas Film level graphics.

 

DJP


  • 2 weeks later...

Just a note on using RAM drives with CA: I set one up last week as a test after reading this thread, but they will stop the Library Update from working properly, as evidenced by myself and Roland, who found the issue here :                                  (*** switching the temp drive back to an HDD for the update works to fix this)

 

 


  • 2 weeks later...

FYI, another follow-up on using ImDisk (see post above too): a second issue has arisen for me. ImDisk was stopping Windows Update from working; I suspect something to do with the Windows temp file/folder. Once the RAM disk was unmounted and ImDisk uninstalled, the Win10 updates installed as normal again.

 

seems to be an issue for others as well

https://sourceforge.net/p/imdisk-toolkit/tickets/9/

 

 

 

On 4/4/2018 at 12:35 AM, StevenJ said:

 

But there is a workaround:

1. Buy lots of RAM.  My friend has 16 GB; if you are doing lots of plans I would recommend at least 32 GB, and preferably 64 GB.

2. Download ImDisk and set up a RAM drive.  You need at least twice the size of your biggest plan file times the number of undo levels you want.  (Use any RAM disk software; I just found this one first.)

3. Go into CA and Preferences -> General -> Folders

Change BOTH "My Temporary Folder" and "My Undo Folder" to point to the new ram drive.

4. Go into Preferences -> General

Change Undo levels to a number small enough that you won't fill your RAM disk.  (I chose 10.)

 

I would also turn off Auto Save and set Auto Archive Files to "Previous Save" in General -> File Management, but they are not necessary for this speed-up.

 

Doing this, layer changes went from 30 seconds to almost instantaneous.  The WHOLE PROGRAM is now phenomenally faster.

 

Alternatively, just turn off undo altogether.

 

 


  • 1 month later...

I had installed Radeon RAMDisk a few months back and tried this, and it didn't make any noticeable difference, so I completely discounted using a RAM disk.  On a whim I just tried ImDisk as you did, and got an improvement from 3 seconds to place a cabinet to almost instantaneous.  Those 3 seconds add up: if I'm making 10 edits per minute, that is half a minute of waiting for Chief to respond.  And while I don't see myself working at 10 edits per minute on a regular basis, I would guess this could easily account for a 20% speed improvement in my work (not to mention reduced frustration).  Time is money...

 

And the interesting part: that 3-second time loss was only apparent when I had a 3D view open.  Without the view open it was almost as fast as with the RAM disk.  Something is really odd about that; I checked it with auto rebuild walls/floors/roofs disabled and it didn't matter.  Chief really needs to optimize this better.  Between this and a huge backlog of CAD blocks, this program becomes a turtle in no time.


I just did a quick test over a minute of my most efficient work with the RAM disk running, and I performed 14 draw/edit operations.  Running Chief without the RAM disk, it would have taken me at least another 30 seconds to perform all those tasks.  That is a 33% time saving; over the course of an entire day that could be huge!  I definitely recommend this to anyone who wants to speed up their workflow.


On 4/25/2018 at 8:58 AM, Kbird1 said:

FYI, another follow-up on using ImDisk (see post above too): a second issue has arisen for me. ImDisk was stopping Windows Update from working; I suspect something to do with the Windows temp file/folder. Once the RAM disk was unmounted and ImDisk uninstalled, the Win10 updates installed as normal again.

 

seems to be an issue for others as well

https://sourceforge.net/p/imdisk-toolkit/tickets/9/

 

 

 

 

 

I wonder: if you decline the option in ImDisk to create a Temp folder, and manually create a Temp folder on the RAM disk just for Chief, would Win 10 updates work fine?  I'm guessing that the Temp folder option in ImDisk somehow tells Windows to use that location instead?  I'm only shooting in the dark here, as I'm not super savvy on these sorts of things.


On 4/25/2018 at 8:58 AM, Kbird1 said:

FYI, another follow-up on using ImDisk (see post above too): a second issue has arisen for me. ImDisk was stopping Windows Update from working; I suspect something to do with the Windows temp file/folder. Once the RAM disk was unmounted and ImDisk uninstalled, the Win10 updates installed as normal again.

 

seems to be an issue for others as well

https://sourceforge.net/p/imdisk-toolkit/tickets/9/

 

 

 

 

 

Hmm, I may be on to something here.  It looks like ImDisk instructs the system to use its Temp folder.

[attached screenshot: Untitled 1.jpg]


8 hours ago, KervinHomeDesign said:

 

I wonder: if you decline the option in ImDisk to create a Temp folder, and manually create a Temp folder on the RAM disk just for Chief, would Win 10 updates work fine?  I'm guessing that the Temp folder option in ImDisk somehow tells Windows to use that location instead?  I'm only shooting in the dark here, as I'm not super savvy on these sorts of things.

 

You would have to create the folder every time you booted the computer, before starting CA, as the RAM disk is created at boot time and is not persistent.

 

For the last 5 years I have had a CA_Temp folder and a CA_Undo folder in the root of the C: drive for CA to use, as C: is my NVMe SSD, which is pretty quick but not instantaneous.  That has allowed me to keep using Undo, which is useful in case I get down a "rabbit hole" trying to fix something....

 

There may be other RAM disks available which work differently; I have not had time to look into it further...

 

M.

 

 


I have also been using a dedicated folder on an NVMe SSD for Chief undo, and as @Kbird1 said, it's easy enough to create a folder on a RAM disk at boot even if the RAM disk software itself doesn't provide the facility. The last time I reviewed RAM disks, some had options and hooks to prepare and/or persist certain files across boots, but failing that a simple command (batch) script can create a directory for Chief on each boot.

 

As I work in software, I decided to monitor the behaviour of Chief's undo with appropriate tools during a couple of hours of work, and it is indeed very heavyweight.  I have developed undo/redo for complex applications with multiple users and highly linked data, so I am aware of the challenges, but even so some things in Chief are surprising. During that session Chief wrote more data to my SSD than anything else, and that includes Chrome with loads of tabs (notorious for its cache), Norton, and various other development tools and running applications, including a virtual machine.

 

From that session I noted:

  • It appears that a few operations, such as object moves, have been optimised to store a relatively small undo file (2-10 MB).  Some operations are trivial to reverse whereas others require data lost by the change, so perhaps this was the undo optimisation mentioned in X9 or X10 (can't remember which).
  • Most operations trigger an undo file roughly double the size of the plan file, as noted in the OP.
  • The large undo files are stored even when clicking OK in a dialog with no changes.
  • Many operations that appear easy to reverse, such as toggling layer visibility or changing a setting in defaults, still trigger the large undo file.
  • The binary contents of the undo files would be easy to compress and/or shrink during writing (I've not checked whether they contain the SQLite tables Chief frequently uses or just another representation of the plan).
  • I generated 3.5 GB of undo files from a 65 MB plan file using the default 50 levels of undo, so I wouldn't want to work on a monster plan!
  • I tried a short session using a medium-speed hard disk for undo instead of my fast NVMe, and it was so painful I gave up.  I've always used NVMe SSDs with Chief and I had no idea it could be this slow.

As a software developer I appreciate that some things are not as easy as they seem, and a much more efficient undo/redo may impact Chief's whole internal storage model. However, there appear to be some ways to improve the speed of the current system without major work, e.g.:

  • Use an internal memory cache with a user-configurable size (in effect an internal RAM disk) and push the oldest undo states to disk when it fills.
  • Increase the number of operations that don't need to store the large undo file.  It seems terrible that toggling layer visibility causes a large file per toggle (unless you're really quick).
  • Run the data through a fast, low-overhead compression (LZ4 or similar) before storage.  There's always a trade-off between CPU and disk speed, especially against NVMe SSDs, but compression can run in one or more background threads and be used only when beneficial.  As a quick test, single-threaded LZ4 ran at 700 MB/s on one core of my oldest dev PC and reduced the file to 40% of the original.  Good NVMe SSDs can manage 2000+ MB/s write speed, but even single-threaded this would be useful for caching in RAM to greatly increase capacity, and on disk it would still help most users. Some of the applications I've written test disk write speed to decide whether to use compression.
  • Some undo files, as noted in the OP, are very similar. It may be viable to use a differential algorithm to store only the differences between a sequence of similar undo files, though it would need code to ensure the base undo is retained or rolled forward as needed.
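As a rough illustration of the compression point, using the stdlib's zlib at its fastest level as a stand-in for LZ4 (which is faster still), highly repetitive plan-like data shrinks dramatically:

```python
import zlib

def compress_undo(data: bytes) -> bytes:
    # level=1 favours speed over ratio, in the spirit of LZ4
    return zlib.compress(data, 1)

def decompress_undo(data: bytes) -> bytes:
    return zlib.decompress(data)

# Repetitive stand-in for undo-file contents (~0.95 MB):
sample = b"wall cabinet layer " * 50_000
packed = compress_undo(sample)
print(len(packed) < len(sample) // 10)  # → True
```

Real undo files won't compress this well, but the OP's observation that consecutive files differ by 4 bytes suggests a great deal of redundancy for any compressor to exploit.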

I have run on far longer than intended and have some other work to do now :).  Suffice to say the current undo/redo system appears to need major improvement, but I hope we can get some quick-fix improvements while we wait.


Whenever I've looked at the undo function, my sense has been that the majority of the time is taken up by the 3D model rebuild, especially on complicated plans. I have my undo set to 10 levels; in most cases I only need to do one or two undos, so 50 seems overkill, and you can always just call up the backup plan. A RAM disk might help, but this only affects the file read/write times. You will also need to be careful sizing a RAM disk, as this reduces the RAM available for other operations and could force your system to resort to a swap file (Superfetch), which will potentially slow down other operations.


2 hours ago, TheKitchenAbode said:

Whenever I've looked at the undo function my sense has been that the majority of time is taken up by the 3D model rebuild

 

If you have a 3D view open then yes, that adds more time, but for large plans the file operation can be much slower than the redraw (unless in PBR mode).  A RAM disk can be used for Chief undo only, to avoid the issues you mention.

 

The 3D speed reminds me of another performance issue I've reported: having a 2D and 3D view of the same plan on the same level causes a slowdown when moving in 3D. The cause appears to be the 2D plan regenerating/redrawing for every camera move, whether the camera symbol is shown or not. This is very noticeable with a 3D mouse due to the ease of movement; the workaround is to move the camera to another floor and then move it in 3D back to the applicable floor, so the plan considers it on another floor.  As a quick fix I'm sure CA could either make the camera symbol an overlay to avoid this, or just throttle 2D updates (say once per second) when moving in 3D.

 

My day job is software performance and scalability so please excuse all the detail  :)


The 3D regeneration is good and bad: because it constantly keeps the 3D model current, there is no delay when switching between active cameras. Of course, the drawback is that this may cause some slowdown when one is working primarily in a 2D view. I believe that with CA you are really always working in 3D; the plan and 2D elevations are provided because there are many times when that type of view is a more conducive way to lay out particular elements.


13 hours ago, Kbird1 said:

 

You would have to create the folder every time you booted the computer, before starting CA, as the RAM disk is created at boot time and is not persistent.

 

For the last 5 years I have had a CA_Temp folder and a CA_Undo folder in the root of the C: drive for CA to use, as C: is my NVMe SSD, which is pretty quick but not instantaneous.  That has allowed me to keep using Undo, which is useful in case I get down a "rabbit hole" trying to fix something....

 

There may be other RAM disks available which work differently; I have not had time to look into it further...

 

M.

 

 

 

Both ImDisk and the Radeon RAMDisk that I've tried have a feature that saves the drive as a disk image on shutdown and loads it again on startup.  This is almost instant for all purposes, and saves you the time of setting up the folders each time.  Just don't open Chief while the drive is unmounted; if you do, Chief will reset the temp folders to default.

 

I wish I had known about NVMe drives when I researched the system that I built.  I would have gone with a 960 GB Samsung Pro M.2 instead of the 500 GB EVO.  I could still upgrade if I wanted, but after all the cash I spent on this system I think I'm good for now.

 

6 hours ago, Smn842 said:
  • The large undo files are stored even when clicking OK to a dialog with no changes.

 

This needs to be addressed, IMO.  If Chief can recognize that nothing has changed, there should be no need to record an undo state.  I noticed this behaviour many years ago.

 

Quote
  • Use an internal memory cache with a user-configurable size (in effect an internal RAM disk) and push the oldest undo states to disk when it fills.

 

I was thinking the same thing earlier, and I would really love this.  I don't see it as being super difficult to implement, as long as it is an option, so that users with limited system resources could choose between caching undo states to disk or RAM and, as you said, specify the maximum amount of RAM that Chief could use for this.

 

 

5 hours ago, TheKitchenAbode said:

You will also need to be careful sizing a RAM disk, as this reduces the RAM available for other operations and could force your system to resort to a swap file (Superfetch), which will potentially slow down other operations.

 

I upgraded my system from 16 GB to 32 GB of RAM just for this purpose.  I'm currently running a 3 GB RAM disk, which I've determined is enough for the file sizes I work with, but I could go up to 16 GB and still have half my RAM available for general use by the system.  I've monitored my RAM usage without a RAM drive running, and I don't think I've ever used more than 10 GB.

 

Also, forgive my ignorance of how software runs, but it seems to me that Chief is simply making us wait for it to write each undo state before we can continue working.  Whether it's 3 seconds for me or 30 seconds for the OP's client, Chief seems to be saying "hold on, don't do anything until I write an exact copy of what you have here to disk in case you change your mind."  And I've observed, much as the OP did, that the CPU runs virtually idle while Chief does this.  Can Chief's processes not be designed to run undo save operations in the background while the user works uninterrupted?  If the speed of writing to disk is slowing us down, why does the program make us wait for it?  This doesn't make sense to me.  Now, for a plan that requires 30 seconds to save an undo state, that could create quite a backlog, I admit.  I don't work with plan files nearly that large: my largest files are for a 4-plex and a very large custom home, and each was no more than 24 MB.  Even at that file size, small in comparison to the OP's, Chief did seem to move like frozen molasses out of a glass ketchup bottle.
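The background-save idea is standard producer/consumer territory. A hypothetical sketch (none of this is Chief's actual code): the edit handler enqueues the undo state and returns immediately, while a worker thread does the disk I/O:

```python
import os
import queue
import tempfile
import threading

undo_queue = queue.Queue()

def writer_loop():
    # Drains undo states to disk so the UI thread never blocks on I/O.
    while True:
        path, data = undo_queue.get()
        if path is None:          # sentinel to shut the worker down
            break
        with open(path, "wb") as f:
            f.write(data)
        undo_queue.task_done()

threading.Thread(target=writer_loop, daemon=True).start()

# The "edit handler" just enqueues and carries on working:
undo_path = os.path.join(tempfile.gettempdir(), "undo_0001.bin")
undo_queue.put((undo_path, b"serialized undo state"))
undo_queue.join()  # demo only; the UI would NOT wait here
```

The catch, and perhaps why Chief blocks, is that the state being written must not change while it is on the queue, which either means a snapshot copy (more RAM) or blocking anyway when edits come faster than the disk can drain them.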

 

 


My curiosity has been aroused, so I have just completed some experimentation concerning this undo. I have a standard SATA hard drive and an Intel Optane M.2 drive. Using the CA Hillside Contemporary plan, I assigned my undo folder in CA to the SATA hard drive and did a number of alterations and undos/redos. I then assigned the undo folder to my Intel Optane drive and repeated the process. While doing this I monitored disk access, memory and CPU using a reasonably comprehensive monitoring program, and also monitored the file activity in the undo folder. In all honesty, the difference in response time was hardly detectable, even though my Intel Optane is likely more than 50 times faster than my SATA hard drive. When doing an undo or redo, I would estimate that in most cases it took 1 second or less; the max I encountered was maybe 2 seconds, and the results were the same whether I was making the change in plan view, elevation, or a standard 3D view. It also seemed that CPU time was always considerably more than disk access time; of that 1 second, I would estimate CPU time represented at least 75%.

 

When monitoring the undo folder, CA created a file each time I performed an action. For actions such as moving an object, deleting something, or changing its size by dragging, CA created a relatively small file, between 200 KB and 1,400 KB. If I opened an object DBX and closed it, the saved file was 85,000 KB, obviously the entire plan. It also saved the entire plan when using something like the material painter. Turning layers on or off also resulted in a full save.

 

As actions and undos/redos were being performed, CA would automatically purge the undo folder. I did not find any difference in individual undo/redo times with respect to the total number of files in the undo folder.

 

It appears that CA's undo/redo process works on a last-in, first-out basis. Total undo/redo time is therefore related to how many levels one needs to step back: if each undo takes 1 second and you need to go back 10 levels, that's going to take 10 seconds. Unfortunately, there is no way to jump the queue and select a past change directly.
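The behaviour described, last-in-first-out with the oldest state purged past the undo limit, matches a bounded stack; a minimal sketch of the idea:

```python
from collections import deque

class UndoStack:
    """LIFO undo; the oldest state is silently discarded past `levels`."""
    def __init__(self, levels=10):
        self._states = deque(maxlen=levels)

    def push(self, state):
        self._states.append(state)

    def undo(self):
        # Only the most recent state is reachable: no jumping the queue.
        return self._states.pop() if self._states else None

u = UndoStack(levels=3)
for s in ("A", "B", "C", "D"):
    u.push(s)
print(u.undo(), u.undo())  # → D C  ("A" was purged by the 3-level limit)
```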

 

My conclusion is that the slowness being encountered seems to be more complex than just read/write times. My sense is that there are significant CPU operations involved in preparing the data for writing and putting the model back together after a read.


44 minutes ago, TheKitchenAbode said:

My conclusion is that the slowness being encountered seems to be more complex than just read/write times. My sense is that there are significant CPU operations involved in preparing the data for writing and putting the model back together after a read.

 

I agree that generating such complex and large undo files is CPU-intensive, but with a large enough plan and double-size undo files, disk I/O became a major factor in my test. Switching between a fast NVMe SSD and a good HD for undo was very noticeable, and painful with 130-200 MB undo files (twice the plan size).

 

Without profiling in more detail, it feels like there isn't much CPU time spent streaming out the undo file (with my few amateur plans at least), but certainly a lot spent restoring a model, which is logical. I don't mind a bit of slowness when undoing, but not an impact on every operation, and the amount of disk space required is also excessive. Ultimately CA needs to reduce the undo file size, which reduces all the overhead; let's hope they look at this for the next release.


2 hours ago, TheKitchenAbode said:

My conclusion is that the slowness being encountered seems to be more complex than just read/write times. My sense is that there are significant CPU operations involved in preparing the data for writing and putting the model back together after a read.

Read and write times seem to be very significant for me, as simply changing my temp folder location to the RAM disk has sped up almost all edits and operations from 2-3 seconds to near instant.  You are probably more qualified than I am to speak to other bottlenecks and speculate about other ways to optimize this, but it seems that for the average user with enough RAM, a RAM disk is a simple solution.


1 minute ago, KervinHomeDesign said:

Read and write times seem to be very significant for me, as simply changing my temp folder location to the RAM disk has sped up almost all edits and operations from 2-3 seconds to near instant.

 

Not saying that there is no improvement; in your case, if you need to knock a second or so off the lag, then this should do the trick. If you assess things mathematically from a read/write perspective, you can predict that kind of improvement: a standard SATA hard drive might read/write at about 150 MB/sec, a decent SSD will get you about 300 MB/sec, a good NVMe might reach 600 MB/sec, and a RAM disk would do it in milliseconds. However, in all those cases a 200 MB CA plan file should read in less than 1.5 seconds. I'm just suggesting that if someone, as the OP stated, is experiencing a 30-second lag when turning off a display layer, then there must be something else going on.
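Running the numbers from the figures quoted above for a 200 MB undo file (sustained sequential write, ignoring everything else going on):

```python
def write_seconds(file_mb, speed_mb_s):
    # Seconds to write one undo file at a sustained sequential speed.
    return file_mb / speed_mb_s

for name, speed_mb_s in [("SATA HDD", 150), ("SATA SSD", 300), ("NVMe", 600)]:
    print(name, round(write_seconds(200, speed_mb_s), 2))
# → SATA HDD 1.33 / SATA SSD 0.67 / NVMe 0.33
```

Even the worst case is under two seconds, which supports the point that a 30-second lag has to involve more than raw transfer time.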


1 hour ago, Smn842 said:

 

I agree that generating such complex and large undo files is CPU-intensive, but with a large enough plan and double-size undo files, disk I/O became a major factor in my test. Switching between a fast NVMe SSD and a good HD for undo was very noticeable, and painful with 130-200 MB undo files (twice the plan size).

 

Without profiling in more detail, it feels like there isn't much CPU time spent streaming out the undo file (with my few amateur plans at least), but certainly a lot spent restoring a model, which is logical. I don't mind a bit of slowness when undoing, but not an impact on every operation, and the amount of disk space required is also excessive. Ultimately CA needs to reduce the undo file size, which reduces all the overhead; let's hope they look at this for the next release.

 

You can always go into the undo folder and delete the leftover files; I'm not sure why CA does not do this when a plan is closed.

 

Concerning the impact of, say, 50 undo files: I checked my main C: drive and there are more than 185,000 files on it. I'm not sure 50 more has any real impact on overall read/write performance.


12 minutes ago, TheKitchenAbode said:

 

Not saying that there is no improvement; in your case, if you need to knock a second or so off the lag, then this should do the trick. If you assess things mathematically from a read/write perspective, you can predict that kind of improvement: a standard SATA hard drive might read/write at about 150 MB/sec, a decent SSD will get you about 300 MB/sec, a good NVMe might reach 600 MB/sec, and a RAM disk would do it in milliseconds. However, in all those cases a 200 MB CA plan file should read in less than 1.5 seconds. I'm just suggesting that if someone, as the OP stated, is experiencing a 30-second lag when turning off a display layer, then there must be something else going on.

You are probably right that there is something else going on.  But the RAM disk seems to take a huge chunk out of the problem right off the bat.  A write time of 1.5 seconds for a 200 MB file adds up to a lot of time over the course of a day, and even that 600 MB/sec NVMe drive is a bottleneck compared to the RAM disk (0.33 seconds vs. 0.001 seconds): the RAM disk takes less than 1% of the time the NVMe drive takes.  That is a MASSIVE time gain over a full day's work.  That being said, there are other processes besides the writing of the undo file, and by no means does the RAM disk reduce times to a millisecond.  There is still a very slight delay, though less than a second, down from 2-3 seconds.  But it is below the threshold of intolerable, which is probably just short of 1 second.

 

I should also note that my files don't come close to 200 MB; my largest ever is 24 MB, and I still get slowdowns of up to 3 seconds per operation.  Closing all 3D views improves things a lot, but the RAM disk is smoothest and lets me keep 3D views open while I work.


2 minutes ago, KervinHomeDesign said:

You are probably right that there is something else going on.  But the RAM disk seems to take a huge chunk out of the problem right off the bat.  A write time of 1.5 seconds for a 200 MB file adds up to a lot of time over the course of a day, and even that 600 MB/sec NVMe drive is a bottleneck compared to the RAM disk (0.33 seconds vs. 0.001 seconds): the RAM disk takes less than 1% of the time the NVMe drive takes.  That is a MASSIVE time gain over a full day's work.  That being said, there are other processes besides the writing of the undo file, and by no means does the RAM disk reduce times to a millisecond.  There is still a very slight delay, though less than a second, down from 2-3 seconds.  But it is below the threshold of intolerable, which is probably just short of 1 second.

 

I should also note that my files don't come close to 200 MB; my largest ever is 24 MB, and I still get slowdowns of up to 3 seconds per operation.  Closing all 3D views improves things a lot, but the RAM disk is smoothest and lets me keep 3D views open while I work.

 

Completely agree: 1 or 2 seconds is like an eternity when it comes to computer response time. As I mentioned, I'm curious about this and will download the software and give it a try.

