xisto Community
qwijibow

How To Help Reduce Disk Fragmentation

Recommended Posts

Today, the Microsoft Windows operating system is the only OS to suffer badly from disk fragmentation and to need constant maintenance with a disk defragmenter. There are 2 main reasons for this...

1) The file systems. NTFS and FAT32 are seriously outdated. ReiserFS, XFS and JFS would be much better solutions, but unfortunately Microsoft is planning on sticking with NTFS for Longhorn. There is nothing we can do about this.

2) The swap file!

First, a definition of the swap file (sometimes called virtual memory). All running programs need to be stored in RAM. To keep RAM free and maximise the number of programs you can have running, memory which hasn't been accessed for a while is removed from RAM and swapped onto the hard disk; when it is next needed, it is swapped back in. Basically, it allows you to use a much slower but larger hard disk as if it were RAM.

Now, the problem. The swap file is a single file which can often grow up to a gigabyte in size. This file is constantly growing, shrinking and moving, which causes it, and all the other files on your disk, to become fragmented. The swap file is a NIGHTMARE.

The solution. Other OSes like Unix, BSD and Linux have a simple yet highly efficient way to completely prevent this kind of fragmentation: they don't have a swap file, they have a swap partition. Unlike a file within the main file system, a partition is a separate area of the disk, completely unrelated to the main root filesystem.

In the old days, a swap partition was annoying because a partition always takes a fixed amount of space; it cannot grow or shrink like a file. So if you have a 1 GB swap partition, you will always have 1 GB missing from your disk, even if the swap is unused. Today, however, a single gigabyte is nothing; hard drives are massive.
The difference between a 40 GB and an 80 GB disk is about £10.

Implementing the solutions...

1) The easy solution would be simply to turn the virtual memory / swap off in the memory management section of the Control Panel. However, unless you have a lot of RAM, you run the risk of crashing if your memory is depleted.

2) Add a second hard disk, preferably on a different IDE channel, and use it purely for swap.

3) The super fun cool hack way... re-partition your Windows disk and add a swap partition. To do this without having to format the disk you will need a live CD called Knoppix (http://www.knoppix.org/). Burn the CD ISO to a CD-ROM and boot it. You need to do this because you cannot repartition a mounted disk. In the main menu once you have booted is a program called QTParted, a graphical frontend to a free partitioning tool called "parted". It's very simple to use: shrink your Windows partition by the amount of swap you want to use... 1 GB should be more than enough unless you do some hardcore multimedia editing. In the empty space at the end of your disk, add a new partition and format it as "vfat", which is another name for FAT32. FAT32 is faster than NTFS, and because the partition is only 1 GB in size, we don't need NTFS's ability to hold files greater than 4 GB.

Reboot Windows and, in the Control Panel, set the swap file's location to the newly created partition (probably labelled D:\). THEN boot Knoppix again and delete the old swap file from the Windows C:\ disk. THEN defragment both partitions. You should now notice that the main partition needs defragmenting less often.
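A toy simulation can show why the swap-file approach fragments while the partition approach doesn't (this is a sketch with made-up block counts, not real filesystem code; simple first-fit allocation stands in for whatever Windows actually does):

```python
# Toy disk model: interleave ordinary file writes with swap growth and
# count how many separate fragments the swap file ends up in.
DISK_BLOCKS = 64

def first_fit(disk, owner, n):
    """Allocate n free blocks to `owner`, first-fit (possibly non-contiguous)."""
    for i in range(len(disk)):
        if n == 0:
            break
        if disk[i] is None:
            disk[i] = owner
            n -= 1

def fragments(disk, owner):
    """Count contiguous runs of blocks belonging to `owner`."""
    runs, prev = 0, False
    for blk in disk:
        cur = (blk == owner)
        if cur and not prev:
            runs += 1
        prev = cur
    return runs

disk = [None] * DISK_BLOCKS
for step in range(8):
    first_fit(disk, f"file{step}", 4)   # a normal file appears
    first_fit(disk, "swap", 2)          # the swap file grows a little

print("swap-file fragments:", fragments(disk, "swap"))  # 8 separate runs
```

A dedicated swap partition, by contrast, is one contiguous region reserved up front, so it is always exactly one "fragment" no matter what the rest of the disk does.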

What is defragmentation anyway? Is it that files get split up all over the hard drive, so when one loads it takes much longer to find all the parts? And after I run disk defragmentation, the computer will run faster and have fewer errors. Am I correct?

Yup, absolutely. Defragmentation is the act of putting all those scattered file parts into ONE BIG CONTIGUOUS block so that your HDD head can read the whole file in ONE sweep, rather than jumping around all over the HDD to find the fragments - which of course costs a hell of a lot in read time. If you want the exact definition, I suggest you visit https://www.wikipedia.org/ and search for the term. ;)
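The idea is easy to see on a toy block map (a sketch, nothing OS-specific; "A" and "B" are made-up files and "." is free space):

```python
# File "A" starts out in three separate runs; after "defragmenting",
# all of its blocks are contiguous and can be read in one pass.
def runs(disk, name):
    """Count contiguous runs of `name`'s blocks on the toy disk."""
    count, prev = 0, False
    for b in disk:
        cur = (b == name)
        if cur and not prev:
            count += 1
        prev = cur
    return count

def defragment(disk, name):
    """Gather `name`'s blocks into one contiguous run at the front
    (real defragmenters shuffle in place, but the idea is the same)."""
    return [b for b in disk if b == name] + [b for b in disk if b != name]

disk = ["A", ".", "B", "A", ".", "A", "B", "."]     # "." = free block
print("before:", runs(disk, "A"))                   # 3 fragments
print("after: ", runs(defragment(disk, "A"), "A"))  # 1 fragment
```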

My personal favorite method of preventing disk fragmentation is something I'm using in the OS I'm writing: simply compact the files on disk when deleting a file. Or, if (like Windows) you have every file start and stop on specified boundaries, only save a file to a spot that has enough contiguous free boundaries (which Windows doesn't do).

I'm no expert... but I think that would seriously damage disk throughput performance. What filesystem are you planning on using? I've heard very good things about XFS, JFS, and ReiserFS.

I don't know yet, I'm still in the development stage. But yeah, constantly moving files will wear out the disk over a long period of time, though the period is so long that it won't cause noticeable problems for anyone who uses the disk. Basically it's a mini-defrag, except the only thing being moved is empty space, not chunks of files, so it takes much less time because no search needs to be done for the file chunks. Another advantage is that you leave no traces of the file that originally existed. My OS is actually going to have several deletes: one removes the file from the file tree, the second overwrites it with 0s, and the third does the mini-defrag. The user can pick a default, or pick one to use at any specific delete.
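For illustration, here is how the three delete modes described above might look on a toy block list (my reading of the post, with made-up names; this is not code from the OS in question):

```python
# Three hypothetical delete modes on a toy disk. "." marks a free block.
def delete_unlink(disk, name):
    """Mode 1: drop the file from the file tree -- its blocks become free."""
    return ["." if b == name else b for b in disk]

def delete_zero(disk, name):
    """Mode 2: overwrite the file's blocks with zeros first."""
    return ["0" if b == name else b for b in disk]

def delete_compact(disk, name):
    """Mode 3: the 'mini-defrag' -- remove the file and slide the
    remaining blocks down so all free space ends up in one run."""
    kept = [b for b in disk if b != name and b != "."]
    return kept + ["."] * (len(disk) - len(kept))

disk = ["A", "B", "A", ".", "C"]
print(delete_compact(disk, "A"))   # ['B', 'C', '.', '.', '.']
```

Note how mode 3 leaves the free space contiguous, which is exactly what prevents the next file written from being fragmented.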

I think in Windows you can configure the swap file to a fixed size. For example, set min = 1 GB and max = 1 GB. No changing size of pagefile.sys => no fragmentation, and no need to create a partition. -toot


qwijibow is terribly misinformed.

1. Linux, BSD, and Mac OS do suffer from a different, but equally serious, form of fragmentation called *disk* (free-space) fragmentation, as opposed to the *file* fragmentation seen in Windows. Ironically, this is a side effect of them trying to avoid file fragmentation. They avoid file fragmentation by having long read-ahead and write-behind times, meaning the system keeps a file in RAM much longer before finally writing it to the disk. This usually does keep the file from being fragmented, but it causes a problem when a large number of files are on the disk, as the free space gets split up into tiny little bits throughout the disk. This can cause some very serious problems when the disk begins getting full or when writing very large files to the disk at once.
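The free-space effect described above is easy to demonstrate on a toy block map (a sketch with made-up files, not a claim about any real allocator):

```python
# Six files are written contiguously; deleting every other one leaves
# the free space chopped into several small holes, which hurts later
# large writes -- that's free-space fragmentation.
disk = []
for i in range(6):
    disk += [f"f{i}"] * 4          # six files, 4 blocks each, contiguous

disk = ["." if b in ("f1", "f3", "f5") else b for b in disk]

free_runs = 0
prev_free = False
for b in disk:
    cur = (b == ".")
    if cur and not prev_free:
        free_runs += 1
    prev_free = cur

print("free-space holes:", free_runs)   # 3 separate holes
```

A new 12-block file now cannot be written contiguously even though 12 blocks are free, because no single hole is big enough.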

2. NTFS is by no means outdated. NTFS was one of the first 64-bit file systems (to my knowledge, all current filesystems except ZFS are also 64-bit); NTFS is journaled just like EXT3/4 and HFS+; NTFS supports on-disk compression (zip-style compression without actually making a .zip file), permissions built into the filesystem (something that haunts Mac OS - look up Mac permissions problems), quotas that EXT3 (I don't know about 4) and HFS+ don't have, and variable cluster size.

Also, in qwijibow's swapfile hack, he states that FAT32 is faster than NTFS; this is false. NTFS is generally faster, and also safer in the event of a power outage or virus attack.

3. Fragmentation in Vista and newer isn't nearly as prolific as it is in XP if the system is configured to stop disk buffer flushing. The disk buffer is a section of high-speed but temporary memory used to hold data that is being read from or written to the disk. As a safety measure to avoid corruption in the event of a power outage, Windows occasionally empties the buffer and writes it all to the disk. This can increase fragmentation, however, by giving the system insufficient time to find empty space. Changing this in Vista+ is very simple: Start > right click "Computer" > Manage > (allow UAC if it asks) > Device Manager > expand "Disk Drives" > right click the hard drive and select Properties > open the Policies tab > select (in Vista) "Enable enhanced performance" or (in Windows 7) "Turn off Windows write-cache buffer flushing" (sorry if I'm a little off on the names, I'm doing this from memory).

4. Cluster size can significantly change the amount of fragmentation a disk suffers from. A cluster is a segment of the disk used to store a file's data; however, each cluster can only hold one file (or part of one file), so a cluster size that's too big wastes space. For example: your disk has a 16-kilobyte cluster size (the NTFS default is actually 4 kilobytes, I think) and you have a text file that's only 3 kilobytes; 13 kilobytes of space has now been wasted in that cluster. On the flip side of this argument, a cluster size that is far too small (NTFS can go down to 512 bytes if memory serves) will greatly increase the number of clusters any single file takes up, and thus increase its chances of being fragmented. There is, to my knowledge, no way to change the cluster size during the installation of Windows, so it's advised that you prepare the disk ahead of time with a utility like Partition Magic.
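The worked example above can be checked with a few lines of arithmetic (the 4 KB NTFS default mentioned is the post's recollection; the snippet just does the slack math):

```python
# "Slack" is the wasted tail of the last cluster: a file always
# occupies whole clusters, so file sizes get rounded up.
import math

def slack_bytes(file_size, cluster_size):
    """Bytes wasted when file_size is stored in whole clusters."""
    clusters = max(1, math.ceil(file_size / cluster_size))
    return clusters * cluster_size - file_size

KB = 1024
print(slack_bytes(3 * KB, 16 * KB))  # 13 KB (13312 bytes) wasted, as in the example
print(slack_bytes(3 * KB, 4 * KB))   # only 1 KB wasted with a 4 KB cluster
```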

5. The swapfile (known in the Windows world as the pagefile or virtual memory). The author is kind of bending the truth here. With a swap file that is too small, data in RAM would need to be written to the disk more often, which increases the chance of fragmentation. The swap file is by default a variable size, and this isn't the best setting to leave it on. I'll show you how to change the swapfile's initial and maximum size for XP (Vista and 7 users, improvise): Start > right click My Computer > Properties (make note of how much RAM your system has) > Advanced tab > under the Performance section, click Settings > Advanced tab > under the Virtual Memory section, click Change > tick Custom Size.

Before you enter a number, you need to know how much RAM you have. As a rule of thumb, XP's virtual memory should be about 1.5 times the amount of RAM the system has, unless your system has 1.5 gigabytes of RAM or more. If your system does have 1.5 GB or more of RAM, then the virtual memory should be a 1-to-1 ratio. For example:

756 Megabytes of RAM = about 1134 Megabytes of Virtual Memory (1.5 times)

2 Gigabytes of RAM = about 2048 Megabytes of Virtual Memory (1 to 1)

To prevent the system from constantly needing to resize the swapfile (on XP, the file is actually called pagefile.sys), make the initial and maximum sizes the same.
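That rule of thumb can be written as a small function (my formalization of the post's advice, not an official Microsoft formula):

```python
# Pagefile sizing rule of thumb from the post: 1.5x RAM below 1.5 GB
# of RAM, 1:1 at or above 1.5 GB. All values in megabytes.
def pagefile_mb(ram_mb):
    """Suggested pagefile size (also the initial AND maximum size)."""
    return int(ram_mb * 1.5) if ram_mb < 1536 else ram_mb

print(pagefile_mb(756))    # 1134 MB for the 756 MB example above
print(pagefile_mb(2048))   # 2048 MB for a 2 GB machine
```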

6. Moving your pagefile off your system disk can increase performance, but the author needs some correction.

a. Splitting a disk that holds an NTFS partition into multiple partitions is not recommended; this will actually reduce overall performance. Leave the disk as one big NTFS partition if you can.

b. Moving the pagefile off the system disk is a good idea if you have the option. It should be moved to an NTFS partition on a different hard drive that is connected to your motherboard with its own cable. Placing the pagefile on a separate disk that's on the same ribbon cable as the system disk won't improve performance much at all. In fact, using one ribbon cable for more than one drive isn't recommended at all.

c. DO NOT delete pagefile.sys off your system disk even if you make another one. Go into Advanced system properties to change the size (as I described above), select your system's C: drive, tick "No paging file" and restart. Deleting the pagefile without doing it through Windows can severely cripple your system, because Windows will attempt to find pagefile.sys during its early boot sequence, and when it can't find it, it'll give you a BSOD - and safe mode won't work without a pagefile.sys either.

7. Pagefile fragmentation can be easily solved. Download a program called PageDefrag (this is for XP only) from the Sysinternals website. Extract all the files from the zip to the root of your C: drive. Open pagedfrg.exe, tick "Defragment Every Boot", and change the defrag abort countdown to a lower number (you pick).

Restart, and ta-da: the pagefile and many other system files that can't be defragged while the system is running are now optimized.

8. There are some simple ways to significantly speed up disk access if it's your machine's main slowdown. Two easy registry tweaks:

a. Disable DOS file naming (don't do this if you use *very* old programs or any Norton program).

Start > Run > type regedit and press Enter > open HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem.

If an entry named NtfsDisable8dot3NameCreation already exists, skip this part. Right click a blank space in the right pane > New > DWORD Value. Name the new value "NtfsDisable8dot3NameCreation" without quotes.

Right click the value NtfsDisable8dot3NameCreation and select Modify, in Value Data put "1" without quotes.

Follow these same steps with a DWORD Value called "NtfsDisableLastAccessUpdate" and NTFS will stop tracking each file's last access date. This can sometimes keep certain defragment programs and antivirus utilities from working well, but by and large it works fine.
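If you'd rather not edit the registry by hand, both tweaks above can also be merged from a .reg file like this (a sketch; back up your registry first and double-check the value names against your own system before merging):

```reg
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem]
"NtfsDisable8dot3NameCreation"=dword:00000001
"NtfsDisableLastAccessUpdate"=dword:00000001
```

Save it with a .reg extension, double-click it to merge, then restart.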

Restart.

I could go into many more ways to squeeze a little extra performance out of NTFS or Windows, but I think this post is long enough.

-reply by Link9454
