rayzoredge

Advantages Of A 64-bit Operating System And Software?


Not a lot of people know about the advantages AND disadvantages of 64-bit software. With 64-bit availability becoming more widespread through Linux, Vista, Windows 7, and Leopard, I think we should discuss a bit more about what 64-bit brings to the table of computing.

A lot of us "know" that 32-bit Windows can't support more than 3.5GB (or 4GB, depending on your sources) of RAM. (By "know" I mean "agree," and by "agree" I mean I-read-this-on-Wiki-and-a-ton-of-our-peers-say-so. :P ) A search for the advantages of 64-bit usually yields this one basic advantage, but it leaves us in the dark about everything else that 64-bit brings to the table. I dug a little deeper with my Google-fu and found some comments and scatterings of knowledge on the Web about the lesser-known facts. And yes, I'm only putting out what I "know," which really comes down to "what I agreed with" or "what I read." :)

One perspective a lot of us overlook is that although 64-bit operating systems support more than 4GB of RAM, most of the software we use is still 32-bit. Even so, the 4GB addressing limit of 32-bit software isn't as "bad" as it might seem, because it applies per application rather than to the whole machine when that 32-bit application runs on a 64-bit operating system. Since the realistic scenario for most of us is running multiple applications at once, on top of whatever the operating system reserves for itself, and each application and process is "limited" to its own 4GB... well, you get the idea, right? If you have 16GB of RAM and 1GB of it is taken by the operating system and background processes, you'd have 15GB available for a 64-bit program, or you could run a million tabs in 32-bit Firefox, let Firefox take up to 4GB on its own, and still have 11GB left over for other applications, 32-bit or 64-bit.

Another thing that's hidden in the dark is that 64-bit software is supposedly more secure. Why? Most malware and malicious code is written for 32-bit software (i.e. Windows XP), and apparently writing 64-bit code is a bit more difficult, since you have to work with 64-bit values instead of 32-bit ones. Encryption, in this sense, should also be more effective. Which brings some cons to the mix...

... that 64-bit software is also a bit more difficult to write. And since you are now working with 64-bit values instead of 32-bit ones, 64-bit software can actually run slower than 32-bit software: addressing a 64-bit address space doubles the size of your memory pointers compared to 32-bit, requiring more memory and cache to move twice as much data around. (However, this can also be a good thing, as the program is now able to address far more, if you look at it from that perspective.) If the 32-bit software you are running today never needs to touch numbers past 4 billion, or cannot exploit the extra width, then there is no software advantage for that particular program in going 64-bit. Throw in the fact that 64-bit driver support is rather lacking, and you can see why the race to go 64-bit is at a crawl.

Please correct my arguments if I am wrong...
I'd like to get more of an understanding as to how 64-bit computing can help AND hamper how we do things today.


... that 64-bit software is also a bit more difficult to write. And since you are now working with 64-bit values instead of 32-bit ones, 64-bit software can actually run slower than 32-bit software: addressing a 64-bit address space doubles the size of your memory pointers compared to 32-bit, requiring more memory and cache to move twice as much data around.

I've never had this problem. I don't even know if one can "design" for 64-bit. All the C++ programs I've made compiled fine for either without any change in code, and I've never noticed any decrease in performance. If I'm not mistaken, all you need to do to make a program "64-bit" is to compile it on a 64-bit system. For heavy processing, you may notice that a 64-bit system performs better. By "heavy processing," I mean things like scientific calculations and video production. NASA would (or should) benefit from 64-bit, though it may cost them a lot of money to replace parts.
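To put that in concrete terms, here's a minimal sketch (assuming a GCC or Clang toolchain with 32-bit multilib support; the file name is made up): the same C++ source builds as either a 32-bit or a 64-bit binary with nothing but a compiler flag, and it can report which one it ended up as.

// same_source.cpp - identical code, two targets:
//   g++ -m32 same_source.cpp -o demo32   (32-bit build)
//   g++ -m64 same_source.cpp -o demo64   (64-bit build)
#include <cstdint>
#include <iostream>

int main() {
    // The pointer width is what actually changes between the two builds.
    std::cout << "Pointer size: " << sizeof(void*) * 8 << " bits\n";
    std::cout << "Largest addressable offset: " << SIZE_MAX << " bytes\n";
    return 0;
}

The 32-bit build prints 32 bits and a touch under 4.3 billion bytes; the 64-bit build prints 64 bits and an astronomically larger number, with the source untouched.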


I've never had this problem. I don't even know if one can "design" for 64-bit. All the C++ programs I've made compiled fine for either without any change in code, and I've never noticed any decrease in performance. If I'm not mistaken, all you need to do to make a program "64-bit" is to compile it on a 64-bit system. For heavy processing, you may notice that a 64-bit system performs better. By "heavy processing," I mean things like scientific calculations and video production. NASA would (or should) benefit from 64-bit, though it may cost them a lot of money to replace parts.

I don't code or program personally so I wouldn't really know how difficult it is to "write" software for 64-bit.
This is how I understood this concept:

A 32-bit piece has up to 32 "slots" that data can fill; the same concept applies to 64-bit. A 64-bit piece can fit two full 32-bit pieces, but not every 32-bit piece is actually filled to the brim, so there are usually slots left open. Therefore, 32-bit pieces can fit code more efficiently, without as much open-slot bloat. Maybe I've got that school of thought mixed up...

Your definition of heavy processing would be one of the main reasons why 64-bit processing trumps 32-bit processing, as I'm sure NASA, other scientific work, and large media files would surpass the 4-billion mark that 32-bit integers are limited to. (But I'm sure they actually have custom software and hardware that takes advantage of 128-bit processing... yes, no?)

The supposed slowdown with native 32-bit applications turned 64-bit would, I think, come from dealing with larger pieces of data at a time on the "larger" architecture, work that would have been handled more efficiently on a 32-bit system, if we apply my open-slot/bloat idea above. (Then again, I'm probably wrong on that whole concept to begin with.) Sending 64-bit pieces of data to RAM will fill up RAM twice as quickly as 32-bit (each piece takes up a larger memory space), so although 64-bit can support more than 4GB of RAM, it may not use it as efficiently as 32-bit would? (i.e. with 1023 bits of RAM available, only 15 pieces of 64-bit data can be addressed as opposed to 31 pieces of 32-bit data, which works out to 32-bit being a wee bit more than 3% more efficient)


Clicky

 

Above is a link to an article talking about the advantage (so far) of 64-bit gaming. Apparently, with newer games that are very taxing on your system *coughCrysiscough*, the executables themselves are running into problems addressing more than 2GB of memory. If you throw in the sheer amount of detail, textures, and other graphical doodads that make eye candy in games glorious, you can probably see how this is going to be a problem AND a limiting factor that will push game developers onto 64-bit platforms, unless they stick with simpler, less-demanding applications or figure out a way to work around the issue without resorting to large address requirements.

 

What this means, basically, is that games running in 32-bit mode or natively in 32-bit are bound to crash from lack of memory address space, which is where the magical number of 2GB comes into play. The article gives examples from games as old as Command & Conquer: Generals, where I can imagine 8 players building tons of buildings, piling on defenses, and constructing massive armies before throwing up their hands in frustration when the executable runs into the limits of a 32-bit address space and can't do anything anymore. :) Even with Crysis's map editor, since users are bound to create lush, large maps full of detail, it's almost a given that the address space will be exhausted, so the developers have made it a 64-bit application, only available to be hosted on a 64-bit server.

 

Throw this into the pro list of having a 64-bit system. :P
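For anyone who wants to see that ceiling first-hand, here's a rough sketch (not from the article; the 256MB block size is arbitrary): keep allocating until the allocator gives up. A 32-bit build typically fails somewhere under 2GB of usable address space, while a 64-bit build keeps going until physical RAM and swap run out, so be careful running it.

// Grab memory until the process hits its address-space or RAM limit.
#include <cstddef>
#include <iostream>
#include <new>
#include <vector>

int main() {
    const std::size_t block = 256u * 1024 * 1024;  // 256MB per allocation
    std::vector<char*> blocks;
    try {
        for (;;) {
            char* p = new char[block];
            // Touch each page so the OS actually commits the memory.
            for (std::size_t i = 0; i < block; i += 4096) p[i] = 1;
            blocks.push_back(p);
            std::cout << "Holding " << blocks.size() * 256 << " MB\n";
        }
    } catch (const std::bad_alloc&) {
        std::cout << "Allocation failed at ~" << blocks.size() * 256 << " MB\n";
    }
    for (char* p : blocks) delete[] p;  // clean up before exiting
    return 0;
}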


... One perspective a lot of us overlook is that although 64-bit operating systems support more than 4GB of RAM, most of the software we use is still 32-bit. Even so, the 4GB addressing limit of 32-bit software isn't as "bad" as it might seem, because it applies per application rather than to the whole machine when that 32-bit application runs on a 64-bit operating system. Since the realistic scenario for most of us is running multiple applications at once, on top of whatever the operating system reserves for itself, and each application and process is "limited" to its own 4GB... well, you get the idea, right? If you have 16GB of RAM and 1GB of it is taken by the operating system and background processes, you'd have 15GB available for a 64-bit program, or you could run a million tabs in 32-bit Firefox, let Firefox take up to 4GB on its own, and still have 11GB left over for other applications, 32-bit or 64-bit.

When you say that 32-bit software cannot address more than 4GB of RAM, it is not talking about physical memory. It is simply talking about "addressable" memory. Let me give you an example. Let's say you only have 512MB of RAM in your computer. That 32-bit program can still generate addresses across all 4GB of its address space. What happens is that the addresses are mapped to actual memory slots, so while the program thinks it is reading memory at the 3.5GB mark, it is in fact reading it from the slot at 384MB. Now, while you are correct that the operating system reserves a chunk of memory for itself, this is also in virtual land. Operating systems tend to prohibit applications from addressing the first 512MB-1GB of memory space, so in reality your program is only addressing ~2-3GB of memory space. The reason for the 4GB limitation is that 4GB = 4,294,967,296 bytes = 2^32 (hence 32 bits).

 

A 64-bit address space, accordingly, is 2^64 bytes, which works out to roughly 17 billion GB (about 16 exabytes). So, quite a bit.
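The arithmetic is easy to double-check (a quick sketch; since 2^64 itself overflows a 64-bit integer, the second figure is computed in GiB, i.e. 2^64 / 2^30 = 2^34):

// Sanity-checking the address-space figures: 2^32 bytes vs 2^64 bytes.
#include <cstdint>
#include <iostream>

int main() {
    const std::uint64_t one_gib = 1ull << 30;      // bytes in 1 GiB
    const std::uint64_t space32 = 1ull << 32;      // 2^32 bytes
    const std::uint64_t space64_gib = 1ull << 34;  // (2^64 bytes) / (2^30 bytes per GiB)

    std::cout << "32-bit: " << space32 / one_gib << " GiB\n";  // prints 4
    std::cout << "64-bit: " << space64_gib << " GiB\n";        // prints 17179869184
    return 0;
}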

 

Again, I want to point out that all these calculations are only in software. The hardware is limited by what is physically present. How do systems deal with many simultaneously open programs? They use virtual memory (swap space on Linux). So your example with Firefox is not technically correct in terms of addressing. If a program is actually taking up 4GB of RAM and you have more than that, the operating system will decide whether to keep the information in RAM, put it into virtual memory, or write it out to disk and reload it on demand.

 

Another thing that's hidden in the dark is that 64-bit software is supposedly more secure. Why? Most malware and malicious code is written for 32-bit software (i.e. Windows XP), and apparently writing 64-bit code is a bit more difficult, since you have to work with 64-bit values instead of 32-bit ones. Encryption, in this sense, should also be more effective. Which brings some cons to the mix...

 

... that 64-bit software is also a bit more difficult to write. And since you are now working with 64-bit values instead of 32-bit ones, 64-bit software can actually run slower than 32-bit software: addressing a 64-bit address space doubles the size of your memory pointers compared to 32-bit, requiring more memory and cache to move twice as much data around. (However, this can also be a good thing, as the program is now able to address far more, if you look at it from that perspective.) If the 32-bit software you are running today never needs to touch numbers past 4 billion, or cannot exploit the extra width, then there is no software advantage for that particular program in going 64-bit. Throw in the fact that 64-bit driver support is rather lacking, and you can see why the race to go 64-bit is at a crawl.


64-bit programs are not harder to write at all, with one caveat: those written in assembly language or really low-level C (which is almost the same as assembly). Assembly language is basically a human-readable version of the instruction codes sent to the CPU to perform various operations. If a program does something called "pointer magic" - directly manipulating the pointers that hold memory addresses - the programmer has to be careful to take both 32-bit and 64-bit memory spaces into account. For the vast majority of programs, however, it is simply a matter of compiling with the correct options.
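A concrete (and entirely hypothetical) example of the kind of pointer magic that bites: stuffing a pointer into a plain unsigned int happens to work on 32-bit targets but throws away half the address on 64-bit ones; std::uintptr_t is the type that is guaranteed to be wide enough on both.

// The classic 32-bit habit that breaks on a 64-bit build.
#include <cstdint>
#include <iostream>

int main() {
    int value = 42;
    int* p = &value;

    // Old habit: squeeze the address into a 32-bit integer. On a 64-bit
    // target this silently drops the upper 32 bits of the address.
    unsigned int squeezed = static_cast<unsigned int>(reinterpret_cast<std::uintptr_t>(p));

    // Portable version: uintptr_t is defined to hold a full pointer on
    // either architecture.
    std::uintptr_t full = reinterpret_cast<std::uintptr_t>(p);

    std::cout << std::hex << "squeezed: 0x" << squeezed << "\n"
              << "full:     0x" << full << "\n";
    return 0;
}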

 

BTW, there is such a thing as designing programs for 64-bit processors. These are usually extremely computationally expensive programs that need to squeeze out every available iota of performance. Examples would be SETI, mathematical libraries, the operating system kernel, etc. The technique is to take advantage of the longer word and address sizes by coding the low-level parts specifically for a certain architecture (AMD Barton CPUs, Intel Xeon CPUs, etc.).
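As a rough illustration of what coding the low-level parts for a certain architecture can look like (the macros are the usual GCC/MSVC ones; the branch bodies here are only placeholders, not real tuned kernels):

// Compile-time dispatch on the target architecture.
#include <iostream>

double sum(const double* data, int n) {
#if defined(__x86_64__) || defined(_M_X64)
    // 64-bit build: pretend this branch is the hand-tuned version
    // (extra registers, wider vectors, two accumulators, etc.).
    double a = 0.0, b = 0.0;
    int i = 0;
    for (; i + 1 < n; i += 2) { a += data[i]; b += data[i + 1]; }
    if (i < n) a += data[i];
    return a + b;
#else
    // Generic fallback for everything else, including 32-bit builds.
    double a = 0.0;
    for (int i = 0; i < n; ++i) a += data[i];
    return a;
#endif
}

int main() {
    const double d[] = {1.0, 2.0, 3.0, 4.0, 5.0};
    std::cout << sum(d, 5) << "\n";  // 15
    return 0;
}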

 

I don't code or program personally so I wouldn't really know how difficult it is to "write" software for 64-bit.

 

This is how I understood this concept:

 

A 32-bit piece has up to 32 "slots" that data can fill; the same concept applies to 64-bit. A 64-bit piece can fit two full 32-bit pieces, but not every 32-bit piece is actually filled to the brim, so there are usually slots left open. Therefore, 32-bit pieces can fit code more efficiently, without as much open-slot bloat. Maybe I've got that school of thought mixed up...

 

Your definition of heavy processing would be one of the main reasons why 64-bit processing trumps 32-bit processing, as I'm sure NASA, other scientific work, and large media files would surpass the 4-billion mark that 32-bit integers are limited to. (But I'm sure they actually have custom software and hardware that takes advantage of 128-bit processing... yes, no?)

 

The supposed slowdown with native 32-bit applications turned 64-bit would, I think, come from dealing with larger pieces of data at a time on the "larger" architecture, work that would have been handled more efficiently on a 32-bit system, if we apply my open-slot/bloat idea above. (Then again, I'm probably wrong on that whole concept to begin with.) Sending 64-bit pieces of data to RAM will fill up RAM twice as quickly as 32-bit (each piece takes up a larger memory space), so although 64-bit can support more than 4GB of RAM, it may not use it as efficiently as 32-bit would? (i.e. with 1023 bits of RAM available, only 15 pieces of 64-bit data can be addressed as opposed to 31 pieces of 32-bit data, which works out to 32-bit being a wee bit more than 3% more efficient)


I have never heard the slots idea before, so I am not sure what it is referring to. CPUs have registers that are a certain size, and operations exist to move data of different sizes (8, 16, 32, 64 bits, usually) into those registers. The reason 32-bit programs are slow is that the program is already compiled for a 32-bit address space, so the operating system has to translate its memory calls before retrieving the information from RAM or the hard disk. It may not seem like a lot, but it gives a pretty noticeable performance hit. This is only true when running a 32-bit program on a 64-bit system, obviously.
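For reference, those sizes map directly onto the fixed-width types (a trivial check; the output is the same on a 32-bit or 64-bit build except for the pointer line):

// Data still comes in 8/16/32/64-bit pieces no matter which CPU runs it.
#include <cstdint>
#include <iostream>

int main() {
    std::cout << "int8_t:  " << sizeof(std::int8_t)  * 8 << " bits\n"
              << "int16_t: " << sizeof(std::int16_t) * 8 << " bits\n"
              << "int32_t: " << sizeof(std::int32_t) * 8 << " bits\n"
              << "int64_t: " << sizeof(std::int64_t) * 8 << " bits\n"
              << "pointer: " << sizeof(void*) * 8 << " bits\n";
    return 0;
}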

 

As for 32-bit programs that are recompiled as 64-bit and then slow down, this has to do with naivete on the programmer's part to a certain degree. Basically, things like array access and memory reads need to be laid out so that memory access is as sequential and well-aligned as possible. In addition, compilers for 64-bit systems do not optimize as aggressively as they do for 32-bit systems, though this is changing rapidly and will no longer be true very soon (if not already). Most reports of this type are anecdotal and ignore the fact that programs written to take advantage of 64-bit systems (CAD programs, graphics programs, video editors, etc.) run faster and get a nice boost on a 64-bit system versus a 32-bit one.
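One visible face of the alignment issue is data layout: a struct that mixes pointers with smaller fields grows more than you might expect on a 64-bit build, because every pointer doubles in size and drags 8-byte alignment padding along with it (a small sketch; exact numbers depend on the compiler and ABI, but these are the typical x86/x86-64 results):

// Same members, different order, different padding.
#include <iostream>

struct Sloppy {
    char  flag;   // 1 byte, then padding so the pointer is aligned
    void* ptr;    // 4 bytes on 32-bit, 8 bytes on 64-bit
    char  other;  // 1 byte, then trailing padding
};

struct Tidy {
    void* ptr;    // biggest member first
    char  flag;
    char  other;  // the two chars share one alignment slot
};

int main() {
    // Typical sizes: 12 vs 8 bytes when built 32-bit, 24 vs 16 when built 64-bit.
    std::cout << "Sloppy: " << sizeof(Sloppy) << " bytes\n"
              << "Tidy:   " << sizeof(Tidy)   << " bytes\n";
    return 0;
}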

 

Hope this helps,

 

z.


Thanks Z. Your post clears up a heck of a lot of concepts that I was probably interpreting incorrectly.

So in your first bit, what you're saying about applications is that they "always" address ~4GB of memory, constrained by the physical memory at hand? I suppose that if I attempted a task that took ~4GB of memory and I only had ~2GB, the application would work with the first 2GB chunk, and after swapping is done and the data is processed for that first half, the second 2GB chunk would go through, be swapped and processed, and then the two chunks would be assembled together (or go through any other additional processing) to create the resulting data? (I guess another example of how this works in my head: if I ran an image through a filter, Photoshop would work with as much image data as possible - let's say the top half of the image - process it, then repeat the process with any remaining data?)

In the case of a 32-bit system, applications would be slower after reaching the ~4GB limitation because then the program would have to access virtual memory (page file from the hard disk), correct? If that's the case, I can see how the limitation of being able to address no more than ~4GB would be a tremendous performance hit for memory-heavy programs... but then again, outside of CAD, heavy audio/video processing, or any other memory-heavy programs (such as the latest games), I don't see anyone who would benefit from 64-bit capability.

Again, playing off this first bit, our company is thinking of moving from a 32-bit version of Microsoft Small Business Server 2003 to a 64-bit version. I am not familiar with the specifics, but given what we've discussed, would a 64-bit operating system on the server even matter for basic tasks such as serving e-mail, hosting files, working with files on the server, etc.? (In the case of working with files, isn't a temporary version of the file created locally, which then taxes the local client with anything you do to the file before saving it back onto the server?) If I don't work at NASA, should I even be excited about any possible performance gains of a new server intended for serving e-mail and files?

I have never heard the slots idea before, so I am not sure what it is referring to. CPUs have registers that are a certain size, and operations exist to move data of different sizes (8, 16, 32, 64 bits, usually) into those registers. The reason 32-bit programs are slow is that the program is already compiled for a 32-bit address space, so the operating system has to translate its memory calls before retrieving the information from RAM or the hard disk. It may not seem like a lot, but it gives a pretty noticeable performance hit. This is only true when running a 32-bit program on a 64-bit system, obviously.

You captured what I was trying to say with my slot concept... I was trying to simplify it for clarity, but apparently I only confused things instead. :) Your explanation works; I assume the memory calls would be the 8/16/32/64-bit pieces of data I was referring to being moved into the registers (my "slots").


Thanks Z. Your post clears up a heck of a lot of concepts that I was probably interpreting incorrectly.
So in your first bit, what you're saying about applications is that they "always" address ~4GB of memory, constrained by the physical memory at hand? I suppose that if I attempted a task that took ~4GB of memory and I only had ~2GB, the application would work with the first 2GB chunk, and after swapping is done and the data is processed for that first half, the second 2GB chunk would go through, be swapped and processed, and then the two chunks would be assembled together (or go through any other additional processing) to create the resulting data? (I guess another example of how this works in my head: if I ran an image through a filter, Photoshop would work with as much image data as possible - let's say the top half of the image - process it, then repeat the process with any remaining data?)


That's right. This is why a lot of data-heavy applications are written so that they take a chunk of data (say, a chunk equal to half of physical RAM), subject it to a whole bunch of transformations, then write out the final result before going on to the next chunk. That prevents what is known as cache thrashing or swap thrashing, whose main symptom is continual access to the hard drive while performing a computationally intensive operation.

In the case of a 32-bit system, applications would be slower after reaching the ~4GB limitation because then the program would have to access virtual memory (page file from the hard disk), correct? If that's the case, I can see how the limitation of being able to address no more than ~4GB would be a tremendous performance hit for memory-heavy programs... but then again, outside of CAD, heavy audio/video processing, or any other memory-heavy programs (such as the latest games), I don't see anyone who would benefit from 64-bit capability.

When using a 32-bit system, if you have a data file that is larger than 4GB, you would open up parts of the file at a time to work on. This requires more care, as you sometimes have to worry about offsets into the file, ensuring proper updates when the data in the file is changed, etc. (depends on programming language, platform, etc.).
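A bare-bones sketch of that pattern (the 64MB chunk size and the checksum "work" are placeholders; a real program would transform each chunk and write results out before moving on): read a file that may be far larger than RAM sequentially, one fixed-size chunk at a time, keeping track of the offset as you go.

// chunked.cpp - process an arbitrarily large file in fixed-size chunks.
#include <cstddef>
#include <cstdint>
#include <fstream>
#include <iostream>
#include <vector>

int main(int argc, char** argv) {
    if (argc < 2) { std::cerr << "usage: chunked <file>\n"; return 1; }

    const std::size_t chunk_size = 64u * 1024 * 1024;  // 64MB per pass
    std::ifstream in(argv[1], std::ios::binary);
    if (!in) { std::cerr << "cannot open " << argv[1] << "\n"; return 1; }

    std::vector<char> buffer(chunk_size);
    std::uint64_t offset = 0, checksum = 0;

    while (in) {
        in.read(buffer.data(), static_cast<std::streamsize>(buffer.size()));
        const std::streamsize got = in.gcount();
        if (got <= 0) break;
        for (std::streamsize i = 0; i < got; ++i)  // the "work": a running checksum
            checksum += static_cast<unsigned char>(buffer[i]);
        offset += static_cast<std::uint64_t>(got);
    }
    std::cout << "processed " << offset << " bytes, checksum " << checksum << "\n";
    return 0;
}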

The programs you mention are the ones that gain the most from 64-bit capability, except for games, which are a mixed bag. Some games that are built for both 32- and 64-bit CPUs seem to do better in 32-bit mode (a lot of APIs are optimized to work with 32-bit / 4-byte data formats, so it makes sense). The reason to move to 64-bit CPUs is to future-proof yourself so you don't have to deal with the hassle later, when 64-bit versions of operating systems and applications are more advanced than their 32-bit counterparts. Also, people (me included) love owning the latest and greatest :).

Again, playing off this first bit, our company is thinking of moving from a 32-bit version of Microsoft Small Business Server 2003 to a 64-bit version. I am not familiar with the specifics, but given what we've discussed, would a 64-bit operating system on the server even matter for basic tasks such as serving e-mail, hosting files, working with files on the server, etc.? (In the case of working with files, isn't a temporary version of the file created locally, which then taxes the local client with anything you do to the file before saving it back onto the server?) If I don't work at NASA, should I even be excited about any possible performance gains of a new server intended for serving e-mail and files?

If the move involves having to update hardware, then you will most likely see an improvement in speed (maybe not much, but still). 64-bit operating systems, as opposed to applications, have traditionally differed greatly from their 32-bit counterparts. In fact, for a long time (back when DEC was making Alpha chips in the 90's), 64-bit operating systems were required for running a "real" server. They handled multi-tasking better and were designed specifically for server environments as opposed to general computing, emphasizing multi-user environments, better file locking mechanisms, ability to stay running for months (years, in some cases) without a reboot, rock-solid stability, and so on.

The 64-bit version of MSBS 2003 will be able to address more memory. This, in and of itself, may provide a boost if that was ever a bottleneck in the past. Also, file serving will probably be a little faster, though by how much I couldn't imagine.

As for working with the files, it depends on what you are doing. You are correct in that most clients will work with a local copy for changes before a save is committed to the server. This is not true for really thin clients (ones that don't even have a hard drive and are logging directly into the server to work off of), but most companies do not use those.

I'd say you should be excited unless the company is laying people off in order to pay for the upgrades. Most importantly, if it is a public company or has to justify itself to private investors / venture capitalists, moving from 32-bit to 64-bit systems is a great way to throw a bunch of buzzwords at people who are expecting a return on their investment. This works for employees as well. While real gains may be nonexistent, it will at least ensure a good stock price and flow of money into the company, as well as a belief that the company is "heading in the right direction."

Regards,

z.


Nice information in here. I wanted to move to Vista 64-bit... or XP 64-bit, but my Wi-Fi card isn't compatible with 64-bit. :) I think there are still a lot of games that supposedly don't run on 64-bit, but I thought anything 32-bit can still run on 64-bit, just "emulated"? And if so, that would mean the virii and stuff could still run.


If the move involves having to update hardware, then you will most likely see an improvement in speed (maybe not much, but still). 64-bit operating systems, as opposed to applications, have traditionally differed greatly from their 32-bit counterparts. In fact, for a long time (back when DEC was making Alpha chips in the 90's), 64-bit operating systems were required for running a "real" server. They handled multi-tasking better and were designed specifically for server environments as opposed to general computing, emphasizing multi-user environments, better file locking mechanisms, ability to stay running for months (years, in some cases) without a reboot, rock-solid stability, and so on.
The 64-bit version of MSBS 2003 will be able to address more memory. This, in and of itself, may provide a boost if that was ever a bottleneck in the past. Also, file serving will probably be a little faster, though by how much I couldn't imagine.

As for working with the files, it depends on what you are doing. You are correct in that most clients will work with a local copy for changes before a save is committed to the server. This is not true for really thin clients (ones that don't even have a hard drive and are logging directly into the server to work off of), but most companies do not use those.

I'd say you should be excited unless the company is laying people off in order to pay for the upgrades. Most importantly, if it is a public company or has to justify itself to private investors / venture capitalists, moving from 32-bit to 64-bit systems is a great way to throw a bunch of buzzwords at people who are expecting a return on their investment. This works for employees as well. While real gains may be nonexistent, it will at least ensure a good stock price and flow of money into the company, as well as a belief that the company is "heading in the right direction."



I was speaking strictly from a software perspective, although the changes will involve a hardware upgrade (a replacement server). My question is still about any performance gain from going 32-bit to 64-bit for basic server tasks... and I can hardly imagine that Exchange serving, file serving, and other basic duties would crush a system with 32-bit limitations and make 64-bit an obvious performance-related upgrade (not counting the future-proofing factor). I'm just wondering in general whether basic server tasks need a 64-bit operating system to run efficiently.

I think there are still a lot of games that supposedly don't run on 64-bit, but I thought anything 32-bit can still run on 64-bit, just "emulated"? And if so, that would mean the virii and stuff could still run.

That's a great point, I think. You would think that 64-bit operating systems would be immune to 32-bit malware, but in my head, malware could only affect the OS if you actually allowed it to run, and even then I wouldn't expect it to affect the targeted system the way it would a native 32-bit system, since the code was maliciously designed for a 32-bit environment (unless it was malware that installs fake anti-virus software, which would simply run in 32-bit emulation). I could be wrong, though... it probably wouldn't be able to do the same damage as it would in a 32-bit environment, and even if it did, it wouldn't work in the same fashion, unless you [or the code] gave full emulation to every bit of the Trojan/spyware/malware.

