rayzoredge

Is Osx/linux More Efficient With Hardware Than Windows?


I was talking to someone about Mac OSX the other day, and he was explaining why OSX was so much better than Windows thanks to the UNIX core. (I would imagine that OSX and Linux are pretty much interchangeable core-wise, so there's no comparison there.) One of the prime things that he was telling me about was that OSX directly "talks" to the hardware as opposed to utilizing drivers, which Windows primarily uses to communicate with hardware. Is this true? Do Tiger/Leopard/Snow Leopard as well as any Linux distribution make the most use out of the hardware you have thanks to this ability of direct communication?


I wouldn't necessarily say that these Unix-like operating systems are pretty much interchangeable core-wise, but I think this can be better answered if you were to do research on the kernels that these operating systems use. The Mac OS X kernel is called XNU. The Linux kernel has no special name. Windows NT is the kernel used for Windows today. Further research on monolithic kernels, microkernels, and hybrid kernels is recommended.
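
If you want to see which kernel you are actually running, the POSIX uname() call reports its identity. Here is a minimal C sketch (on Linux the name printed is "Linux"; on Mac OS X it is "Darwin", the open-source layer built around XNU; the exact strings depend on the system you compile it on):

```c
/* Minimal sketch: print the running kernel's identification via the
 * POSIX uname() call. Compile with: cc -o whichkernel whichkernel.c */
#include <stdio.h>
#include <sys/utsname.h>

int main(void)
{
    struct utsname u;

    if (uname(&u) != 0) {
        perror("uname");
        return 1;
    }

    printf("kernel name   : %s\n", u.sysname);
    printf("kernel release: %s\n", u.release);
    printf("kernel version: %s\n", u.version);
    printf("machine       : %s\n", u.machine);
    return 0;
}
```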


I was talking to someone about Mac OSX the other day, and he was explaining why OSX was so much better than Windows thanks to the UNIX core. (I would imagine that OSX and Linux are pretty much interchangeable core-wise, so there's no comparison there.) One of the prime things that he was telling me about was that OSX directly "talks" to the hardware as opposed to utilizing drivers, which Windows primarily uses to communicate with hardware. Is this true? Do Tiger/Leopard/Snow Leopard as well as any Linux distribution make the most use out of the hardware you have thanks to this ability of direct communication?

Every 'POSIX' OS (i.e. Linux, Unix, Mac OS X, etc.) has a common 'heart', if you like, which is to be Berkeley 5 compatible - a bit like Windows being x86 compatible, which is its biggest problem, but that's another story...

'POSIX' systems have the ability to load only the drivers which are needed at that moment in time and discard them afterwards, unlike Windows, which loads them all at boot time and keeps them (used or not) until you shut down. Also, with 'POSIX' systems, you have the ability to 'build' the kernel to your own requirements and your specific system.

An example of this would be: you have a fully featured processor with every conceivable option built in - and, more importantly, wired - but you only need your system for powerful 3D artwork; so you rebuild the kernel so that it maximises everything to do with YOUR processor AND 3D graphics and leave out the rest, such as multimedia extensions etc. In other words, you can remove the clutter to gain maximum performance for what YOU need your system for.
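
To make the "load only what you need" idea concrete on the Linux side: drivers are usually built as loadable kernel modules that can be inserted and removed at runtime. Below is a minimal, hypothetical module skeleton (the name "hello" is made up); it has to be built against the kernel headers with a kbuild Makefile, so take it as a sketch of the mechanism rather than a usable driver.

```c
/* Sketch of a minimal Linux loadable kernel module. Once built, it is
 * loaded with `insmod hello.ko` and removed with `rmmod hello` -- the
 * same mechanism real drivers use, which is why unused drivers don't
 * have to stay resident in a running kernel. */
#include <linux/init.h>
#include <linux/module.h>
#include <linux/kernel.h>

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Minimal example module");

static int __init hello_init(void)
{
    pr_info("hello: module loaded\n");
    return 0;               /* 0 = success, module stays loaded */
}

static void __exit hello_exit(void)
{
    pr_info("hello: module unloaded\n");
}

module_init(hello_init);
module_exit(hello_exit);
```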


@rayzoredge - The definition of efficiency would change according to your requirements.
If you were a gamer, Windows would prove a better platform because of the above-mentioned indirect communication through drivers, but if you were running some secure service or maybe encoding music, then I think Linux would be better.

Anyway.
Microkernels are in a way the most efficient but have less control over the hardware --- I think OS X uses a microkernel, but I am not sure about this.

Monolithic kernels are like one huge core that every application needs to interact with if it is to use any of the hardware functions. They provide more control over the hardware, but at the same time you could call them less efficient in some cases. Or maybe they provide more of the security-related features --- Linux uses this design.


and

Windows NT is a hybrid kernel. It's somewhere between the two :D and the most efficient according to some people.

Use this link for more information about kernels:
http://en.wikipedia.org/wiki/Kernel_(computer_science)
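
As a small, concrete illustration of the monolithic-kernel point above: on Linux, even a plain file write is a system call, i.e. a controlled jump from the application into kernel code, where the filesystem and driver code do the actual hardware work. A minimal sketch using only standard POSIX calls (so it also builds on Mac OS X):

```c
/* Sketch: user code never touches the disk hardware directly; it asks
 * the kernel to do so via system calls (open/write/close here). The
 * kernel's drivers and filesystem code do the actual hardware work. */
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>

int main(void)
{
    int fd = open("demo.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    const char msg[] = "written via a system call\n";
    if (write(fd, msg, sizeof msg - 1) < 0)   /* trap into the kernel */
        perror("write");

    close(fd);
    return 0;
}
```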


@truefusion: Thanks for the clarification through Wikipedia. I am now an expert on kernels. :D

@georgebaileyster: I would imagine that a POSIX only exists on devices like an iPhone or a smartphone or PDA or something where manufacturers could afford to develop an operating system that tailors specifically to the device, unless an enthusiast or a coder would trudge through the toils of programming a personal operating system for a personal computer. However, looking on Wiki suggests otherwise... does this mean that, as Leopard is considered a 100% POSIX-compliant OS, it is dynamic in what it loads for drivers for devices and components and leaves any extraneous junk out, as opposed to Windows by default, which seems to have a large device library loaded and available to tailor to plug-and-play compatibility with no real hiccups? (Am I understanding this correctly?) Looking into the POSIX article already answers my question of whether Windows supports an ability to enable a POSIX environment, possibly to increase performance by purging unnecessary device drivers... but then again, I think I'm misinterpreting what the POSIX standards are.

@bluedragon: From what I'm reading, I'm understanding this in an opposite light. It seems that a monolithic kernel would be more efficient with the hardware presented because everything is interacted with at the kernel level, as opposed to going through two layers with a microkernel to get any messages to and from the hardware. Of course, any problems with the monolithic variant will cause issues at the kernel level, and I can see how stability is compromised, whereas microkernels address the issue by providing the layer buffer in user mode to effectively sandbox the issue and keep the entire kernel (system) from crashing. (It's funny to me to know, however, that I've experienced more of a frequency for versions of Windows and earlier versions of OSX to lock up completely, as opposed to Ubuntu, which has yet to do the same.) I'm sure that OSX has come a long way from the 90's when I used to see the stupid bomb icon indicating a crash that, for the most part, I could never recover from, and I know that Windows has had its many issues, but with both being hybrid kernels in the attempt to benefit from the stability of a microkernel and the efficiency of a monolithic kernel structure, you would think that OSX and the Windows OS lines would be more stable, more efficient, and better than any Linux distribution. Would this be one of the reasons why the Tanenbaum/Torvalds discussion came to be?

With all of the reading that I've been doing, I'm rather confused as to why one operating system is favored over another. Theoretically speaking, the Windows and OSX platforms with hybrid kernels should be better than the largely monolithic kernel of a Linux distribution or the microkernels of other operating systems. However, the usage makes apparent that Linux seems to be on top of things with its kernel, not being restricted by a buffer as is the case with a hybrid kernel or a microkernel. Maybe I'm understanding a lot of this stuff incorrectly? If I'm not, why hasn't Linux come out of the woodwork to the consumer market as much as the Windows or OSX platforms? I know that marketing is a large factor, but if the concept of the hybrid kernel hasn't been perfected yet, why is Linux not announcing this sort of information to gather more of an audience? It was apparent that Vista was a resource hog, but with OSX based on the Mach kernel, which apparently has some major performance issues concerning overhead, why isn't the Linux community spreading awareness about these shortcomings?

Again, I would like to be corrected if I'm spewing out mumbo-jumbo. :D
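
Since the microkernel-versus-monolithic efficiency question keeps coming up, here is a rough, hedged way to see where message-passing overhead comes from: compare N trivial in-process calls with N round trips through a pipe to a child process. A pipe is only a crude stand-in for real microkernel IPC (Mach ports, user-space servers, etc.), so treat the numbers as illustrative of the extra context switches and copying, not as a benchmark of any actual kernel.

```c
/* Rough sketch: time N trivial in-process calls vs N request/reply
 * round trips through a pipe to a child process. Compile with:
 * cc -O2 ipc_sketch.c (add -lrt on older glibc for clock_gettime). */
#include <stdio.h>
#include <unistd.h>
#include <time.h>

#define N 100000

static volatile int sink;
static void direct_call(int x) { sink = x + 1; }

static double now_sec(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void)
{
    int to_child[2], to_parent[2];
    if (pipe(to_child) || pipe(to_parent)) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid == 0) {                       /* child: trivial echo "server" */
        char c;
        while (read(to_child[0], &c, 1) == 1)
            write(to_parent[1], &c, 1);
        _exit(0);
    }

    double t0 = now_sec();
    for (int i = 0; i < N; i++)
        direct_call(i);
    double t_call = now_sec() - t0;

    t0 = now_sec();
    char c = 'x';
    for (int i = 0; i < N; i++) {         /* one request + one reply per loop */
        write(to_child[1], &c, 1);
        read(to_parent[0], &c, 1);
    }
    double t_ipc = now_sec() - t0;

    close(to_child[1]);                   /* child's read() returns 0, it exits */
    printf("direct calls    : %.6f s for %d calls\n", t_call, N);
    printf("pipe round-trips: %.6f s for %d messages\n", t_ipc, N);
    return 0;
}
```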


I have generally found that Linux and Macintosh use resources better than Windows. Here is what I think the reasons are:
1. Windows has a memory manager that is not as efficient as those of the operating systems stated above.
2. The registry... This was probably a feature when Windows came out, as it allowed software developers to more closely monitor use of their programs.
3. Inefficient apps. Windows generally runs pretty well. The problem is the third-party apps that are often not written well.
4. An unnecessarily extravagant GUI, particularly in Vista.

However, on a software level there are a lot of apps that run faster on Windows:
1. Adobe CS surprisingly runs faster on a Windows PC since the code is 64-bit vs. 32-bit.
2. Video games, mainly. Microsoft created DirectX, which is rather efficient when it comes to shooting out polygons.

On Linux:
1. You can compute things faster than on the rest of the consumer OSes.
2. It has an efficient GUI, with quite a bit of eye candy (see Compiz Fusion).
3. Linux is king with the internet. It can scale down to low-end netbooks and allow for a fast web browsing experience, or it can be used to power high-end servers securely and fast.
4. It's open source, so you don't have to wait for the company to fix something, as long as you're a programmer.

On Macintosh:
1. You get lots of apps that are simply unavailable on other OSes.
2. You can compile most Linux applications using programs such as Fink.
3. However, it is not as fast as Linux.

So efficiency means different things to everyone. I will rank what each OS is good for by category.

If you're a gamer:
1. Windows is your best bet.
2. Macintosh is in a distant second.
3. Linux

If you manage servers:
1. Linux
2. Macintosh
3. Windows

If you do creative work (image, video editing, etc.):
1. Macintosh
2. Windows
3. Linux

If you use an iPhone or newest-generation iPod, then your only choices that will allow you to work with that device are Windows and Macintosh. If you need a Unix shell, go for Linux or Macintosh.

If you just browse the internet:
1. Linux
2. Macintosh
3. Windows

If you want a computer that just works without much maintenance:
1. Macintosh
2. Windows
3. Linux

I hope that this post is helpful to you; each OS is good at its own thing.


If you want a computer that just works without much maintenance:
1. Macintosh
2. Windows
3. Linux

This part is debatable. We no longer live in an era that runs off of source-based-only distributions. Sure, there are distros like Gentoo, et cetera, but Linux is about choice, and it's better to know about your system anyway. On my Ubuntu system I don't worry about maintenance. In fact, that's why I stick to Ubuntu over all the other distros: things generally work out of the box, rarely, if at all, requiring me to go searching for drivers, et cetera; and all the drivers I've ever needed were in their repository anyway. Also, on a Windows machine you still have to worry about what anti-virus program and firewall you'll need to protect yourself with, and about the self-degrading performance of a Windows machine; that is, you have to defrag every now and then, which isn't entirely consistent in maintaining performance.

I've been testing out Jaunty, that is, Ubuntu 9.04, and they've really outdone themselves in performance with this version of Ubuntu. For an alpha 6 release it's been quite stable. But to modify the list, I'd place Linux in second place with Windows, if not altogether switch Linux with Windows.


This part is debatable. We no longer live in an era that runs off of source-based-only distributions. Sure, there are distros like Gentoo, et cetera, but Linux is about choice, and it's better to know about your system anyway. On my Ubuntu system I don't worry about maintenance. In fact, that's why I stick to Ubuntu over all the other distros: things generally work out of the box, rarely, if at all, requiring me to go searching for drivers, et cetera; and all the drivers I've ever needed were in their repository anyway. Also, on a Windows machine you still have to worry about what anti-virus program and firewall you'll need to protect yourself with, and about the self-degrading performance of a Windows machine; that is, you have to defrag every now and then, which isn't entirely consistent in maintaining performance.
I've been testing out Jaunty, that is, Ubuntu 9.04, and they've really outdone themselves in performance with this version of Ubuntu. For an alpha 6 release it's been quite stable. But to modify the list, I'd place Linux in second place with Windows, if not altogether switch Linux with Windows.

Sorry, I listed that wrong. Let me explain: Windows has a far smaller learning curve than Linux due to the simple fact that people know how to use it and it is used by 85 percent of people. Also, Linux is always getting better. I remember when I first tried Ubuntu (version 6.04, I think) that it was terrible. Then I tried 7.10 and have liked it since. So the main reason that I put it below Windows is that there is no way I would be able to troubleshoot a Linux problem with someone who is not good with computers, say over the phone.


@truefusion: Thanks for the clarification through Wikipedia. I am now an expert on kernels. :P
@georgebaileyster: I would imagine that a POSIX only exists on devices like an iPhone or a smartphone or PDA or something where manufacturers could afford to develop an operating system that tailors specifically to the device, unless an enthusiast or a coder would trudge through the toils of programming a personal operating system for a personal computer. However, looking on Wiki suggests otherwise... does this mean that, as Leopard is considered a 100% POSIX-compliant OS, it is dynamic in what it loads for drivers for devices and components and leaves any extraneous junk out, as opposed to Windows by default, which seems to have a large device library loaded and available to tailor to plug-and-play compatibility with no real hiccups? (Am I understanding this correctly?) Looking into the POSIX article already answers my question of whether Windows supports an ability to enable a POSIX environment, possibly to increase performance by purging unnecessary device drivers... but then again, I think I'm misinterpreting what the POSIX standards are.

@bluedragon: From what I'm reading, I'm understanding this in an opposite light. It seems that a monolithic kernel would be more efficient with the hardware presented because everything is interacted with at the kernel level, as opposed to going through two layers with a microkernel to get any messages to and from the hardware. Of course, any problems with the monolithic variant will cause issues at the kernel level, and I can see how stability is compromised, whereas microkernels address the issue by providing the layer buffer in user mode to effectively sandbox the issue and keep the entire kernel (system) from crashing. (It's funny to me to know, however, that I've experienced more of a frequency for versions of Windows and earlier versions of OSX to lock up completely, as opposed to Ubuntu, which has yet to do the same.) I'm sure that OSX has come a long way from the 90's when I used to see the stupid bomb icon indicating a crash that, for the most part, I could never recover from, and I know that Windows has had its many issues, but with both being hybrid kernels in the attempt to benefit from the stability of a microkernel and the efficiency of a monolithic kernel structure, you would think that OSX and the Windows OS lines would be more stable, more efficient, and better than any Linux distribution. Would this be one of the reasons why the Tanenbaum/Torvalds discussion came to be?

With all of the reading that I've been doing, I'm rather confused as to why one operating system is favored over another. Theoretically speaking, the Windows and OSX platforms with hybrid kernels should be better than the largely monolithic kernel of a Linux distribution or the microkernels of other operating systems. However, the usage makes apparent that Linux seems to be on top of things with its kernel, not being restricted by a buffer as is the case with a hybrid kernel or a microkernel. Maybe I'm understanding a lot of this stuff incorrectly? If I'm not, why hasn't Linux come out of the woodwork to the consumer market as much as the Windows or OSX platforms? I know that marketing is a large factor, but if the concept of the hybrid kernel hasn't been perfected yet, why is Linux not announcing this sort of information to gather more of an audience? It was apparent that Vista was a resource hog, but with OSX based on the Mach kernel, which apparently has some major performance issues concerning overhead, why isn't the Linux community spreading awareness about these shortcomings?

Again, I would like to be corrected if I'm spewing out mumbo-jumbo. :D


Sorry, I should have said POSIX = any Linux / Unix OS - many different flavours and variants, but basically Berkeley 5 compatible at the core. Apple's OS X is indeed POSIX and can therefore throw away the junk it doesn't need at that time and will 'call' it when it is needed. As for 'plug and play', POSIX systems are now catching up fast, as manufacturers are producing drivers in real time alongside those for Windows; many now also feature auto-install just like Windows, self-detect network settings, etc. However, there should be no reason why you cannot run OS X on a PC, provided it isn't loaded with those 'Windows-only devices', just as you can run Windows on an x86 Mac. :)
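
A quick aside on what POSIX actually buys you in practice: it is a standard set of programming interfaces rather than a particular kernel, so the same source compiles unchanged on Linux, Mac OS X, the BSDs, and so on. A tiny sketch using sysconf(), which queries system properties through that standard interface (the CPU-count query is a near-universal extension rather than strict POSIX):

```c
/* Sketch: the same POSIX calls work on Linux, Mac OS X, the BSDs, etc.
 * sysconf() reports properties of the running system through a
 * standardised interface, so this compiles unchanged on any of them. */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    long cpus      = sysconf(_SC_NPROCESSORS_ONLN);  /* common extension */
    long page_size = sysconf(_SC_PAGESIZE);          /* memory page size */
    long open_max  = sysconf(_SC_OPEN_MAX);          /* max open files   */

    printf("online CPUs   : %ld\n", cpus);
    printf("page size     : %ld bytes\n", page_size);
    printf("max open files: %ld\n", open_max);
    return 0;
}
```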


However, there should be no reason why you cannot run OS X on a PC, provided it isn't loaded with those 'Windows-only devices', just as you can run Windows on an x86 Mac. :)

Actually, there are two big reasons why you cannot run Mac OS X on a PC (i.e. a non-Apple computer): (1) the Apple devs designed Mac OS to run only on Apple computers, therefore requiring some hacking and installing drivers in order for it to run on a PC; (2) the license applied to Mac OS, if I'm not mistaken, explicitly says you are not allowed to install it on non-Apple computers, which would make it illegal to install it on a PC.


Actually, there are two big reasons why you cannot run Mac OS X on a PC (i.e. a non-Apple computer): (1) the Apple devs designed Mac OS to run only on Apple computers, therefore requiring some hacking and installing drivers in order for it to run on a PC; (2) the license applied to Mac OS, if I'm not mistaken, explicitly says you are not allowed to install it on non-Apple computers, which would make it illegal to install it on a PC.

Also, the OS is designed to work only on EFI Intel or Open Firmware PowerPC-based computers running Apple's firmware. When you install it on a PC through OSx86 you are emulating this and won't get nearly as much performance, and once again, it's illegal and unsupported.


Sorry, I should have said POSIX = any Linux / Unix OS - many different flavours and variants, but basically Berkeley 5 compatible at the core. Apple's OS X is indeed POSIX and can therefore throw away the junk it doesn't need at that time and will 'call' it when it is needed. As for 'plug and play', POSIX systems are now catching up fast, as manufacturers are producing drivers in real time alongside those for Windows; many now also feature auto-install just like Windows, self-detect network settings, etc.

So, correct me if I'm wrong here, but wouldn't it be beneficial to Microsoft to follow the POSIX method in only figuring out what drivers should be available for the present hardware and then purging the rest? For any further devices, wouldn't Windows be better off ONLY looking up drivers on an online database when the device is first plugged in so you wouldn't have to load every driver for every possible thing? Also, wouldn't it make sense to include some sort of ID on each device so that operating systems can tell what it is and install the driver software required from that device? (i.e. 1. Insert the device/component. 2. The computer, using an open source protocol, "asks" the device what it is. 3. The device returns a hardware ID tag along with the necessary driver to get it to work. 4. After the computer receives the ID tag, it identifies the hardware, then prompts the user if he or she wants to install the incoming driver from the device. 5. After the driver is installed, normal operability of the device begins.) With the previous suggestion, we do have the means to implement a small bit of information onto every hardware component on a small memory module on a chip, I'm sure... even on the smallest USB nano-receiver. I think that if that were the case, every single operating system could benefit from this sort of technological implementation.

 

Speaking of drivers, I think my question still stands on the kernel part of the discussion: is Linux the most efficient in working with its hardware, considering the fact that the monolithic kernel design enables the operating system to communicate directly with the hardware, as opposed to working through a driver layer as Windows does? (I believe OSX does everything in userspace, relaying any hardware input/output through IPC, which tells me that there's still a layer to work through, but with the advantage of multiple incoming and outgoing processes able to pass through the IPC layer at one time; and if I understand the kernel layout correctly, wouldn't OSX trump Linux in hardware efficiency, being able to manage sending output out as input comes in?)



Also, wouldn't it make sense to include some sort of ID on each device so that operating systems can tell what it is and install the driver software required from that device? (i.e. 1. Insert the device/component. 2. The computer, using an open source protocol, "asks" the device what it is. 3. The device returns a hardware ID tag along with the necessary driver to get it to work. 4. After the computer receives the ID tag, it identifies the hardware, then prompts the user if he or she wants to install the incoming driver from the device. 5. After the driver is installed, normal operability of the device begins.) With the previous suggestion, we do have the means to implement a small bit of information onto every hardware component on a small memory module on a chip, I'm sure... even on the smallest USB nano-receiver. I think that if that were the case, every single operating system could benefit from this sort of technological implementation.

This already happens. Most hardware has a vendor ID, which can be used basically the same way...
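
To make the vendor-ID point concrete: on Linux, for example, the kernel already exposes each USB device's vendor and product IDs through sysfs, and that ID pair is exactly what driver matching is based on, no driver code has to live on the device itself. A rough, Linux-only sketch that lists them (paths and attribute names are as exposed by the Linux USB subsystem; other operating systems expose the same IDs through their own interfaces):

```c
/* Linux-only sketch: list the vendor:product ID of each USB device by
 * reading the idVendor/idProduct attributes the kernel exposes under
 * /sys/bus/usb/devices. */
#include <stdio.h>
#include <string.h>
#include <dirent.h>

static int read_attr(const char *dev, const char *attr, char *buf, size_t len)
{
    char path[512];
    snprintf(path, sizeof path, "/sys/bus/usb/devices/%s/%s", dev, attr);
    FILE *f = fopen(path, "r");
    if (!f)
        return -1;
    if (!fgets(buf, (int)len, f)) {
        fclose(f);
        return -1;
    }
    buf[strcspn(buf, "\n")] = '\0';   /* strip trailing newline */
    fclose(f);
    return 0;
}

int main(void)
{
    DIR *d = opendir("/sys/bus/usb/devices");
    if (!d) {
        perror("opendir");
        return 1;
    }

    struct dirent *e;
    while ((e = readdir(d)) != NULL) {
        char vendor[16], product[16];
        /* entries without an idVendor attribute (interfaces, ".", "..") are skipped */
        if (read_attr(e->d_name, "idVendor", vendor, sizeof vendor) == 0 &&
            read_attr(e->d_name, "idProduct", product, sizeof product) == 0)
            printf("%-12s %s:%s\n", e->d_name, vendor, product);
    }
    closedir(d);
    return 0;
}
```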

Speaking of drivers, I think my question still stands on the kernel part of the discussion: is Linux the most efficient in working with its hardware, considering the fact that the monolithic kernel design enables the operating system to communicate directly with the hardware, as opposed to working through a driver layer as Windows does? (I believe OSX does everything in userspace, relaying any hardware input/output through IPC, which tells me that there's still a layer to work through, but with the advantage of multiple incoming and outgoing processes able to pass through the IPC layer at one time; and if I understand the kernel layout correctly, wouldn't OSX trump Linux in hardware efficiency, being able to manage sending output out as input comes in?)

The only reason why OSX would trump Linux in hardware efficiency is that it's designed specifically for the hardware, while Linux is designed to work on most hardware configurations.


Man, the replies to this post are big enough to scare anyone, but I will take some courage and reply anyway. I have been using Ubuntu for almost a month now, and it's really hard to install new hardware such as a pen drive or to connect my own cellphone to the computer. In Windows it is easy and really plug-and-play, but not in Ubuntu, as far as I know. So I prefer Windows for installing hardware and use it in Windows only; Ubuntu is really a bad idea for installing hardware.


This already happens. Most hardware has a vendor ID, which can be used basically the same way...
The only reason why OSX would trump Linux in hardware efficiency is that it's designed specifically for the hardware, while Linux is designed to work on most hardware configurations.


Right... I figured that the whole driver process was already in existence, but if that was the case, wouldn't it [the device] include a "way" of communicating with the OS and thus have direct communication with Windows without having to jump through a driver [kernel] layer, like what Linux seems to do? I'm guessing this is a much more complicated deal than I'm making it out to be... or is it?

And as for Mac OSX, would it be safe to say that any non-Apple-approved upgrades, hardware, or devices would work much less efficiently than ones that have already been placed into an approved list of components? If that's the case, that would mean that OSX would have a device list... which means that it doesn't differ much from Windows in that respect, except that it would be a much shorter list. But if the whole vendor ID concept exists with what I mentioned in my previous post, why would Windows be different than OSX in terms of efficiency in utilizing hardware if both have a device list that they work off of to ensure compatibility with devices? (I can see why Windows would boot up slower than OSX, as Apple would have a much more refined list of hardware that has been "approved" for OSX... but this leaves the efficiency question.) I'm not really saying that, since both operating systems utilize a modified microkernel, they must be the same... but in the way that I'm understanding how both of them work with hardware, there doesn't seem to be much difference, or at least there's a way to change things so that there isn't much of a difference in hardware management on the kernel side of things.

