Ahsaniqbalkmc

Maintaining A Server With Two Different Computers


Almost all web developers like their websites to be hosted by professional hosting companies that provide high-end services. But some also want additional space where they can experiment, test new things, and improve their skills. For this they prefer something that may sacrifice some stability and other high-end features but is easy on the pocket.

One solution is hosting your own web server. Most people nowadays have a spare computer, internet connections are present everywhere, and server operating systems are easy to get. The learning curve is not very steep either, and one can learn the basics within a few days. So hosting your own web server has now become quite easy, simple, and affordable.

However, personal computers are not designed for server use. In addition to many connectivity limitations (which can be overlooked in many testing environments), personal computers are not designed to run continuously 24x7. They need some cooling down after a couple of days to make sure the parts keep performing and their lifespan is not reduced. To overcome this issue, I have an idea in mind, but I don't have the technical skill to turn it into reality. (Maybe the idea I am going to present has already been tried and found successful or unsuccessful.)

The idea is that instead of one computer, why not use two computers alternately? Most modern computers can be turned on and off automatically at a specified time (from the BIOS or with software). So we can create a system where one computer is on at a specific time, but after a certain period (say 24 hours) it turns off and the other computer turns on. This would make sure that there is always a computer on to maintain the server, while at the same time giving both computers proper (in fact more than proper) cooling time.

The problem with this idea is how to configure the server so that it runs when either of the computers is on. The computers will be connected to the same network, but each would run its own instance of the OS and the server software. I hope this is possible and fairly easy to do, but I really don't have any idea how to do it.

If this can be done, then people like me who have spare P4 desktops lying around and a reasonable internet connection can create a handy testing environment to test and improve their skills.
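To illustrate the timed on/off part, here is a rough sketch of how the alternation might be scheduled on a Linux machine with rtcwake, assuming the BIOS/RTC supports waking from the powered-off state (the 20:00 switch-over time and the 24-hour period are only examples):

# Entry in root's crontab on each of the two machines.
# At 20:00 the machine that is currently running powers itself off and programs
# its real-time clock to wake it 24 hours (86400 s) later, so the two machines
# take turns as long as they are started one day apart.
0 20 * * * /usr/sbin/rtcwake -m off -s 86400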


You can start with a minimum of work to be done every day, and then you can manage to make it automated.

First of all, each computer is alone on the network. So both computers can have the same network name; clients' browsers will not be able to tell whether things are on the first or on the second computer. Secondly, the applications have to be installed under the same folder names (let's say d:/myservice/mywebheme, for instance).

Then, before going to bed, you need to do the following:
backup your current computer on a USB flash disk, including the database backup.
power off your "day" computer.
power on your "night" computer.
restore the database backup and the files backup.
start your web apps.

Learn how to do that manually first. Then try to automate the backup/restore part. Then you can use a third computer as a file server for the backup as well as for the restore. When everything works fine, create the full automated startup script which does the same job.

Of course, another, better way of doing this is learning how Linux clusters work. Then put all your data on a shared external disk. Everything becomes easy and very automated, because the cluster facilities will be used inside the crontab files and you will have nothing to do personally. This should work with any computers, old or new, provided that you learn how to install and use cluster services and how to install a dual-attachment disk.
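As a very rough sketch of the evening backup side (the paths, database name and USB mount point below are only placeholders; adapt them to whatever your setup actually uses), the manual routine could eventually be wrapped in something like this:

#!/bin/sh
# Nightly backup sketch for the "day" computer; all names are placeholders.
BACKUP_DIR=/media/usbdisk/server-backup   # where the USB flash disk is mounted
WEB_ROOT=/var/www/mysite                  # the web application files
DB_NAME=mysite                            # the MySQL database

mkdir -p "$BACKUP_DIR"
# Dump the database (assumes MySQL credentials are available, e.g. in ~/.my.cnf).
mysqldump "$DB_NAME" > "$BACKUP_DIR/$DB_NAME.sql"
# Copy the web files; rsync only transfers what has changed since the last run.
rsync -a --delete "$WEB_ROOT/" "$BACKUP_DIR/files/"
# Power the "day" computer off once the backup is done.
shutdown -h now

The "night" computer then runs the mirror image of this at boot: load the dump back in with mysql, rsync the files into place, and start the web server.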


You can start with a minimum of work to be done every day, and then you can manage to make it automated. First of all, each computer is alone on the network.
So both computers can have the same network name; clients' browsers will not be able to tell whether things are on the first or on the second computer.
Secondly, the applications have to be installed under the same folder names (let's say d:/myservice/mywebheme, for instance).

Make this a bit simpler for me. Suppose I install Ubuntu Server on both of my desktops, giving them the same name during installation. Would that do the job? My assumption is that Ubuntu Server comes preinstalled with all the necessary software and settings required to run a server, but I can't say anything for sure as I have never used it myself. If all the software and settings are preinstalled, they must be in the same locations on both computers, as that would be the default configuration.
Would this setup work?

In addition, please guide me on what configuration needs to be made on the Ubuntu Server operating system before I can run a server on it.
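For what it's worth, here is the rough sequence I imagine would be run identically on both machines, assuming the usual Ubuntu LAMP packages (exact package names depend on the release), so that both computers end up with the same default locations:

# Run the same commands on both machines so that paths and configs match.
sudo apt-get update
sudo apt-get install apache2 mysql-server php libapache2-mod-php
# With the Ubuntu defaults, the document root is /var/www/html and the Apache
# configuration lives under /etc/apache2, so identical installs should end up
# in identical locations on both computers.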

Then, before going to bed, you need to do the following: backup your current computer on a USB flash disk, including the database backup.
power off your "day" computer.
power on your "night" computer.
restore the database backup and the files backup.
start your web apps.

Manual work like the above is fine for education, but in the long run it can be quite a pain. As you have mentioned alternative ways (see quotes below), I am pretty sure that this won't be a problem. I have another suggestion: isn't there any solution such as a hard disk that can be used by two computers simultaneously or alternately? For instance, if I were able to use one hard disk on both computers, I wouldn't need to install two instances of the OS and the other applications to make the server run on both computers. Instead I would just install one instance on the common hard disk and be good to go. In this case the hard disk would have to remain powered on all the time, but I think that with proper ventilation a good hard disk can manage that for a good amount of time.
This setup (where only one hard disk is required) would be a bit different from the external hard disk scenario, where a total of three hard disks are needed (two internal and one external). I tried a quick search on Google but didn't find anything. I hope someone else has some knowledge about this.

Learn how to do that manually first. Then try to automate the backup/restore part.
Then you can use a third computer as a file server for the backup as well as for the restore.
When everything works fine, create the full automated startup script which does the same job.

I didn't like the third computer idea, as it would cost more and demand more room. Desktops can usually support a number of hard drives with huge capacities, and as the project is experimental, I don't think there will be much need for great storage space. However, I don't know how file servers work and I would love to learn about them. Is a separate processing machine necessary for a file server? My idea about servers that store huge numbers of files (and provide download/upload facilities) was that the only extra thing they need is storage space. Can you tell me more about file servers?

Of course, another, better way of doing this is learning how Linux clusters work. Then put all your data on a shared external disk.
Everything becomes easy and very automated, because the cluster facilities will be used inside the crontab files and you will have nothing to do personally.
This should work with any computers, old or new, provided that you learn how to install and use cluster services and how to install a dual-attachment disk.

Now this is something I have never heard of before. I tried to find some info on what a Linux cluster is, but everything I found went over my head. Can you ease me into the concept and tell me what a Linux cluster is and what it does?
The subject of dual-attachment disks is also new to me, but I can find enough info on that to read and understand.


OK, I was wrong on several points, so I have to change my mind accordingly.

I thought that you already had a working system, and that you had installed it yourself.

In that case, you should be able to install the operating system on the second machine, exactly the same way.

The "dual attachment" thing is what you name external disk. If an external disk is connected to a computer, this is a single attachment. If the disk is connected to two computers, this is a dual-attachment organization.

Concerning the clustering system: this is a concept where a standby machine automatically replaces the production server in case of a crash. It is usually done so that a website or another application continues working in case of hardware failure; each vital component (like a computer) is duplicated, so that if one component is lost, its counterpart automatically takes its place.
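To make the idea concrete, here is a deliberately naive sketch of what the standby machine does; the addresses, the interface name and the polling loop are only illustrative, and real cluster software is far more careful about split-brain situations than this:

#!/bin/sh
# Naive failover watcher running on the standby node; addresses are placeholders.
# Primary node: 192.168.1.10. Shared service address: 192.168.1.100 on eth0.
while true; do
    if ! ping -c 3 -W 2 192.168.1.10 > /dev/null 2>&1; then
        # The primary looks dead: claim the service address and start serving.
        ip addr add 192.168.1.100/24 dev eth0
        service apache2 start
        break
    fi
    sleep 10
done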

Rather few people are able to master this kind of threefold architecture, but it's explained in rather simple terms on Wikipedia; see here:

https://en.wikipedia.org/wiki/Computer_cluster

In particular, pay attention to the Linux-HA project.

"High-availability clusters" (also known as failover clusters, or HA clusters) improve the availability of the cluster approach. They operate by having redundant nodes, which are then used to provide service when system components fail. HA cluster implementations attempt to use redundancy of cluster components to eliminate single points of failure. There are commercial implementations of High-Availability clusters for many operating systems. The Linux-HA project is one commonly used free software HA package for the Linux operating system.

I think the Linux-HA project would be the most interesting one in your case, because I guess that you don't want to pay an expensive licence for a commercial, very professional environment. However, you will have to learn how to implement a Linux-HA cluster yourself. Such is life: if you don't have a lot of money to buy professional implementation services from a Unix security expert, you will have to do it yourself and spend a lot of time learning.
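To give a taste of where that learning leads, this is roughly what declaring the cluster resources looks like with the Pacemaker/Corosync stack that the Linux-HA work evolved into, using the pcs tool; the names and addresses below are placeholders:

# Floating service IP that moves to whichever node is alive.
pcs resource create ClusterIP ocf:heartbeat:IPaddr2 ip=192.168.1.100 cidr_netmask=24 op monitor interval=30s
# The web server itself, monitored by the cluster.
pcs resource create WebServer ocf:heartbeat:apache configfile=/etc/apache2/apache2.conf op monitor interval=1min
# Keep the web server on the same node as the IP, and bring the IP up first.
pcs constraint colocation add WebServer with ClusterIP INFINITY
pcs constraint order ClusterIP then WebServer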

What I would suggest is starting simple: take two virtual machines on your computer, install the operating system, add a third disk accessed by both virtual machines, create a filesystem on the first machine, switch the disk to the second machine and access the filesystem, then install the clustering environment, configure the resources, and test the failover.
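For the shared third disk, a desktop hypervisor can do it; for example with VirtualBox (the VM names node1/node2 and the controller name SATA are just placeholders) it would look roughly like this:

# Shareable disks must be fixed-size in VirtualBox, hence --variant Fixed.
VBoxManage createmedium disk --filename shared.vdi --size 4096 --variant Fixed
VBoxManage modifymedium disk shared.vdi --type shareable
# Attach the same image to both virtual machines.
VBoxManage storageattach node1 --storagectl SATA --port 1 --device 0 --type hdd --medium shared.vdi --mtype shareable
VBoxManage storageattach node2 --storagectl SATA --port 1 --device 0 --type hdd --medium shared.vdi --mtype shareable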

What I love about working with virtual machines is that you can add virtual hardware as you need it. No problem having three Ethernet adapters and two separate networks, no problem having shared disks with dual access. You can easily add a virtual Ethernet adapter if needed, or destroy an adapter to see whether you really need it. And at any moment you can back up both machines, then destroy everything in order to test something different; it costs nothing except time. Once you are repeatedly able to create, install, and configure the two systems, you can switch to physical systems.


In that case, you should be able to install the operating system on the second machine, exactly the same way.

 

So I got this one right... nice!

 

The "dual attachment" thing is what you name external disk. If an external disk is connected to a computer, this is a single attachment. If the disk is connected to two computers, this is a dual-attachment organization.

 

Upon first impression, the dual-attachment disk looked like a relatively common thing, but I'm afraid I wasn't able to find any info on it. Do such disks exist (can you provide a reference)? All I could find was NAS (Network Attached Storage), where a proper setup is made for a hard disk to be attached to the network so that it can be used by all the computers on the network simultaneously.

 

Concerning the clustering system: this is a concept where a standby machine automatically replaces the production server in case of a crash. It is usually done so that a website or another application continues working in case of hardware failure; each vital component (like a computer) is duplicated, so that if one component is lost, its counterpart automatically takes its place.

Rather few people are able to master this kind of threefold architecture, but it's explained in rather simple terms on Wikipedia; see here:

https://en.wikipedia.org/wiki/Computer_cluster

In particular, pay attention to the Linux-HA project.

 

That was really helpful, and at least now I have a general idea of what a clustering system is. A special thanks for this one.

 

What was amusing to read was that only a few people are able to master it. This must be a really hard concept to understand, I guess...

 

I think the Linux-HA project would be the most interesting one in your case, because I guess that you don't want to pay an expensive licence for a commercial, very professional environment. However, you will have to learn how to implement a Linux-HA cluster yourself. Such is life: if you don't have a lot of money to buy professional implementation services from a Unix security expert, you will have to do it yourself and spend a lot of time learning.

What I would suggest is starting simple: take two virtual machines on your computer, install the operating system, add a third disk accessed by both virtual machines, create a filesystem on the first machine, switch the disk to the second machine and access the filesystem, then install the clustering environment, configure the resources, and test the failover.

 

I think I should save the clustering stuff for some other time, because first I need to get a grip on the basic concepts of how a server works.

 

What I love about working with virtual machines is that you can add virtual hardware as you need it. No problem having three Ethernet adapters and two separate networks, no problem having shared disks with dual access. You can easily add a virtual Ethernet adapter if needed, or destroy an adapter to see whether you really need it. And at any moment you can back up both machines, then destroy everything in order to test something different; it costs nothing except time. Once you are repeatedly able to create, install, and configure the two systems, you can switch to physical systems.

 

I agree! Virtual machines are a really great tool for learners and for those who conduct experiments. They save a huge amount of time and resources.


What was amusing to read was that only a few people are able to master it. This must be a really hard concept to understand, I guess...

Fortunately, there are still some places where very skilled people have their professional niche. How could I have a well-paid job if everyone could easily perform my work?

