Hardware Virtualization Of Memory

Posted by k_nitin_r

I've been looking for a way to get half a dozen old laptops to do duty as hosts for a virtual machine. Each of them maxes out at 4 GB of RAM, which meets the enterprise application's minimum requirement but runs it painfully slowly, so none of them can host it well on its own. The only solution I found for combining the RAM of several host computers to run a single virtual machine is Unix/Linux-only and requires InfiniBand to hook the computers together over low-latency links. That sounds fair enough, but InfiniBand is only available for server-grade systems and costs far more than a new tower server loaded with memory would. Sadly, virtualization can only carve larger systems into smaller ones, not combine smaller systems into a bigger one.
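
Just to put rough numbers on what pooling would buy if such a solution did work here, a quick back-of-the-envelope sketch in Python. The per-host OS reserve and the pooling overhead are pure assumptions for illustration, not figures from any real memory-aggregation product:

```python
# Back-of-the-envelope check: how much usable RAM a pooled cluster of
# old laptops might offer a single VM. All figures are illustrative.

HOSTS = 6                 # half a dozen laptops
RAM_PER_HOST_GB = 4       # each maxes out at 4 GB
HOST_OS_RESERVE_GB = 1    # assumed RAM kept back for each host's own OS
POOLING_OVERHEAD = 0.15   # assumed fraction lost to the aggregation layer

raw_pool_gb = HOSTS * RAM_PER_HOST_GB
usable_gb = (raw_pool_gb - HOSTS * HOST_OS_RESERVE_GB) * (1 - POOLING_OVERHEAD)

print(f"Raw pool:    {raw_pool_gb} GB")
print(f"Usable pool: {usable_gb:.1f} GB for the guest VM")
# Raw pool:    24 GB
# Usable pool: 15.3 GB for the guest VM
```

Even with generous assumptions, most of the gain over a single 4 GB laptop would depend on how little the aggregation layer itself eats, which is exactly where the low-latency interconnect requirement comes from.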

Wow, it looks promising, but not for home use... Some info from Wikipedia:

InfiniBand originated from the 1999 merger of two competing designs:
Future I/O, developed by Compaq, IBM, and Hewlett-Packard
Next Generation I/O (ngio), developed by Intel, Microsoft, and Sun

From the Compaq side, the roots of the technology derived from Tandem's ServerNet. For a short time before the group came up with a new name, InfiniBand was called System I/O.

InfiniBand was originally envisioned by the authors of its specification as a comprehensive "system area network" that would connect CPUs and provide all high speed I/O for "back-office" applications. In this role it would potentially replace just about every datacenter I/O standard including PCI, Fibre Channel, and various networks like Ethernet. Instead, all of the CPUs and peripherals would be connected into a single pan-datacenter switched InfiniBand fabric. This vision offered a number of advantages in addition to greater speed, not the least of which is that I/O workload would be largely lifted from computer and storage. In theory, this should make the construction of clusters much easier, and potentially less expensive, because more devices could be shared and they could be easily moved around as workloads shifted. Proponents of a less comprehensive vision saw InfiniBand as a pervasive, low latency, high bandwidth, low overhead interconnect for commercial datacenters, albeit one that might perhaps only connect servers and storage to each other, while leaving more local connections to other protocols and standards such as PCI.

Source: https://en.wikipedia.org/wiki/InfiniBand

InfiniBand seems like a great way to work with systems that need low latency. Another nice feature is that if you find InfiniBand too slow for your needs despite the 25 Gbps bandwidth, you can combine InfiniBand links so they work together, just as you would team network interfaces. With 12 InfiniBand ports, you can get 300 Gbps (the arithmetic is sketched below).

I wonder whether we'll see terabits of data transferred per second with the new technologies being introduced next year. Solid-state disks have pushed storage data rates high enough that a mobile server such as the Eurocom Panther can now do what was previously only possible with full-sized servers. Quicker data transfers between components and between nodes would make it practical to build a network out of multiple cheaper nodes and components, which should bring costs down.
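
For what it's worth, the aggregation arithmetic is straightforward; a tiny sketch using the 25 Gbps per-link figure quoted above, ignoring encoding and protocol overhead:

```python
# Aggregate bandwidth from combining multiple InfiniBand links,
# analogous to teaming network interfaces. Overhead is ignored.

PER_LINK_GBPS = 25   # per-link rate quoted above
LINKS = 12           # number of combined ports

aggregate_gbps = PER_LINK_GBPS * LINKS
print(f"{LINKS} x {PER_LINK_GBPS} Gbps = {aggregate_gbps} Gbps aggregate")
# 12 x 25 Gbps = 300 Gbps aggregate
```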
