the NodeWeaver model
a sea of data
Virtualization solved many problems, transforming servers into software objects that can easily move to new platforms. But data was left behind, and with it all the complexities of traditional management. Traditional virtualization still requires thinking in terms of physical units and LUNs, managing each object as a separate entity. It was the most obvious way of doing it: it was the standard.
But it’s not the best way.
Data flows like water, and by taking advantage of this idea we built NodeWeaver as a “sea of data”, where each new node brings additional capacity to hold it. If something happens, data is free to flow to wherever it can be contained and managed properly. It flows on its own: for the first time, the user is free to focus on what really matters, not on whether a disk has enough space for a virtual machine image or where to store backups.
As it flows freely, data adapts and moves to where it is needed most; there is no need for DRS or other acronyms to allocate resources optimally, because resources flow to where they are most needed. To help users express their requirements, the NodeWeaver sea of data is partitioned into dynamically allocated datastores, which provide a way to express preferences for the use and placement of data. For example, for virtual machines that require absolute performance, it is possible to request placement on an SSD datastore. If space is available on the internal SSD disks, the chunks that compose the virtual machine image will be placed there; if not, another datastore will be used and the administrator will be warned of the suboptimal placement. In the same way, images or snapshots can be placed in the "cold storage" datastore and moved to slower, larger disks in a totally transparent way.
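The preference-with-fallback behavior described above can be sketched in a few lines. This is a hypothetical illustration, not the NodeWeaver API: the `Datastore` type, tier names, and `place_chunk` function are all invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Datastore:
    name: str
    tier: str          # illustrative tiers, e.g. "ssd", "hdd", "cold"
    free_bytes: int

def place_chunk(chunk_size: int, preferred_tier: str,
                datastores: list[Datastore]) -> tuple[Datastore, bool]:
    """Place a chunk on the preferred tier if space allows; otherwise fall
    back to any datastore with room, flagging the placement as suboptimal
    so an administrator can be warned."""
    # First pass: honor the expressed preference.
    for ds in datastores:
        if ds.tier == preferred_tier and ds.free_bytes >= chunk_size:
            ds.free_bytes -= chunk_size
            return ds, True
    # Fallback: any datastore with capacity, marked suboptimal.
    for ds in datastores:
        if ds.free_bytes >= chunk_size:
            ds.free_bytes -= chunk_size
            return ds, False
    raise RuntimeError("no datastore has room for the chunk")

stores = [Datastore("fast", "ssd", 1 << 20),    # 1 MiB of SSD
          Datastore("bulk", "hdd", 1 << 30)]    # 1 GiB of HDD
ds, optimal = place_chunk(2 << 20, "ssd", stores)  # 2 MiB chunk: SSD is full
```

Here the 2 MiB chunk does not fit on the SSD datastore, so it lands on `bulk` with `optimal` set to `False`, which is where a real system would raise the administrator warning.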
We extended this idea to all of NodeWeaver management: let the system do what is most useful for the user, freeing time for what really matters. If a machine fails, its workloads are transparently re-instantiated on another node. If a disk fails, its data is silently re-replicated to wherever space is available. Every situation has an automated answer.
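The "every situation has an automated answer" pattern amounts to mapping failure events onto remediation actions. The following sketch shows the shape of such a dispatch table; the event names and handlers are invented for illustration and do not reflect NodeWeaver internals.

```python
def restart_vm_elsewhere(detail: str) -> str:
    # Stand-in for re-instantiating affected VMs on a healthy node.
    return f"re-instantiated workloads from {detail} on a healthy node"

def rereplicate_chunks(detail: str) -> str:
    # Stand-in for re-replicating lost chunks to nodes with free space.
    return f"re-replicated chunks from {detail} to nodes with free space"

# Each known failure event gets an automated answer.
REMEDIATIONS = {
    "node_failed": restart_vm_elsewhere,
    "disk_failed": rereplicate_chunks,
}

def handle(event: str, detail: str) -> str:
    # Unknown events fall through to the operator instead of failing silently.
    action = REMEDIATIONS.get(event)
    return action(detail) if action else f"alert operator: {event} ({detail})"
```

The key design point is that the default branch escalates rather than guesses: automation covers the known cases, and anything unrecognized surfaces to a human.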