As a market leader in the hyper-converged computing space, Nutanix leverages hyper-convergence, software-defined intelligence, distributed autonomous systems, and incremental, linear scale-out.

Hyper-convergence

Hyper-convergence is an architecture that natively combines compute, storage, and the hypervisor into a single appliance. Natively combining compute and storage delivers data locality, for application performance, and incremental scale-out, for fractional, cloud-style consumption. Each Nutanix appliance is a 2U server populated with one, two, or four “nodes”. Each node is a single host; clustered together, the nodes create a performant, resilient, web-scale compute platform.
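
To make the data-locality benefit concrete, here is a minimal Python sketch of a read path that prefers a copy of the data on the local node before falling back to a remote replica. The names (Replica, read_block) are hypothetical illustrations, not any Nutanix API:

```python
# Illustrative sketch only: a read path that prefers the local replica.
# Names (Replica, read_block) are hypothetical, not Nutanix APIs.

from dataclasses import dataclass

@dataclass
class Replica:
    node_id: str      # node that holds this copy of the data block
    device: str       # local device path on that node

def read_block(replicas: list[Replica], local_node: str) -> Replica:
    """Choose which replica to read: local copy first, remote as fallback."""
    for replica in replicas:
        if replica.node_id == local_node:
            return replica            # local read: no network hop
    return replicas[0]                # remote read: traverse the network

# A VM running on node "A" reads from its own disk; a VM on node "C"
# has no local copy and must read remotely.
copies = [Replica("A", "/dev/nvme0n1"), Replica("B", "/dev/nvme0n1")]
print(read_block(copies, local_node="A").node_id)  # -> "A" (local)
print(read_block(copies, local_node="C").node_id)  # -> "A" (remote)
```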

Software-Defined Intelligence

The term “software-defined intelligence” refers to pulling the core logic of functions such as RAID, deduplication, and compression out of proprietary hardware and placing it into software that runs independently on each node or host while still forming part of the overall cluster. This enables rapid release cycles, better utilisation of hardware for cloud economics, and lifespan protection: the software cluster becomes the point of longevity, and the hardware underneath can change through incremental retirement and refresh cycles as opposed to large, disruptive, “rip-and-replace” five-yearly cycles.
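
As a rough illustration of logic moving from hardware into software, the toy Python below fingerprints blocks by hash so that duplicate content is stored only once. This is a generic sketch of deduplication as a technique, not Nutanix’s implementation:

```python
# Toy model of deduplication implemented in software rather than in a
# proprietary hardware controller: blocks are fingerprinted by hash and
# duplicate blocks are stored once. Not Nutanix's implementation.

import hashlib

class DedupStore:
    def __init__(self) -> None:
        self.blocks: dict[str, bytes] = {}   # fingerprint -> unique block

    def write(self, data: bytes) -> str:
        fingerprint = hashlib.sha256(data).hexdigest()
        # Only store the block if this content has not been seen before.
        self.blocks.setdefault(fingerprint, data)
        return fingerprint                    # caller keeps the reference

store = DedupStore()
a = store.write(b"x" * 4096)
b = store.write(b"x" * 4096)                 # identical content
assert a == b and len(store.blocks) == 1     # stored once, referenced twice
```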

Distributed Autonomous Systems

“Distributed autonomous systems” refers to a masterless design, as opposed to the usual approach of having a single unit responsible for each function. This distribution means the system is architected to accommodate and remediate disk failure. Vanquish Technologies can assist you in designing a system using either an n+1 or n+2 architecture, so that a node or component failure can be remediated while maintaining operations. This distributed architecture also eliminates bottlenecks within the system.
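
The following toy model, using hypothetical names rather than Nutanix’s placement algorithm, shows the masterless idea: every node can compute the same replica placement from a block’s identity alone, and writing each block to several distinct nodes means two copies (n+1) survive one node failure, while three copies (n+2) survive two:

```python
# Toy model of masterless replica placement: every block is written to
# `replication_factor` distinct nodes, so losing replication_factor - 1
# nodes still leaves at least one copy readable. Hypothetical sketch,
# not Nutanix's placement algorithm.

import hashlib

def place_replicas(block_id: str, nodes: list[str],
                   replication_factor: int) -> list[str]:
    """Deterministically choose distinct nodes for a block. No master is
    needed: any node computes the same answer from the block id alone."""
    start = int(hashlib.md5(block_id.encode()).hexdigest(), 16) % len(nodes)
    return [nodes[(start + i) % len(nodes)] for i in range(replication_factor)]

nodes = ["node1", "node2", "node3", "node4"]
owners = place_replicas("vm-disk-block-42", nodes, replication_factor=2)

failed = {owners[0]}                          # simulate one node failure
survivors = [n for n in owners if n not in failed]
assert survivors                              # the block is still readable
```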

Incremental and Linear Scale-Out

Nutanix’s hyper-converged architecture eliminates the traditional tiered stack of hypervisor hosts, fabric interconnect, storage controllers, and disk shelves. A scale-out operation is therefore a simple addition of a compute node with direct-attached storage. After the node is seated, the file system is extended across it and resources are rebalanced cluster-wide, all during production hours. Users can adopt an incremental scale-out model in line with their requirements and deploy additional cluster resources quickly as projects arise.
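
As a rough sketch of what cluster-wide rebalancing after a scale-out might look like, assuming a simple greedy migration strategy rather than Nutanix’s actual algorithm, the Python below drains blocks from the fullest nodes onto a freshly added, empty node until usage evens out:

```python
# Sketch of rebalancing after a scale-out: once a node is added, blocks
# migrate from the fullest nodes to the emptiest until usage is level.
# Greedy toy rebalancer, not Nutanix's actual algorithm.

def rebalance(usage: dict[str, int]) -> dict[str, int]:
    """Move one block at a time from the fullest to the emptiest node."""
    usage = dict(usage)
    while max(usage.values()) - min(usage.values()) > 1:
        fullest = max(usage, key=usage.get)
        emptiest = min(usage, key=usage.get)
        usage[fullest] -= 1        # one block migrates in the background
        usage[emptiest] += 1
    return usage

# Three existing nodes, then a freshly added (empty) fourth node:
before = {"node1": 90, "node2": 90, "node3": 90, "node4": 0}
print(rebalance(before))   # -> roughly 67-68 blocks per node
```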