I spent the first decade of my career doing managed and
professional IT services around SAN and NAS for EMC, and I remember rigorously
checking the EMC compatibility matrix to ensure an environment was ready to go
before it was even built in the datacenter. But did that actually guarantee no
issues?
Of course not. There were still plenty of support calls
filed: from a lack of consistency in the environment, to firmware issues, to
independent hardware failures that still caused faults in other parts of the
solution. Part of a project sign-off involved producing a HEAT report, a scripted
check against the EMC support matrix, that showed no mismatches or
configuration issues. Then came E-Lab Advisor and many other iterations aimed at
solving the interoperability problem, but they were fundamentally unable to
outpace the exponential growth of a hardware compatibility list (HCL) in a
best-of-breed approach. On the opposite end of this spectrum, you have the
undeniable acceleration of the public cloud providers, where you pay only for a
virtual form factor. The underlying hardware is (and should be) irrelevant to
what you, the customer, concentrate on: the software you want to build.
Customers have an abundance of software stacks to deliver,
from traditional web/app/database platforms to more loosely coupled platform
components designed for rapid iteration. The
expectation of quick and constant evolution in any given constituent component
at any given time is, in my opinion, the defining characteristic of the next
generation of app environments, or “cloud-native apps”. For a far more rigorous rubric and
definition, see http://12factor.net/. I've
seen this firsthand with Hadoop and HPC environments as customers evaluate
virtualization and try to decide whether to go with a siloed bare-metal
approach, internal virtualization, or a service provider.
Take the evolution of Hadoop and Big Data as an example: traditionally, product
management, marketing, or R&D business units would provide input for a data
warehouse with arbitrary expectations set a year or two into the future, and the
DBAs would design for that without the incremental, step-by-step insight that
only comes with experience. Compare that to HPC programmers, who may be building
and tuning code for hardware that hasn't even hit a datacenter floor yet,
trying to optimize compilers for potentially theoretical working sets and
hardware-accelerated solutions. In both HPC and Hadoop, it has been very exciting
to witness a shift in perspective: customers are able to learn and scale their
approach constantly, which gives them more room to experiment and grow along the
way, because their business goals and technical roadblocks are always evolving as
well.
Nutanix aims to be more than Yet-Another-Hyperconverged-Vendor by giving these
environment owners more time to focus on their specialty and less on their
infrastructure, through:
· A distributed management layer across the
cluster for resiliency and durability of metadata. This also becomes the
distributed endpoint for API calls and for stacking higher-level services. A
quickly changing environment means a lot of API interaction, so this endpoint is
by necessity fault-tolerant and free of bottlenecks (see the sketch after these
lists).
· A distributed logical storage space for
performance, availability, and durability. At the same time, the storage pool is
a single abstraction for transient and persistent data across any VMs,
containers, or applications (or app-building platforms).
While the management and storage layers are simplified, customers are still free
to choose:
· Their hypervisor and the virtualization tooling available with it.
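To make the API-endpoint point concrete, here is a minimal sketch of how a script might query the cluster through its distributed REST endpoint. Everything specific in it (the cluster virtual IP, port, PrismGateway path and version, credentials, and response field names) is an illustrative assumption rather than something taken from this post; the point is simply that the call goes to a cluster-wide address instead of a single management node.

# Minimal sketch: talking to the cluster's distributed management endpoint.
# The IP, port, API path/version, credentials, and response fields below are
# illustrative assumptions, not definitive values.
import requests

CLUSTER_VIP = "10.0.0.50"   # hypothetical cluster virtual IP, not a single node
BASE_URL = "https://{}:9440/PrismGateway/services/rest/v2.0".format(CLUSTER_VIP)
AUTH = ("admin", "secret")  # placeholder credentials

def get_cluster_info():
    """Fetch basic cluster details from whichever node currently answers the VIP."""
    resp = requests.get(
        BASE_URL + "/cluster/",
        auth=AUTH,
        verify=False,   # lab-style shortcut; use proper certificates in practice
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    info = get_cluster_info()
    # Field names are assumptions for illustration only.
    print(info.get("name"), info.get("num_nodes"))

Because every node can answer at that address, losing a node does not take the management plane with it, which is the fault-tolerance property described in the first bullet above.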
Holistically, the Nutanix platform is designed to support
all of these ideals, minimizing bespoke architectural designs and providing
straightforward manageability and scalability. In the next post in this series I
will review deploying Pivotal Cloud Foundry on Nutanix, here:
http://virtual-hiking.blogspot.com/2015/09/cloud-foundry-setup-on-nutanix.html