Convergence Without Compromise

Hyperconverged Infrastructure (HCI) gets a lot of attention these days, and rightly so. With HCI we’ve seen a move towards an easy-to-use, pay-as-you-grow approach to the datacenter that was previously missing. I started my career with complex storage arrays that required you to purchase all your capacity up front. While expansion of these arrays was possible, we were often buying all the storage we’d need for 3-5 years even though we wouldn’t be consuming it for years to come.

While HCI certainly made things easier, it was far from perfect. Mixing storage and compute into a single server meant maintenance operations needed to account for both available compute resources and available storage capacity to accommodate offline storage. At times we would actually sacrifice our data protection scheme in order to take nodes offline and hope there were no additional failures within the cluster at the same time. Not ideal when we’re talking about production storage.

Get Down with the DVX

Datrium and the DVX platform aim to address these problems in an interesting way. Datrium separates storage and compute nodes much like a traditional two-tier system, but utilizes SSDs inside each of the hosts to act as a read cache. By moving the cache into the host, we’re able to increase performance with every host we add. This decoupling of cache from the storage layer means we’re not queuing up reads at a storage array that is trying to satisfy the requests of all connected hosts over the same switches. While this sounds very similar to technologies we’ve seen before (Infinio and PernixData come to mind), the differentiator is the storage awareness.
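The general host-side caching pattern is worth sketching. The snippet below is a minimal, illustrative LRU read cache, not Datrium’s actual implementation; all names here are hypothetical. The key property is that only cache misses ever cross the network to the storage node, so each host added brings its own cache capacity with it.

```python
from collections import OrderedDict

class HostReadCache:
    """Illustrative host-side LRU read cache (hypothetical names;
    this sketches the general pattern, not Datrium's implementation)."""

    def __init__(self, capacity_blocks, backend_read):
        self.capacity = capacity_blocks
        self.backend_read = backend_read  # fetches from the persistent storage node
        self.cache = OrderedDict()        # block_id -> data, in LRU order
        self.hits = 0
        self.misses = 0

    def read(self, block_id):
        if block_id in self.cache:
            self.cache.move_to_end(block_id)  # mark most recently used
            self.hits += 1
            return self.cache[block_id]
        self.misses += 1
        data = self.backend_read(block_id)    # only misses cross the network
        self.cache[block_id] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)    # evict least recently used block
        return data
```

Because every host holds its own cache, aggregate cache capacity (and read performance) scales with the host count rather than being bottlenecked at a shared array controller.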

The Datrium DVX solution utilizes their own storage nodes for the persistent storage piece. With the caching and storage being fully aware of each other, Datrium is able to offer end-to-end encryption from the hypervisor down to the persistent storage while still being able to take advantage of deduplication and compression. Oftentimes, encrypting data at the storage array level means we are forced to give up these data efficiencies, but not in the case of Datrium. We get an additional level of data security without having to make any compromises.

No Knobs, No Problems

HCI vendors have really pushed the configuration abilities within their systems. Customers can choose which data is deduplicated and compressed, whether or not it should be encrypted, how many copies of their data should be kept, and whether erasure coding is a better choice than traditional RAID, just to name a few. This is where Datrium separates itself from its HCI competitors. By disaggregating compute nodes from the persistent storage layer, Datrium’s DVX system manages to deliver performance and features without penalty. Once again, no compromises.

Erasure coding, dedupe and compression, double-device failure protection, data encryption: every one of these features is always on and doesn’t require any separate licensing or configuration. The advantage here isn’t just in administrative overhead, but also in performance. Datrium’s performance numbers are based on every one of these features being enabled. No tricks. No gimmicks. What you see is what you get, unlike many of their competitors that hide behind unrealistic configurations with many of these features disabled.

3 Tiers, 1 Solution

Datrium aims to bring together a Tier 1 HCI-like solution, combined with scale-out backup storage and cloud-based DR, all in the same system. With integrated snapshots that utilize VMware snapshots as well as VSS integration, they are able to perform crash-consistent and application-consistent snapshots of virtual machines right on the box. This, of course, is table stakes when it comes to modern storage arrays. The differentiator is that Datrium is able to do this at the VM level despite presenting NFS to the virtual hosts. Now we’re not just backing up all the VMs that live in a LUN or volume, we’re able to get as granular as the virtual disk itself. No VVOLs required.

Adding another level of visibility into the mix, Datrium reports its latency at the individual virtual machine level instead of at the storage array. Traditional storage array vendors talk about their ultra-low latency, but that reported latency is what the array sees; it doesn’t take into account the latency imposed by the virtual hosts and switching infrastructure. With each component in the virtual infrastructure having its own queues, varying utilization, and available bandwidth, the latency a virtual machine experiences is much greater than what the array is reporting. Datrium offers this full visibility at the individual virtual machine level so you know how your environment is actually performing. Dr. Traylor from The Math Citadel has an excellent overview of queuing theory, Little’s Law, and the math behind it.
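Little’s Law (L = λW: average items in the system equal arrival rate times average time in the system) makes the gap between array-reported and VM-observed latency concrete. The numbers below are purely illustrative, not measurements from any vendor:

```python
# Little's Law: L = lambda * W
# Hypothetical numbers showing why VM-observed latency exceeds array-reported latency.

arrival_rate = 20_000        # IO requests per second from a VM's point of view

# Latency contributions along the IO path, in seconds (illustrative values only).
array_latency  = 0.0005      # 0.5 ms reported by the array itself
switch_latency = 0.0002      # fabric / switching delay
host_queue     = 0.0008      # host HBA and hypervisor queuing

vm_latency = array_latency + switch_latency + host_queue   # what the VM actually sees

outstanding_at_array   = arrival_rate * array_latency      # L = lambda * W at the array
outstanding_end_to_end = arrival_rate * vm_latency         # L = lambda * W for the VM

print(f"Array-reported latency: {array_latency*1e3:.1f} ms "
      f"({outstanding_at_array:.0f} IOs in flight at the array)")
print(f"VM-observed latency:    {vm_latency*1e3:.1f} ms "
      f"({outstanding_end_to_end:.0f} IOs in flight end to end)")
```

With these example figures, the array honestly reports 0.5 ms while the VM experiences 1.5 ms, and three times as many IOs sit in flight end to end as the array alone accounts for; this is exactly the visibility gap VM-level reporting closes.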

The cloud-based integrations also allow for an additional level of data availability. Instead of requiring additional backup software, Datrium allows for replication of your data to a DVX running in the cloud. Now we have an offsite copy of your data ready to be restored in the event of VM corruption or deletion. Replication is also dedupe-aware, meaning data isn’t sent to the cloud if it is already present, helping to minimize bandwidth requirements and speed up the replication process.
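Dedupe-aware replication generally works by comparing content hashes before shipping data. The sketch below is an illustrative version of that idea, not Datrium’s wire protocol; the function and variable names are hypothetical:

```python
import hashlib

def replicate(blocks, remote_hashes):
    """Dedupe-aware replication sketch (illustrative only, not Datrium's
    protocol): ship only blocks whose content hash is absent remotely."""
    sent = []
    for block in blocks:
        digest = hashlib.sha256(block).hexdigest()
        if digest not in remote_hashes:   # does the remote already hold this content?
            sent.append(block)            # no -> this block crosses the WAN
            remote_hashes.add(digest)     # remote hash index now knows it
    return sent

# Example: three blocks, two with identical content -> only two cross the WAN.
blocks = [b"os-image", b"user-data", b"os-image"]
remote = set()
shipped = replicate(blocks, remote)
print(f"{len(shipped)} of {len(blocks)} blocks sent")   # 2 of 3 blocks sent
```

Since common data (OS images, shared binaries) dominates many environments, skipping already-present blocks is what keeps the bandwidth requirement manageable.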

Cloudy Skies Ahead

While I am very reluctant to trust one solution with my primary and backup data, in certain situations I can see the advantages. Integrations with AWS allow virtual machines to be restored from the cloud-based DVX, meaning your DR site can now be in AWS. Datrium has lowered the barrier to the cloud for a lot of customers with the features they’ve included in the DVX platform.

Datrium continues to make a good product even better. The additional features available in version 4.0 of DVX make this not only a great fit for SMB customers, but for enterprises as well. A feature-rich, no-knobs approach to enterprise storage with backup and DR capabilities all rolled into one. Datrium is definitely worth a look.

________________________________________

Disclaimer: During Storage Field Day 15, my expenses (flight, hotel, transportation) were paid for by Gestalt IT. I am under no obligation by Gestalt IT or Datrium to write about any of the presented content nor am I compensated for such writing.


The Challenge of Scale

Working in the SMB space for the majority of my career meant rarely worrying about hitting scale limits in the hardware and software I was responsible for. A few years ago, the idea of managing a data footprint of 20-30TB was huge for me. I didn’t have the data storage requirements, I didn’t have the number of virtual machines, I didn’t face struggles of scale. As I moved into the enterprise that scale went up massively. 20-30TB quickly became multiple petabytes. The struggles you face at the enterprise-level are much different.

While I was listening to James Cowling from Dropbox present on their “Magic Pocket” storage system, he said something that really put their scale into perspective: building a storage system of 30 petabytes was referred to as a “toy system.” As they explored the possibility of moving users’ data out of Amazon and into their own datacenters, they needed a storage system that could meet their ever-increasing storage demands. Storage software capable of managing 30PB was easier to come by than software capable of managing 500PB. In building this homegrown solution to hold all the file content for its users, Dropbox faced a challenge few others have had to face: with that much data hosted in AWS, there was no off-the-shelf product capable of managing this scale.

While the move from AWS to on-premises sounds simple, issues like scale are just the tip of the iceberg. Dropbox didn’t just need to write a massively scalable filesystem, work hand-in-hand with hardware vendors to find the right design, determine the best way to migrate their data to their datacenters, ensure data integrity, and validate every aspect throughout the entire process; they also needed the time to do all of this right the first time. When your job is content storage and collaboration, “losing” data isn’t an option. Having confidence in the solution, and management granting the autonomy necessary to “reset the clock” if and when bugs were found, was the only way this move was going to be successful.

And what prompted the decision to move out of AWS’s S3 storage? Cost. To the tune of nearly $75 million in operating expenses saved over the two years since getting out of AWS. Storage is cheap and getting cheaper, but storage at scale is an expensive endeavor. While the cost savings are significant, the performance gain was significant as well: Dropbox saw a dramatic performance increase by bringing data into their own datacenters and onto their new storage system. This is just a reminder that the real cost of “cloud” is often much higher than companies expect.

Back to the issue of scale. Storage wasn’t the only issue they faced. Now with over 1 exabyte of storage and growing at a rate of nearly 10PB per month, they also faced an issue of bandwidth. Dropbox sees around 2Tb of data moving in and out of its datacenters per second. PER SECOND. With that kind of demand, minimizing traffic and chatter inside the network is important as well. Events such as disk, switch, or power failures shouldn’t create additional rebuild traffic inside the network that impacts disk and network performance. The Dropbox datacenter monitoring solution is just as advanced as the storage system; it is capable of analyzing the impact of any such failure in the datacenter and triggering rebuilds and redistribution only when necessary. There is a balance of network versus disk cost when it comes to how and where to rebuild that data.
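Some back-of-the-envelope arithmetic puts those figures in perspective. Only the 2 Tb/s and roughly 10 PB/month numbers come from the presentation; the derived quantities below are just unit conversions:

```python
# Back-of-the-envelope math for the scale described above. The 2 Tb/s and
# ~10 PB/month inputs come from the talk; everything else is derived.

TBIT = 10**12            # one terabit, in bits
PB   = 10**15            # one petabyte, in bytes

throughput_bps   = 2 * TBIT              # 2 Tb per second in and out
bytes_per_second = throughput_bps / 8    # bits -> bytes
bytes_per_day    = bytes_per_second * 86_400

print(f"Sustained transfer: {bytes_per_second/1e9:.0f} GB/s "
      f"= {bytes_per_day/PB:.1f} PB/day")

growth_pb_month = 10                     # ~10 PB of net new data per month
growth_bytes_s  = growth_pb_month * PB / (30 * 86_400)
print(f"Net growth: ~{growth_bytes_s/1e9:.2f} GB/s of new data")
```

That works out to roughly 250 GB/s, or about 21.6 PB moving through the datacenters per day, while net new data accounts for only a few GB/s of it; the rest is reads, rewrites, and internal traffic, which is exactly why avoidable rebuild chatter matters so much.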

Designing a highly available, redundant, always-on infrastructure looks different depending on your scale. Application-level redundancy and storage-level redundancy, combined with a robust monitoring solution, are just a few of the techniques Dropbox has utilized to ensure application and data availability. The Dropbox approach may not be common, but it was necessary for long-term success. Sometimes the only way to reach your goals is to think outside the box.

________________________________________

Disclaimer: During Storage Field Day 15, my expenses (flight, hotel, transportation) were paid for by Gestalt IT. Dropbox provided each delegate with a small gift (sticker, notepad, coffee), but I am under no obligation to write about any of the presented content nor am I compensated for such writing.


Cohesity – DataPlatform in the Cloud

What separates vendors is focus and execution. In a crowded market, finding the right backup provider is no easy task. While each product has its pros and cons, finding the differentiator can be daunting. While Cohesity is relatively new to this space (founded in 2013), they have the focus and execution necessary to be a leader in the backup space.

But Cohesity is more than just backups. The Cohesity storage appliance not only handles your backup storage needs, but can also run your dev and test workloads. Cohesity is focused on your secondary storage needs. That secondary storage consists of any workloads or data that isn’t production. By avoiding the draw of being another primary storage vendor, Cohesity is listening to customers, learning their needs and creating a solution that can fit any size business.


The Cohesity solution was built for a virtualized (VMware-only) environment. It connects directly to your vCenter servers and pulls your inventory, allowing administrators to create backup jobs and policies. While their start was in virtualization, there are still many physical workloads in the datacenter. Agents for physical Windows, Linux, and SQL Server hosts all back up to the same storage system under the same policies, proving virtually any workload can be protected by Cohesity.

But wait, there’s more!

While data protection is important, that’s only a small portion of the Cohesity offering. Running these backups directly from the Cohesity storage arrays allows you to free up primary storage resources and avoid (potential) bottlenecks when running multiple instances of the same VM on a single array. By leveraging the SSDs that come in each Cohesity node as a cache tier, testing software patches and deployments from your backed-up production VMs doesn’t hurt performance. And with a built-in QoS engine, your dev/test workloads don’t have to affect the speed of your backups.

Cohesity provides a scale-out solution, meaning as storage demand increases, so can your secondary storage space. The cluster operates under a single namespace: as new nodes are added, your space increases without needing to reconfigure jobs to point to a new array or manually re-stripe data. Cohesity has customers that have scaled to as many as 60 nodes with over a petabyte of storage.

To the cloud!

Policy-based backups and replication ensure that your data will be available. Cohesity has the ability to distribute data across the nodes in a cluster, replicate to clusters in other locations, and also replicate your data to a cloud provider in order to satisfy offsite backup requirements. The latest addition to the Cohesity software portfolio is the DataPlatform Cloud Edition, which gives you the ability to run Cohesity in the cloud.

DataPlatform CE is more than just replicating data to the cloud. Your VMs can be backed up to your on-premises cluster and that data can be replicated to your cloud-based array. From that cloud-based array, you can then clone virtual machines to a native cloud format. This means your servers can be run in the cloud in their native format and available to test or even run in the event of migrations or datacenter outages.

Many backup and data protection vendors, such as Veeam and Zerto, offer replication to the cloud. While the feature isn’t new, its addition makes Cohesity a serious contender in this space. DataPlatform CE is currently available as a tech preview in the Microsoft Azure Marketplace, but Cohesity hopes to release it in the first half of 2017 with support for both Azure and AWS.

Wrapping Up

Data protection and availability is never going to be exciting. Swapping tapes and deploying agents is tedious work. But a fully integrated software solution not only protects your data; it helps solve the problem of data sprawl, gives developers a platform to test against production data in an isolated environment, and provides the ability to migrate workloads to the cloud. That’s about as exciting as it gets in data protection, and that is just the tip of the (storage) iceberg.

________________________________________

Take a look at posts by my fellow delegates from Tech Field Day 12 and watch the videos here.

First Look at Cohesity Cloud Edition
The Silent Threat of Dark Data
Cohesity Provides All of Your Secondary Storage Needs
Secondary Storage is Cohesity’s Primary Goal

________________________________________

Disclaimer: During Tech Field Day 12, my expenses (flight, hotel, transportation) were paid for by Gestalt IT. Cohesity provided each delegate with a gift bag, but I am under no obligation to write about any of the presented content nor am I compensated for such writing.


The Beginning of Cloud Natives

Over the last 8 years I have built my career around VMware. I remember the first time I installed VMware Server at one of my jobs, just to play around with, and imported my first virtual machine. I had no idea what I was doing or how any of it worked, but I felt there was a future for me in this technology. As I moved on to other companies, the VMware implementations just got larger and larger; from 3 hosts all the way up to well over 1,000.

Having spent time in these environments and with other users at local VMUG events and VMworld, I’ve seen that the skills required to be a VMware administrator are becoming commoditized. More people know about it than ever before, more blogs exist than ever before, and the necessity of meetings that revolve around VMware specifically seems to have run its course. While VMware remains integral to the datacenter today, there are skills we need to be developing and technologies we need to be exploring to ensure we’re not the ones being replaced when the next generation joins the workforce.

Enter Cloud Natives.

Cloud Natives was the idea of Dominic Rivera and myself as a means to bridge the gap between users and these new technologies. Cloud Natives looks to bring together the leaders in a technology space to present their solutions in one location. Rather than just letting vendors spew marketing material, we take a different approach: vendors are required to provide actual customers to present how their solutions have impacted their jobs and their businesses. No more outlandish claims, no more vanity numbers that don’t depict actual workloads, just real stories from real users.

We are kicking off 2016 with our first event on July 14th in Portland, OR. This event will be focused on one of the hottest technologies in the datacenter right now: Flash Storage. We’re bringing together the top players in the Flash Storage space, and you’ll hear their customers discuss the benefits and challenges they faced when moving away from legacy spinning-disk arrays and even newer hybrid arrays. Our goal is to educate our members one event at a time.

Cloud Natives looks to bring together all the datacenter technologies into one place. Whether it’s a focus on hypervisors, traditional or next-generation storage and infrastructure, cloud providers, DevOps and automation, or anything else that is hot in the datacenter, we will be that go-to resource in the Pacific Northwest. Each event is an opportunity to evaluate multiple vendors from the perspective of the customer. With no overlapping session schedules, you can walk away better informed and get any questions answered in one event.

I encourage everyone in the Portland area to register for this event at the Cloud Natives site. Our goal is to bring a sense of community back to Portland. We want to be a place to meet and network, to encourage, to mentor and to grow in our careers. No matter the stage in our career, we all have knowledge and experience that can help someone else and it’s time we all do our part to give back to the community.
