Why DellEMC DPS for Azure Data Protection?

Azure Backup is Microsoft's cloud-based service for backing up and restoring your data in Microsoft Azure. Azure Backup offers multiple ways to deploy the solution depending on what you want to back up. All options, whether for on-premises or cloud resources, can be used to back up data to a Recovery Services vault in Azure. Within the Recovery Services vault in the Azure portal, Microsoft provides a simple wizard to help determine which solution to deploy: you select either On-Premises or Azure, along with what you want to back up, and you are given instructions for the appropriate solution. There are four primary ways to use Azure Backup, each described below:

  • Azure Backup Agent
  • System Center Data Protection Manager
  • Azure Backup Server
  • Azure IaaS VM Backup

Azure Backup Agent

Azure Backup Agent is a lightweight agent that installs directly on a physical or virtual Windows Server, with no separate backup server required. The servers can be on-premises or in Azure. The agent can back up files, folders, and system state directly to an Azure Recovery Services vault up to three times per day. It is not application-aware, can only restore at the volume level, and does not support Linux.

System Center Data Protection Manager (DPM)

DPM provides a robust enterprise backup and recovery solution with the ability to back up workloads both on-premises and in Azure. A DPM server can be deployed on-premises or in Azure. DPM can perform application-aware backups of solutions such as SQL Server, SharePoint, and Exchange. It can also back up files and folders, system state, Bare Metal Recovery (BMR), and entire Hyper-V or VMware VMs. DPM can store data on disk, on tape, or in an Azure Recovery Services vault. DPM supports backups of Windows 7 or later client machines and Windows Server 2008 R2 SP1 or later servers. DPM cannot back up workloads such as Oracle or DB2, and support for Linux-based machines is limited to Microsoft's endorsed list.

Azure Backup Server

Microsoft Azure Backup Server (MABS) is a slightly scaled-down version of System Center DPM, intended for customers who do not already have System Center DPM. MABS does not require any System Center licenses, but it does require an always-active Azure subscription. The primary differences between MABS and System Center DPM are as follows:

    • Does not support tape backups
    • No centralized System Center administration
    • Unable to back up another MABS instance
    • Does not integrate with Azure Site Recovery Services

Azure IaaS VM Backup

All Azure VMs can be backed up directly to a Recovery Services vault with no agent installation or additional infrastructure required, including all disks attached to a VM. This works for both Windows and Linux VMs. However, you can back up only once per day and only to Azure; on-premises backup is not supported, and VMs can only be restored at the disk level. DellEMC's approach goes far beyond what the native Azure options offer. We have multiple options that can be used to protect workloads in Azure:

1. Cloud Snapshot Manager for Azure:

This is a SaaS-based offering that protects Azure managed VMs on a snapshot basis in Azure object storage. Licensing is per instance on a subscription basis, and no backup server or infrastructure is required. More importantly, a customer can have multiple Azure accounts and subscriptions, all protected and managed from a single CSM console. It literally takes 5-7 minutes to start your first backup.

2. NetWorker and Data Domain / Avamar and Data Domain:

NetWorker and Avamar are DellEMC's flagship backup software; either can be deployed in an Azure VM and configured to write to Data Domain Virtual Edition (DDVE) in Azure. NetWorker with Data Domain lets customers eliminate media servers by virtue of Client Direct, and since Data Domain has a single deduplication pool, the storage savings are enormous. With continued investment in Data Domain engineering, we can now deploy DDVE in Azure on object storage, which allows for greater storage savings than ever. And because we are leveraging NetWorker and Avamar, we get integration with virtually any application or database hosted in Azure. The main benefits of such a solution are:

    • No media servers required in Azure
    • Ability to protect data in de-duplicated format in Azure object storage
    • Wide application and database support
    • Enterprise-level backup performance and deduplication
    • Both Data Domain and Avamar are available in the Azure Marketplace


3. Data Domain and DDBEA:

Data Domain can protect workloads without integrating with any backup software, covering both SQL and NoSQL databases by leveraging DD Boost and BoostFS integration respectively. With BoostFS and help from DellEMC, virtually any customer application can be made to write backups to Data Domain with source-based deduplication as well. DDVE is available at up to 96 TB in a single instance in Azure; that 96 TB of capacity comes from Azure Blob storage, so it is extremely cost-efficient.

DellEMC DPS is a one-stop solution for enterprise data protection in and to Azure. Below are some marketplace links for DellEMC DPS solutions in Azure.

https://azuremarketplace.microsoft.com/en-us/marketplace/apps/dellemc.dell-emc-avamar-virtual-edition  — Avamar in Azure Marketplace

https://azuremarketplace.microsoft.com/en-us/marketplace/apps/dellemc.dell-emc-datadomain-virtual-edition-v4 — Data Domain in Azure Marketplace

https://azuremarketplace.microsoft.com/en-us/marketplace/apps/dellemc.dell-emc-networker-virtual-edition — NetWorker in Azure Marketplace


SaaS Data Protection for AWS – Cloud Snapshot Manager

A few weeks back, DellEMC released a newer version of Cloud Snapshot Manager (CSM). In case you are not aware, it is a Software-as-a-Service solution, fully operated by DellEMC, that gives our customers control, automation, and visibility over the protection of their AWS workloads in the cloud. Applications in AWS are agile and can scale very fast due to the nature of AWS services, so they need a different kind of data protection. Yes, we have NetWorker, DDVE, AVE, and CloudBoost in AWS, and each has its own use case. But AWS workloads are a bit different: we do not see the hypervisor (which, by the way, is a customized Xen) and we have limited abilities in AWS (courtesy of AWS), which makes data protection there a bit different. Below are some reasons why traditional data protection is not a complete solution for AWS workloads.

Issues with AWS DP

Taking snapshots of native EC2 (Elastic Compute Cloud, the VM service in AWS), EBS, RDS, etc. with CSM has many benefits, some of which are listed below:

  • Snapshots provide incremental-forever protection. CSM creates and retains the same snapshots AWS natively uses, only this time with added benefits, as we will see.
  • CSM snapshots cover EBS volumes, EC2 instances, and RDS databases, whereas native snapshots only support protection of EBS and RDS workloads.
  • Snapshots are incremental forever and are compressed before they are written to S3; since S3 is highly durable and globally available, the data sits in secure and durable storage.
  • Native snapshots cannot be restored in another region (without massive scripting), but with CSM it's a simple restore; this is also beneficial in case a complete AWS region goes down.
  • Since CSM leverages AWS APIs for snapshots and the CSM portal infrastructure is managed by DellEMC, customers do not have to manage a backup server, backup storage, etc.; as mentioned, this is a SaaS service. That is not the case with Veritas CloudPoint, which is a lot more difficult to manage. The same is true of Commvault, Veeam, and Rubrik: they all have to deploy a backup server in AWS before backups can start, whereas with CSM you can start backups in 4-5 minutes. That's the whole promise of CLOUD – AGILITY.
  • Restores from snapshots are much faster, and snapshots can be taken even if the RDS or EC2 machines are down.
  • The only way to protect RDS (the Relational Database Service in AWS, which hosts Oracle, MSSQL, PostgreSQL, Aurora, MariaDB, and MySQL) is via snapshots, which CSM handles promptly; and as of now, by the way, Rubrik does not support data protection for RDS in AWS at all.
  • CSM allows any retention period in AWS (even beyond 35 days) for EC2, EBS, RDS, etc., which is not possible with native AWS data protection.
  • CSM allows resources such as EC2, EBS, and RDS to be automatically protected via native AWS tags (tags are organization-specific metadata that can be added to cloud resources). Tags can help with reporting, compliance, show-back, charge-back, etc. CSM automatically assigns tagged resources to protection policies, achieving auto-scaling for data protection: set it and forget it.
  • CSM supports multi-tenancy and backup of multiple AWS accounts, multiple regions, and multiple availability zones from ONE CONSOLE, which as of today no other vendor offers.
  • With the new release of CSM, we have support for file-level recovery (FLR) from snapshots! Native AWS snapshots do not support FLR.
  • CSM has also added snapshot copy to another region, which enables customers to have a proper DR plan: if region X goes down, they need not worry, since their backup console is hosted by DellEMC (not in region X) and their snapshots are at the DR site (region Y).
  • CSM can also quiesce applications using the VSS framework to take application-consistent snapshots of Microsoft applications.
  • Normal scripting and native AWS snapshots do not provide audit logs, reporting, etc., whereas the HTML5 console of CSM does it all for any number of AWS accounts, regions, etc.
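The tag-driven, set-and-forget policy assignment described above can be sketched in a few lines of Python. This is a conceptual illustration only; the resource records and policy names are hypothetical and this is not the CSM (or AWS) API:

```python
# Conceptual sketch of tag-based protection-policy assignment, in the spirit
# of CSM's auto-assignment. Policy names and resources are made up.

# Each protection policy selects resources by a tag key/value pair.
POLICIES = {
    "gold-daily":    {"tag": ("backup", "gold")},
    "silver-weekly": {"tag": ("backup", "silver")},
}

def assign_policies(resources):
    """Map each tagged resource (EC2/EBS/RDS) to its protection policy."""
    assignments = {}
    for res in resources:
        for policy, rule in POLICIES.items():
            key, value = rule["tag"]
            if res.get("tags", {}).get(key) == value:
                assignments[res["id"]] = policy
    return assignments

resources = [
    {"id": "i-0abc",   "type": "EC2", "tags": {"backup": "gold"}},
    {"id": "vol-1def", "type": "EBS", "tags": {"backup": "silver"}},
    {"id": "db-2ghi",  "type": "RDS", "tags": {}},  # untagged: not protected
]

print(assign_policies(resources))
# {'i-0abc': 'gold-daily', 'vol-1def': 'silver-weekly'}
```

A new instance launched with the `backup: gold` tag would be picked up on the next policy evaluation with no manual action, which is the auto-scaling behavior the bullet describes.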

Just in case you want to try it yourself, take it for a spin or ask your customer to use the 30-day trial version at Cloud Snapshot Manager – Data Protection | Dell EMC US

Object Storage Demystified

I’ve seen a few definitions and watched a few presentations, and I’ve never really been able to easily and clearly articulate what object storage actually is! We all know it is an architecture that manages data as objects (rather than in blocks/sectors or a hierarchy), but I never really understood what an object was! Might just be me, but after a bit of reading I understood it a lot better once I understood the characteristics of an object, e.g.:

  • An object is independent of the application, i.e. it doesn’t need an OS or an application to make sense of the data. This means a user can access the content (e.g. a JPEG, video, or PDF) directly from a browser (over HTTP/HTTPS) rather than needing a specific application. This means no app servers are required, dramatically improving simplicity and performance (of course, you can still access object storage via an application if needed).
  • Object storage is globally accessible, i.e. there is no requirement to move or copy data (between locations, through firewalls, etc.); instead, data is accessible from anywhere.
  • Object storage is highly parallelized: there are no locks on write operations, so hundreds of thousands of users distributed around the world can all write simultaneously; none of the users need to know about one another, and their behavior will not impact others. This is very different from traditional NAS storage, where making data available in a secondary site means replicating it to another NAS platform that sits passive and cannot be written to directly.
  • Object storage is linearly scalable, i.e. there is no point at which we would expect performance to degrade; it can continue to grow, with no need to manage around limitations or constraints such as capacity or structure.
  • Finally, it’s worth noting that object platforms are extensible: capabilities can easily be extended without large implementation efforts. Examples in this context include the ability to enrich data with metadata and to add policies such as retention, protection, and restrictions on where data can live (compliance).

Object storage organizes data by addressing and manipulating discrete units of data called objects. Each object, like a file, is a stream of binary data. However, unlike files, objects are not organized in a hierarchy of folders and are not identified by a path within that hierarchy. Each object is associated with a key (a string assigned at creation), and you retrieve an object by using its key to query the object store. As a result, all objects are organized in a flat namespace (one object cannot be placed inside another object). This organization eliminates dependencies between objects while retaining the fundamental functionality of a storage system: storing and retrieving data. Its main benefit is a very high level of scalability.
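The flat key/value model can be illustrated with a toy in-memory object store. This is a sketch of the concept, not any vendor's implementation:

```python
# Toy object store: a flat namespace mapping string keys to binary data
# plus metadata. No folder hierarchy exists; "photos/2018/cat.jpg" is just
# an opaque key that happens to contain slashes.

class ObjectStore:
    def __init__(self):
        self._objects = {}  # key -> (data, metadata)

    def put(self, key, data, **metadata):
        self._objects[key] = (bytes(data), metadata)

    def get(self, key):
        data, _meta = self._objects[key]
        return data

    def head(self, key):
        """Return only the metadata, as an object HEAD request would."""
        return self._objects[key][1]

store = ObjectStore()
store.put("photos/2018/cat.jpg", b"\xff\xd8...", content_type="image/jpeg")
print(store.get("photos/2018/cat.jpg"))
print(store.head("photos/2018/cat.jpg"))  # {'content_type': 'image/jpeg'}
```

Note that retrieval needs only the key, never a physical location or a directory walk, which is exactly why the flat namespace scales so well.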

Both files and objects have metadata associated with the data they contain, but objects are characterized by their extended metadata. Each object is assigned a unique identifier, which allows a server or end user to retrieve the object without needing to know the physical location of the data. This approach is useful for automating and streamlining data storage in cloud computing environments. S3 and Swift are the most commonly used cloud object protocols. Amazon S3 (Simple Storage Service) is an online object storage web service offered by Amazon Web Services; its API was developed by AWS and is open to third-party developers. The Swift protocol is managed by the OpenStack Foundation, a non-profit corporate entity established in September 2012 to promote OpenStack, a free and open-source software platform for cloud computing; more than 500 companies have joined the project. The S3 protocol is the most commonly used object storage protocol, so if you’re using third-party applications that rely on object storage, it is the most compatible choice. Swift is somewhat less common than S3, but still a very popular cloud object protocol. Below are some major differences between S3 and Swift.

Unique features of S3:

  • Bucket-level controls for versioning and expiration that apply to all objects in the bucket
  • Copy Object – allows you to do server-side copies of objects
  • Anonymous Access – the ability to set PUBLIC access on an object and serve it via HTTP/HTTPS without authentication
  • S3 stores its objects in buckets

Unique features of SWIFT

The Swift API allows unsized object creation: Swift is the only one of the two protocols where you can use chunked transfer encoding to upload an object whose size is not known beforehand. S3 requires multiple requests (multipart upload) to achieve this. Swift stores its objects in “containers”.

Authentication (S3 vs SWIFT)

S3 – Amazon S3 uses an Authorization header that must be present in all requests to identify the user (Access Key ID) and provide a signature for the request. An Amazon access key ID is 20 characters long. Both HTTP and HTTPS protocols are supported.
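A minimal sketch of how that Authorization header is built, using the legacy S3 Signature Version 2 scheme (current S3 deployments use the more involved Signature Version 4, and real requests need the full header/resource canonicalization rules; the keys below are dummy placeholders):

```python
# Legacy S3 Signature Version 2 sketch: the signature is the Base64-encoded
# HMAC-SHA1 of a canonical "string to sign", keyed with the secret key.
# The credentials below are dummy example values, not real keys.
import base64
import hashlib
import hmac

ACCESS_KEY_ID = "AKIAIOSFODNN7EXAMPLE"   # 20 characters, as noted above
SECRET_KEY = b"wJalrXUtnFiEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"

def sign_request(method, date, bucket, key, content_type=""):
    # String to sign: method, Content-MD5 (empty here), Content-Type,
    # Date, and the canonicalized resource path.
    string_to_sign = "\n".join([method, "", content_type, date,
                                f"/{bucket}/{key}"])
    digest = hmac.new(SECRET_KEY, string_to_sign.encode(), hashlib.sha1)
    signature = base64.b64encode(digest.digest()).decode()
    return f"AWS {ACCESS_KEY_ID}:{signature}"

header = sign_request("GET", "Tue, 27 Mar 2007 19:36:42 +0000",
                      "johnsmith", "photos/puppy.jpg")
print(header)  # "AWS <access key id>:<signature>"
```

The server repeats the same computation with its copy of the secret key; if the signatures match, the request is authenticated without the secret ever crossing the wire.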

SWIFT – Authentication in Swift is quite flexible. It is done through a separate mechanism creating a “token” that can be passed around to authenticate requests. Both HTTP and HTTPS protocols are supported.

Retention and AUDIT (S3 vs SWIFT)

Retention periods are supported on all object interfaces including S3 and Swift. The controller API provides the ability to audit the use of the S3 and Swift object interfaces.

Large Objects (S3 vs SWIFT)

S3 Multipart Upload allows you to upload a single object as a set of parts; after all parts are uploaded, the data is presented as a single object. An OpenStack Swift large object comprises two types of objects: segment objects that store the object content, and a manifest object that links the segment objects into one logical large object. When you download a manifest object, the contents of the segment objects are concatenated and returned in the response body of the request.
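The segment-plus-manifest mechanics can be sketched without any cloud at all. This toy version stores segments as ordinary objects and the manifest as a list of their keys; it is illustrative only, not Swift's actual format:

```python
# Toy sketch of Swift-style large objects: segment objects hold the data,
# a manifest object lists the segment keys, and a "download" of the
# manifest concatenates the segments in order.

objects = {}  # flat key -> stored value

def upload_large_object(container, name, segments):
    keys = []
    for i, segment in enumerate(segments):
        key = f"{container}/{name}/{i:08d}"  # ordered segment keys
        objects[key] = segment
        keys.append(key)
    objects[f"{container}/{name}"] = keys    # the manifest object

def download(container, name):
    manifest = objects[f"{container}/{name}"]
    return b"".join(objects[key] for key in manifest)

upload_large_object("videos", "demo.mp4",
                    [b"part-one|", b"part-two|", b"part-three"])
print(download("videos", "demo.mp4"))
# b'part-one|part-two|part-three'
```

Because each segment is an independent object, segments can be uploaded in parallel and the manifest written last, which is the same reason S3 multipart uploads scale well.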

So which object storage API should you use? Well, both have their benefits for specific use cases. DellEMC ECS is an on-premises object storage solution that supports multiple object protocols (S3, Swift, CAS, HTTPS, HDFS, NFSv3, etc.) in a single machine. It is built on servers running ECS software with their DAS storage, and it is also available in a software-only format that can be deployed on your own servers.


There are many benefits of using ECS as your own object storage.

Buying a Software Defined Storage Solution

I am back! And not with a data protection write-up, because for the last few weeks I have not been able to put away the thought of software-defined storage. So, once again, here is a backup guy talking about storage. I was curious about the current state of software-defined storage in the industry and decided to get my hands dirty. I’ve done some research and reading on SDS over the course of the last month (well, actually more than that), and this is the crux of what I’ve learned from my teammates, customers, and the people with whom I work.

What is SDS?

The term is used very broadly to describe many different products with various features and capabilities. It describes the trend toward data storage becoming independent of the underlying hardware – so, simply put, no more fancy SAN boxes. SDS is data storage software that provides policy-based provisioning and management of data storage independent of the underlying hardware, virtualization, or OS. In other words, it is hardware-agnostic, which is good news considering the prices of servers and DAS.

I first looked at IDC and Gartner. IDC defines software-defined storage solutions as those that deploy controller software (the storage software platform) decoupled from the underlying hardware, running on industry-standard hardware, and delivering a complete set of enterprise storage services. Gartner defines SDS in two separate parts, infrastructure and management:

  • Infrastructure SDS uses commodity hardware such as x86 servers and JBOD, and offers features through software orchestration. It creates and provides data center services to replace or augment traditional storage arrays.
  • Management SDS controls hardware, but also controls legacy storage products to integrate them into an SDS environment. It interacts with existing storage systems to deliver greater agility of storage services.

Keep Calm and use SDS

The general Attributes of SDS

There are many characteristics of SDS; in fact, each vendor adds a new dimension to the offering, only making it better in the long run. I have put some key characteristics below, which you should take a look at:

  • Hardware and Software Abstraction – SDS always includes abstraction of logical storage services and capabilities from the underlying physical storage systems. It does not matter to the SDS software whether a server has SAS, SATA, SSD, or PCIe-card storage; all are welcome.
  • Storage Virtualization – External controller-based arrays include storage virtualization to manage usage and access across the drives within their own pools; other products exist independently to manage across arrays and/or directly attached server storage.
  • Automation and Orchestration – SDS includes automation with policy-driven storage provisioning, where service-level agreements (SLAs) generally replace the precise details of the actual hardware.
  • Centralized Management – SDS includes management capabilities with a centralized point of management.
  • Enterprise Storage Features – SDS includes support for all the features desired in an enterprise storage offering, such as compression, deduplication, replication, snapshots, data tiering, and thin provisioning.
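The policy-driven provisioning attribute can be illustrated with a small sketch in which an SLA policy, rather than hardware details, decides what a volume gets. The policy names and feature sets below are made up for illustration and are not any product's API:

```python
# Sketch of policy-driven provisioning: the admin asks for a volume under
# an SLA policy; the policy (not the hardware) determines the features.
# Policy names and feature sets are illustrative only.

POLICIES = {
    "gold":   {"replicas": 3, "thin": True, "dedupe": True,  "snapshots": "hourly"},
    "bronze": {"replicas": 1, "thin": True, "dedupe": False, "snapshots": "daily"},
}

def provision_volume(name, size_gb, policy):
    """Provision a volume; all feature details come from the SLA policy."""
    features = POLICIES[policy]
    return {"name": name, "size_gb": size_gb, "policy": policy, **features}

vol = provision_volume("app-db-01", 500, "gold")
print(vol["replicas"], vol["snapshots"])
# 3 hourly
```

The caller never mentions controllers, RAID levels, or disk types; that abstraction is exactly what the SLA-replaces-hardware-details attribute describes.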

When and How to use SDS

There are a host of considerations when developing a software-defined storage strategy. Below is a list of some of the important items to consider during the process.

  • Storage Management – You need to know how storage actually works: not just IOPS and raw performance, but details such as queue depth in the OS and how different applications see the same storage differently. You will have to test different settings to get the necessary performance for your environment. Not all applications require read cache or compression, though some do; that is just one example of how detailed this can get. The more detailed you go, the better the SLA you deliver as a storage admin. Most of the time these nuances are overlooked, which costs dearly in any environment. So while designing or purchasing even ordinary storage, understand your application, because you buy storage not for its features but for the application that runs your business. Massive research is required to choose from the many options (vSAN, ScaleIO, Ceph, etc.); all you need to remember is that you are doing this for your application – not to save money or get fancy IOPS, but to deliver the needed SLA.
  • Cloud and Microservices Integration – Probably today you are not using cloud, and you may not require Jenkins for your application deployment, but it is only a matter of time before all of this is required. IT is moving faster than light these days (not really!), and the growth of data has created new avenues for where it can all be used. Data is the new oil. An intelligent SDS platform can tier data across cloud platforms or even onto cheaper storage. Does your software-defined storage (in the case of object, for example) support S3, REST APIs, Swift, HDFS, CAS, etc., all at once? The SDS solution should be designed to be future-ready. As an outline: if you are using an SDS solution for block storage, look for Docker, virtualization, and COTS integration, and understand your vendor’s roadmap to see if it aligns with your present and future requirements.
  • Expansion and Scalability – So how far can your SDS solution really scale? 10 nodes? 100 nodes? 1000 nodes? The answer lies in your requirements; very few organizations need 1000+ nodes. But when we talk about scalability, we also look at scaling without a drop in performance: a lot of vendors may be able to scale, but if performance suffers, we are back to square one. There are different parameters for judging performance (the easy way is to just quote IOPS and throughput), ranging from SLAs and service catalogs to stress testing. While procuring an SDS solution, remember that performance should only increase as the system grows (in capacity, nodes, controllers, etc.). The other important thing to consider is how many sites it can support: a good SDS solution (block, object, etc.) should support multiple sites across the globe, which means it should be able to replicate data selectively across sites and be manageable from a single console.
  • Architecture Matters – How does your SDS solution transfer data from host to device? How does an actual read or write occur, how does it differ between solutions, and what suits you? You will have to go into the details and understand them. With SDS, architecture and details matter much more, because you no longer have the luxury of the old SAN box; what you now have are servers with disks in them. How do you get performance? By knowing the software you procured. You need to understand networking, the OS, the hypervisor, and disk and RAM fundamentals. Look for solutions that are stateless and do not depend on specific processes completing before moving forward, thus averting bottlenecks. Have multiple detailed discussions with your vendor: the more you learn now, the fewer service issues you will have later.
  • Test, Test, and Test Again Before Go-Live – Understanding your application is one thing; knowing how it will behave in your environment is another. So before you cut the red ribbon, TEST. Make sure you have left no stone unturned. You probably cannot do this for every application in your setup, but Tier-1 applications deserve it. Don’t be shy; you will thank yourself later. Another important point: understand how data will be migrated from the old legacy SAN array to the SDS solution, what the implications are, whether there will be downtime, and if so, how much and how to minimize it. One of the original purposes of SDS was to be hardware-agnostic, so there should be no reason to remove and replace all of your existing hardware. A good SDS solution should protect your existing hardware investment rather than requiring a forklift upgrade. A new SDS implementation should complement your environment and protect your investment in existing servers, storage, networking, management tools, and employee skill sets.
  • Functionalities and Features – So does your SDS solution perform deduplication? Compression? Encryption? Erasure coding? Replication? Let’s step back a bit: how many of these features do your applications actually need? Probably encryption, compression, and replication for performance block storage, or erasure coding on object storage. Do not chase functionality you will never use; think of these as side missions to your actual requirement. Understand what you really need – performance? scale? availability? IOPS? – and then decide.

There are many use cases and benefits of SDS (file, block, and object), such as expansion, automation, cloud integration, reduction in operational and management expenses, scalability, availability and durability, the ability to leverage COTS, and so on. A few weeks back I wrote a comparison between two major SDS block vendors; you can check it out here. There are many vendors, from DellEMC ScaleIO and VMware vSAN to DellEMC ECS, and so on. The one key takeaway I want you to have: learn your environment, understand your setup in detail, and then choose what suits you best!