
XIOLOGIX BACKUP AND RECOVERY FOR “JUST IN CASE”

Our services go well beyond traditional disaster recovery. We help companies build resilience into every layer of their business by anticipating the potential impact of a wide range of threats. Xiologix will guide you in ensuring that you have the appropriate backup and recovery solution for your business needs.

Backup and Recovery


Veeam – Technical Advantages for Enterprise Customers

An overview of the disaster recovery and data recovery solutions Veeam offers for VMware and Hyper-V environments, along with Veeam's unique feature set.

Why Data Domain

Deduplication solutions are not all created equal, so why choose Data Domain?

All about backup: Why businesses choose the cloud

There's no one reason why companies move to the cloud. You hear many reasons: cost is often mentioned, although a bit less these days. You often hear security cited, as providers have more to spend on technical excellence. And you certainly hear flexibility mentioned; it's probably the main reason at present.

What's not generally spoken about is the ease of disaster recovery, yet this is something that touches on all the elements mentioned above. By replicating virtual machines into a private cloud, you're reducing costs, certainly compared with the cost of installing physical hardware. It's a flexible solution, as it can scale to cope with unexpected demands. And finally, by using a variety of security tools on the network, you can ensure that your data is safe from prying eyes.

Many businesses have discovered these advantages. According to the ZDNet cloud priorities survey, a third of organisations said backup and recovery was the most important infrastructure as a service (IaaS) function to them; it was the leading choice.

But that's not the whole story; cloud on its own doesn't cut it. Smart companies use a hybrid approach to maximise performance and minimise costs. In this scenario, mission-critical data is backed up on-premises and then, automatically, transferred to the cloud over time. A hybrid product like Microsoft's StorSimple can support this approach, transferring data to the most appropriate medium. In this case, the decision-making is taken away from the CIO, and the solution is both cost-effective and efficient. It also means there is no need to calculate how much capacity is needed – that in itself gets rid of a major headache for the CIO.

Recent research supports the rapid uptake of cloud backup. An April 2015 survey by Forrester found that 56 percent of companies were backing up tier-2 applications at least once a day, a rate that has doubled in the past three years.

In the end it's the hybrid approach that is going to bring in dividends: it's not a case of all public or all private but the best fit. Forrester points out that companies with large databases which require frequent backups should stick with on-premises solutions, while medium-sized tier-2 and tier-3 datasets could all be moved to the public cloud.
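The hybrid approach described above can be reduced to a simple aging policy: keep recent backups on premises for fast restores and move older copies to cheaper cloud capacity. The Python sketch below illustrates only that general idea; the age_days threshold, the BackupCopy fields and the in-place location change are illustrative assumptions, not how StorSimple or any particular product works.

from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class BackupCopy:
    name: str
    created: datetime
    location: str  # "on_premises" or "cloud"

def tier_backups(copies, age_days=30):
    """Keep recent backups on premises; mark older ones for cloud storage.

    Mirrors the hybrid pattern described above: recent, mission-critical
    copies stay local for fast restores, older copies age out to the cloud.
    """
    cutoff = datetime.now() - timedelta(days=age_days)
    for copy in copies:
        if copy.location == "on_premises" and copy.created < cutoff:
            copy.location = "cloud"  # in practice: trigger the actual transfer here
    return copies

if __name__ == "__main__":
    copies = [
        BackupCopy("sales-db-full", datetime.now() - timedelta(days=45), "on_premises"),
        BackupCopy("sales-db-incr", datetime.now() - timedelta(days=2), "on_premises"),
    ]
    for c in tier_backups(copies):
        print(c.name, "->", c.location)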

Don’t let an MBA near your disaster recovery plan

When you work at an entity that collects sensitive, real-time data and is responsible for keeping it up-to-date and available to certain public institutions, you'd think a solid backup and disaster recovery plan would be high on the list of organizational priorities. In theory, yes — but all it takes is one hotshot manager to break what didn't need to be fixed in the first place.

With this corporate body, branch offices were located in cities throughout several states, and at one time each office maintained its own semi-autonomous IT infrastructure. The sites had their own redundant file servers, database servers, and authentication servers, as well as on-premises IT staff.

One day a new IT director, "Julius," showed up. He was an MBA who had saved a string of companies lots of money by virtualizing their server infrastructures. While he had lots of experience working with relatively small companies, his experience with large enterprises spread across wide geographic areas was limited.

Virtualization is of course a great way to get more efficiency from your servers and add a level of flexibility that was never available before, but unfortunately Julius ignored some fundamentals of business continuity in his infrastructure design. Having all of your eggs in one basket can make them a lot easier to carry — but you know how that cliché works out.

Part of the problem or part of the solution?

In his first week in the new role, Julius held a meeting with all of the IT managers and laid out his grand vision for the new server infrastructure. Instead of each site having its own small server farm, they would all be centralized in the business office's data center.

As the meeting went on, manager reactions began to follow a pattern: The greater their technical expertise, the greater their discomfort with the changes. The biggest concerns brought up: Will the virtual servers have sufficient performance to keep up with the individual sites' needs? Is there enough bandwidth to serve all the satellite offices? And what happens if the central office's data center is unavailable?

Julius brushed the questions aside with platitudes and jargon: "This is a great opportunity to synergize our infrastructure and reap the benefits of increased operational efficiencies." Finally, with a note of frustration in his voice, he stopped the discussion and simply warned, "This is happening, so are you going to be part of the problem or part of the solution?"

Despite the managers' concerns, Operation Egg Basket proceeded. Several beefy servers were purchased and set up with a common virtualization platform. One at a time, the individual sites' servers were virtualized, except for the domain controllers, and the old equipment was decommissioned. There were some performance issues, but they were addressed by tweaking the hypervisor. There were also bandwidth issues, but QoS, traffic filtering, and bandwidth upgrades took care of them. After about a year, the job was done, and Julius patted himself on the back for another successful virtualization rollout. For months everything seemed to work great — until it didn't.

First the disaster, then the recovery

Come spring of that year, a violent thunderstorm rolled through and a tornado touched down a mile away from the central business office.
The electrical and telephone poles were flattened like grass under a lawn mower, taking out all related service in the area.

The data center had a giant backup generator, so the power loss was no big deal — until someone realized that the diesel tank was almost empty. That was easily rectified by some urgent phone calls, although this was a significant detail to have overlooked.

However, the real problem was the loss of the fiber optic link to the data center. All network traffic in the company was configured to route through the central office, so the satellite offices lost access to needed services. They couldn't even get out to the Internet because the proxy server was at the central office. Most of the VoIP telephones in the enterprise were down, as was voicemail: no file servers, no application servers, no databases, nothing.

For the better part of two days, while the phone company scrambled to get the fiber optic lines back up, the whole company remained down. Workers still had to report to their offices because lots of manual assignments needed to be done, but the work was now much harder and slower to do. Very likely, a ton of work simply went undocumented. Finally, the phone company reestablished the lines, and everything started functioning again.

A silver lining

After this incident, Julius saw the writing on the wall and graciously departed for another position at another company — probably peddling his specialty again, but hopefully a bit wiser.

A new manager who specialized in disaster recovery was brought in, and the infrastructure was overhauled once again, this time to ensure redundancy and resilience by eliminating single points of failure. A hot backup data center was brought online in case the primary went away, and the most critical systems were placed back in the individual satellite offices.

Ultimately, there was an upside to the fiasco. We ended up with a highly resilient infrastructure that properly utilized virtualization while maintaining the other fundamentals of business continuity. Namely: don't keep all your eggs in one basket!

How to make cloud backup a no-brainer

The 3-2-1 rule is simple and easy to remember for your backup strategy: You want three copies of your data, in two different formats, with one of those copies offsite. Having three copies ensures redundancy (because they should be in different locations and formats). Using different formats goes back to the tape/disk/optical mindset of the past.

Many organizations have moved away from tape as primary backup to hard disk, using tape as a secondary. But cloud backups are becoming the new normal for the 3-2-1 rule, where you back up first to disk and then to the cloud.

The idea of backing up with a cloud storage repository as one of those three copies is beneficial in theory, providing an offsite copy of your data should you need it. And all without tapes or drives sent to an offsite facility. Instead, the backup goes right over your Internet connection to a cloud vendor's storage.

But the reality is not always so rosy. The cost of putting your backups into a hosted cloud repository could be prohibitive. You may be dealing with limited bandwidth, and even if you can increase it to cover your storage volume, doing so comes at more cost. And if you host your infrastructure in the cloud, don't think you're backed up. You're not. So you really need to ensure you have redundancy of your data, no matter where it resides.

There are a ton of cloud backup options out there: some offer simple cloud storage that you can just dump raw data into, and some provide the kind of backup and recovery capabilities familiar in the data center. Don't make the mistake of thinking all cloud backup offerings are the same. You have to do your due diligence and find a solution that works for you.

You may in fact be looking for two different solutions: one to handle the backup and the other to hold the backup. For example, you may want to use Microsoft Azure as your storage provider with a front-end tool like Veeam Cloud Connect to manage the backup of VMs.

Veeam Cloud Connect offers an SSL backup (deduplicated and compressed) to a partner service provider (Azure, for example) and optionally includes WAN acceleration to help offset the bandwidth issue. It also offers "forever incremental backups" (deltas) that allow less data to travel to the cloud. You can also do traditional grandfather-father-son (GFS) backup. End-to-end encryption of the data ensures security in transit and in storage. Commvault's Unified Cloud Data Protection offering is similar. Then there's Nasuni, whose hybrid cloud storage offering combines local storage controllers and cloud storage.

No matter which provider's approach you favor, it's important to have a clear understanding of pricing (including storage costs and fees for storing and retrieving your data), the features offered, and overall data life cycle management (if you choose to index the data for easy retrieval).

With that information in place, backing up to the cloud as part of your 3-2-1 strategy is as easy as 1-2-3.
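To make the 3-2-1 rule concrete, here is a minimal Python sketch that checks a set of backup copies against its three conditions: at least three copies, on at least two different media, with at least one offsite. The Copy data model and media names are assumptions made for illustration, not any vendor's schema.

from dataclasses import dataclass

@dataclass
class Copy:
    media: str     # e.g. "disk", "tape", "cloud-object"
    offsite: bool  # True if the copy lives outside the primary site

def satisfies_3_2_1(copies):
    """Return True if the copies meet the 3-2-1 rule:
    3+ copies, on 2+ different media, with 1+ copy offsite."""
    enough_copies = len(copies) >= 3
    enough_media = len({c.media for c in copies}) >= 2
    has_offsite = any(c.offsite for c in copies)
    return enough_copies and enough_media and has_offsite

# Example: primary data on disk, a local disk backup, and a cloud copy.
print(satisfies_3_2_1([
    Copy("disk", False),
    Copy("disk", False),
    Copy("cloud-object", True),
]))  # True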

Better backup and recovery at Syracuse University

This article first appeared in the Spring 2016 issue of OnWindows.

Located in the US state of New York, Syracuse University (SU) comprises some 22,000 students, faculty and staff. When the university decided to consolidate around 20 small data centres and build a new, modern data centre with a private cloud infrastructure, challenges around providing enterprise-scale backup and disaster recovery remained. The university required a unified virtual machine (VM) backup and recovery solution that would enable 24/7 availability among its thousands of VMs and scale to support its data growth.

Previously, the team at SU was only able to back up its most critical VMs, and only once or twice a week. Furthermore, one in every ten backup jobs was unsuccessful. SU's 22,000 students, faculty and staff require around-the-clock access to files, databases and applications. The university's modern data centre also delivers infrastructure-as-a-service (IaaS) to business and academic units, and has specific availability requirements.

"Our IaaS offering focuses on providing networking and computing resources," said Josh Slade, virtual and backup environment lead at Syracuse University. "Schools, colleges and business units drop VMs into our cloud with the expectation that we will provide data protection. We needed a backup solution that leveraged changed block tracking to reduce the amount of data undergoing backup, supported Microsoft Volume Shadow Copy Service (VSS) to ensure successful VM recovery and was scalable to grow with us."

SU deployed the Veeam Backup & Replication solution and, as a result, data centre administrators are now seeing a range of benefits. As well as the user interface, the solution provides a Windows PowerShell extension that enables SU to create a set of scripts to automate backup and recovery of its 2,300 VMs within its VMware vSphere environment. A recent disaster recovery test saw 1,500 VMs successfully recovered within 24 hours, leading SU to estimate that an entire data centre could be restored in a similar timeframe.

SU uses nightly incremental Veeam backups and sends them to virtual repositories in a secondary data centre. Currently, the virtual repositories hold two petabytes of data and 30 retention points. As part of the disaster recovery plan, SU has created a repository of custom scripts to restore every VM from the Veeam repositories to waiting storage and hosts in the secondary data centre. Veeam uses application-aware image processing to create transactionally consistent backups of VMs running VSS-aware applications such as Microsoft Active Directory, Exchange, SharePoint and SQL Server for SU.

Thanks to Veeam's technology, SU has the scalability and flexibility to support explosive data growth, and is benefitting from 98% faster backups and high availability for DR.

"We've used Veeam for so long that we assume backup and recovery will be successful, and it is," said Peter Pizzimenti, IT analyst at SU. "There's nothing else on the market that we were able to find that could work in an infrastructure our size with the approach we have taken except Veeam."
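Changed block tracking, which Slade cites above, is what keeps incremental backups small: only blocks that differ from the previous backup travel to the repository. The sketch below illustrates the general idea by comparing block hashes; it is a conceptual example with an arbitrary block size, not how Veeam or vSphere actually implement CBT.

import hashlib

BLOCK_SIZE = 4096  # bytes per block, chosen arbitrarily for the example

def block_hashes(data: bytes):
    """Hash each fixed-size block of a disk image (or file)."""
    return [hashlib.sha256(data[i:i + BLOCK_SIZE]).hexdigest()
            for i in range(0, len(data), BLOCK_SIZE)]

def changed_blocks(previous: bytes, current: bytes):
    """Return indices of blocks that differ from the previous backup.
    Only these blocks need to travel to the backup repository."""
    prev, curr = block_hashes(previous), block_hashes(current)
    return [i for i, h in enumerate(curr) if i >= len(prev) or h != prev[i]]

if __name__ == "__main__":
    old = b"A" * BLOCK_SIZE * 4
    new = b"A" * BLOCK_SIZE * 2 + b"B" * BLOCK_SIZE + b"A" * BLOCK_SIZE
    print(changed_blocks(old, new))  # [2] -- only one block changed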

Two Challenges Facing Data Backup/Disaster Recovery Market

Backup and disaster recovery services have grown into a lucrative business space over the past 15 years, but the combination of increased access to data and a 24/7 interconnected work world has complicated the mission of solution providers.

"We want to build a world without technology interruptions for your businesses, for your IT, and for customers," said Boston-based Zerto founder and CEO Ziv Kedem. Kedem said that mission statement sounds simple, but "this is a difficult problem, and it actually is becoming more difficult as IT is changing."

One factor challenging these services is the sheer increase in the volume of data. "Every business understands the value of data, the value of applications," Kedem said. However, that growth in data and applications greatly increases and complicates the workload for even simple backup and recovery plans.

Secondly, Kedem said, "You expect more from your applications." "Customers expect that [applications] need to be always available, always responsive, always on, always updated," he said. That leaves little room for error or downtime, especially when a company is responding to a major disaster.

8 ingredients of an effective disaster recovery plan

Earlier this month, a monkey caused a nationwide power outage in Kenya. Millions of homes and businesses were without electricity. Which just goes to show that "not all disasters come in the form of major storms with names and categories," says Bob Davis, CMO, Atlantis Computing.

"Electrical fires, broken water pipes, failed air conditioning units [and rogue monkeys] can cause just as much damage," he says. And while "business executives might think they're safe based on their geographic location," it's important to remember that "day-to-day threats can destroy data [and] ruin a business," too, he says. That's why it is critical for all businesses to have a disaster recovery (DR) plan.

However, not all DR plans are created equal. To ensure that your systems, data and personnel are protected and your business can continue to operate in the event of an actual emergency or disaster, use the following guidelines to create a disaster plan that will help you quickly recover.

1. Inventory hardware and software. Your DR plan should include "a complete inventory of [hardware and] applications in priority order," says Oussama El-Hilali, vice president of Products for Arcserve. "Each application [and piece of hardware] should have the vendor technical support contract information and contact numbers," so you can get back up and running quickly.

2. Define your tolerance for downtime and data loss. "This is the starting point of your planning," says Tim Singleton, president, Strive Technology Consulting. "If you are a plumber, you can probably be in business without servers or technology [for] a while. [But] if you are eBay, you can't be down for more than seconds. Figuring out where you are on this spectrum will determine what type of solution you will need to recover from a disaster."

"Evaluate what an acceptable recovery point objective (RPO) and recovery time objective (RTO) is for each set of applications," advises David Grimes, CTO, NaviSite. "In an ideal situation, every application would have an RPO and RTO of just a few milliseconds, but that's often neither technically nor financially feasible. By properly identifying these two metrics, businesses can prioritize what is needed to successfully survive a disaster, ensure a cost-effective level of disaster recovery and lower the potential risk of miscalculating what they're able to recover during a disaster."

"When putting your disaster recovery plan in writing, divide your applications into three tiers," says Robert DiLossi, senior director, Testing & Crisis Management, Sungard Availability Services. "Tier 1 should include the applications you need immediately. These are the mission-critical apps you can't do business without. Tier 2 covers applications you need within eight to 10 hours, even up to 24 hours. They're essential, but you don't need them right away. Tier 3 applications can be comfortably recovered within a few days," he explains. "Defining which applications are most important will aid the speed and success of the recovery. But most important is testing the plan at least twice per year," he says. "The tiers might change based on the results, which could reveal unknown gaps to fill before a true disaster." (A minimal sketch of how such tiers and their RTO/RPO targets can be recorded and checked against test results appears after this article.)

3. Lay out who is responsible for what – and identify backup personnel. "All disaster recovery plans should clearly define the key roles, responsibilities and parties involved during a DR event," says Will Chin, director of cloud services, Computer Design & Integration.
"Among these responsibilities must be the decision to declare a disaster. Having clearly identified roles will garner a universal understanding of what tasks need to be completed and who is [responsible for what]. This is especially critical when working with third-party vendors or providers. All parties involved need to be aware of each other's responsibilities in order to ensure the DR process operates as efficiently as possible."

"Have plans for your entire staff, from C-level executives all the way down, and make sure they understand the process" and what's expected of them, says Neely Loring, president, Matrix, which provides cloud-based solutions, including Disaster-Recovery-as-a-Service. "This gets everyone back on their feet quicker."

"Protocols for a disaster recovery (DR) plan must include who and how to contact the appropriate individuals on the DR team, and in what order, to get systems up and running as soon as possible," adds Kevin Westenkirchner, vice president, operations, Thru. "It is critical to have a list of the DR personnel with the details of their position, responsibilities [and emergency contact information]."

"One final consideration is to have a succession plan in place with trained back-up employees in case a key staff member is on vacation or in a place where they cannot do their part [or leaves the company]," says Brian Ferguson, product marketing manager, Digium.

4. Create a communication plan. "Perhaps one of the more overlooked components of a disaster recovery plan is having a good communication plan," says Mike Genardi, solutions architect, Computer Design & Integration. "In the event a disaster strikes, how are you going to communicate with your employees? Do your employees know how to access the systems they need to perform their job duties during a DR event?

"Many times the main communication platforms (phone and email) may be affected and alternative methods of contacting your employees will be needed," he explains. "A good communication plan will account for initial communications at the onset of a disaster as well as ongoing updates to keep staff informed throughout the event."

"Communication is critical when responding to and recovering from any emergency, crisis event or disaster," says Scott D. Smith, chief commercial officer at ModusLink. So having "a clear communications strategy is essential. Effective and reliable methods for communicating with employees, vendors, suppliers and customers in a timely manner are necessary beyond initial notification of an emergency. Having a written process in place to reference ensures efficient action post-disaster and alignment between organizations, employees and partners."

"A disaster recovery plan should [also] include a statement that can be published on your company's website and social media platforms in the event of an emergency," adds Robert Gibbons, CTO, Datto, a data protection platform. And be prepared to "give your customers timely status updates on what they can expect from your business and when. If your customers understand that you are aware of the situation, you are adequately prepared and working to take care of it in a timely manner, they will feel much better."

5. Let employees know where to go in case of emergency – and have a backup worksite.
"Many firms think that the DR plan is just for their technology systems, but they fail to realize that people (i.e., their employees) also need to have a plan in place," says Ahsun Saleem, president, Simplegrid Technology. "Have an alternate site in mind if your primary office is not available. Ensure that your staff knows where to go, where to sit and how to access the systems from that site. Provide a map to the alternate site and make sure you have seating assignments there."

"In the event of a disaster, your team will need an operational place to work, with the right equipment, space and communications," says DiLossi. "That might mean telework and other alternative strategies need to be devised in case a regional disaster causes power outages across large geographies. Be sure to note any compliance requirements and contract dedicated workspace where staff and data can remain private. [And] don't contract 50 seats if you'll really need 200 to truly meet your recovery requirements."

6. Make sure your service-level agreements (SLAs) include disasters/emergencies. "If you have outsourced your technology to an outsourced IT firm, or store your systems in a data center/co-location facility, make sure you have a binding agreement with them that defines their level of service in the event of a disaster," says Saleem. "This [will help] ensure that they start working on resolving your problem within [a specified time]. Some agreements can even discuss the timeframe for getting systems back up."

7. Include how to handle sensitive information. "Defining operational and technical procedures to ensure the protection of…sensitive information is a critical component of a DR plan," says Eric Dieterich, partner, Sunera. "These procedures should address how sensitive information will be maintained [and accessed] when a DR plan has been activated."

8. Test your plan regularly. "If you're not testing your DR process, you don't have one," says Singleton. "Your backup hardware may have failed, your supply chain may rely on someone incapable of dealing with disaster, your internet connection may be too slow to restore your data in the expected amount of time, the DR key employee may have changed [his] cell phone number. There are a lot of things that may break a perfect plan. The only way to find them is to test it when you can afford to fail."

"Your plan must include details on how your DR environment will be tested, including the method and frequency of tests," says Dave LeClair, vice president, product marketing, Unitrends, a cloud-based IT disaster recovery and continuity solution provider. "Our recent continuity survey of 900 IT admins discovered less than 40 percent of companies test their DR more frequently than once per year, and 36 percent don't test at all.

"Infrequent testing will likely result in DR environments that do not perform as required during a disaster," he explains. "Your plan should define recovery time objective (RTO) and recovery point objective (RPO) goals per workload and validate that they can be met. Fortunately, recovery assurance technology now exists that is able to automate DR testing without disrupting production systems and can certify RTO and RPO targets are being met for 100 percent confidence in disaster recovery even for complex n-tier applications."

Also keep in mind that "when it comes to disaster recovery, you're only as good as your last test," says Loring. "A testing schedule is the single most important part of any DR plan.
Compare your defined RTO and RPO metrics against tested results to determine the efficacy of your plan. The more comprehensive the testing, the more successful a company will be in getting back on their feet," he states. "We test our generators weekly to ensure their function. Always remember that failing a test is not a bad thing. It is better to find these problems early than to find them during a crisis. Decide what needs to be modified and test until you're successful."

And don't forget about testing your employees. "The employees that are involved need to be well versed in the plan and be able to perform every task they are assigned to without issue," says Ferguson. "Running simulated disasters and drills help ensure that your staff can execute the plan when an actual event occurs."

This story, "8 ingredients of an effective disaster recovery plan," was originally published by CIO.
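As referenced in ingredient 2 above, one way to tie application tiers, RPO/RTO targets and test results together is to record them in a simple structure and compare measured recovery numbers against the objectives. The Python sketch below is illustrative only; the application names, tier thresholds and test figures are hypothetical.

from dataclasses import dataclass

@dataclass
class AppTarget:
    name: str
    tier: int         # 1 = restore immediately, 2 = within ~24 hours, 3 = within days
    rto_hours: float  # recovery time objective
    rpo_hours: float  # recovery point objective

def check_test_results(targets, results):
    """Compare measured recovery times/data loss from a DR test against objectives.
    `results` maps app name -> (measured_recovery_hours, measured_data_loss_hours)."""
    failures = []
    for t in targets:
        recovery, data_loss = results.get(t.name, (float("inf"), float("inf")))
        if recovery > t.rto_hours or data_loss > t.rpo_hours:
            failures.append(t.name)
    return failures

targets = [
    AppTarget("order-processing", tier=1, rto_hours=1, rpo_hours=0.25),
    AppTarget("reporting", tier=2, rto_hours=24, rpo_hours=8),
    AppTarget("archive-search", tier=3, rto_hours=72, rpo_hours=24),
]
# Hypothetical results from the last DR test:
results = {"order-processing": (0.5, 0.1), "reporting": (30, 4), "archive-search": (48, 12)}
print(check_test_results(targets, results))  # ['reporting'] missed its RTO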

What is backup and recovery?

Backup refers to the copying of physical or virtual files or databases to a secondary site for preservation in case of equipment failure or other catastrophe. The process of backing up data is pivotal to a successful disaster recovery (DR) plan.

Enterprises back up data they deem to be vulnerable in the event of buggy software, data corruption, hardware failure, malicious hacking, user error or other unforeseen events. Backups capture and synchronize a point-in-time snapshot that is then used to return data to its previous state. Backup and recovery testing examines an organization's practices and technologies for data security and data replication. The goal is to ensure rapid and reliable data retrieval should the need arise. The process of retrieving backed-up data files is known as file restoration.

The terms data backup and data protection are often used interchangeably, although data protection encompasses the broader goals of business continuity, data security, information lifecycle management, and the prevention of malware and computer viruses.

What data should be backed up and how frequently?

A backup process is applied to critical databases or related line-of-business applications. The process is governed by predefined backup policies that specify how frequently the data is backed up and how many duplicate copies (known as replicas) are required, as well as by service-level agreements (SLAs) that stipulate how quickly data must be restored. Best practices suggest a full data backup should be scheduled to occur at least once a week, often during weekends or off-business hours. To supplement weekly full backups, enterprises typically schedule a series of differential or incremental data backup jobs that back up only data that has changed since the last full backup took place.

Backup storage media

Enterprises typically back up key data to dedicated backup appliances or magnetic tape systems. Data deduplication systems contain hard disk drives (HDDs) and are equipped with software for setting backup policies. Disk-to-disk backup systems initially appeared as an alternative to magnetic backup tape drive libraries. Both disk and tape are still used today, and often in conjunction. As file sizes have increased, some backup vendors have brought integrated data protection appliances to market in an effort to simplify the backup process. An integrated data appliance is essentially a file server outfitted with HDDs and vendor-developed backup software. These plug-and-play data storage devices often include automated features for monitoring disk capacity, expandable storage and preconfigured tape libraries.

Most disk-based backup appliances allow copies to be moved from spinning media to magnetic tape for long-term retention. Magnetic tape systems are still used as backup media due to increasing tape densities and the rise of linear tape file systems. A virtual tape library (VTL) provides a less costly option than a deduplication array.
A VTL is a disk-based system whose behavior mimics that of a physical tape library. Solid-state drives (SSDs) generally are not used for data backup because of endurance concerns. Some storage vendors include SSDs as a caching or tiering tool for managing writes with disk-based arrays. Data is initially cached in flash storage and then written to disk.

Local backup vs. offline backup for primary storage

Modern primary storage systems have evolved to feature stronger native capabilities for data backup. These features include advanced RAID protection schemes, unlimited snapshots, and tools for replicating snapshots to secondary backup or even tertiary off-site backup. Despite these advances, primary storage-based backup tends to be more expensive and lacks the indexing capabilities found in traditional backup products. Data deduplication, for example, first appeared in EMC Data Domain backup appliances but is gradually becoming a baseline feature of branded, primary storage arrays.

Local backups place data copies on external HDDs or magnetic tape systems, typically housed in or near an on-premises data center. The data is transmitted over a secure high-bandwidth network connection or corporate intranet. One advantage of local backup is the ability to back up data behind a network firewall. Local backup is also much quicker and provides greater control over who can access the data.

Offline or cold backup is similar to local backup, although it is most often associated with backing up a database. An offline backup incurs downtime since the backup process occurs while the database is disconnected from its network.

Backup and cloud storage

Conversely, off-site backup transmits data copies to a remote location, which can include a company's secondary data center or leased colocation facility. Increasingly, off-site data backup equates to subscription-based cloud storage as a service, which provides low-cost, scalable capacity and eliminates the customer's need to purchase and maintain backup hardware. Despite its growing popularity, electing backup as a service requires users to encrypt data and take other steps to safeguard data integrity.

Cloud backup takes several forms. Most backup vendors enable local applications to be backed up to a dedicated private cloud, effectively treating cloud-based data backup as an extension of a customer's physical data center. Also known as disaster recovery as a service, this maturing field allows an organization to lease space on a service provider's storage servers for centralized backup and management of lifeline data. Cloud-to-cloud data backup is an alternative approach that has been gaining momentum. Using this method, a customer's data is copied from one cloud backup platform to another cloud. It also refers to cloud-based backups of data stored on software-as-a-service platforms.

Backup storage for PCs and mobile devices

PC users can consider local backup from a computer's internal hard disk to an attached external hard drive or removable media such as a thumb drive. Another alternative for consumers is to back up data on smartphones and tablets to personal cloud storage, which is available from vendors such as Box, Carbonite, Dropbox, Google Drive, Microsoft OneDrive and others. These services commonly provide a certain capacity for free, giving consumers the option to purchase additional storage as needed.
Unlike enterprise cloud storage as a service, these consumer-based cloud offerings generally do not provide the level of data security businesses require.

Backup software and hardware vendors

Vendors that sell backup hardware platforms include Barracuda Networks, Dell, Drobo, EMC (Data Domain), ExaGrid Systems, Hewlett Packard Enterprise, Hitachi Data Systems (including Sepaton), IBM, NEC Corp., NetApp, Oracle StorageTek (tape libraries), Quantum Corp., Spectra Logic, Unitrends and Veritas NetBackup (formerly Symantec NetBackup). Leading enterprise backup software vendors include Acronis, Arcserve, Asigra, Commvault, Datto, Druva, EMC Data Protection Suite (Avamar, Data Protection Advisor, Mozy, NetWorker and SourceOne), EMC RecoverPoint replication manager, Nakivo and Veeam Software.

The Microsoft Windows Server operating system inherently features the Microsoft Resilient File System (Microsoft ReFS) to automatically detect and repair corrupted data. While not technically data backup, Microsoft ReFS is geared to be a preventive measure for safeguarding file system data against corruption.

VMware vSphere provides a suite of backup tools for data protection, high availability and replication. The VMware vStorage APIs for Data Protection (VADP) allow VMware or supported third-party backup software to safely take full and incremental backups of virtual machines (VMs). VADP implements backups via hypervisor-based snapshots. As an adjunct to data backup, VMware vSphere live migration allows VMs to be moved between different platforms to minimize the impact of a DR event. VMware Virtual Volumes also figure to aid VM backup.

Backup robots

A backup robot is an automated USB 2.0 external storage device that supports multiple removable Serial ATA hard drives. The first instance of a digital backup robot was introduced by Drobo, then operating as Data Robotics. Rather than use a robotic arm to manipulate hardware, the backup robot would automatically format and distribute data between the various hard drives inside of it, using storage virtualization technology to back up each drive to the other drives. Software features have largely replaced the mechanical robotics in tape archive and backup systems.

Backup types defined

Full backup captures a copy of an entire data set. Although considered to be the most reliable backup method, performing a full backup is time-consuming and requires a large number of disks and/or tapes. Most organizations run full backups only periodically.

Incremental backup offers an alternative to full backups by backing up only the data that has changed since the previous backup of any type. The drawback is that a full restore takes longer if an incremental-based data backup copy is used for recovery.

Differential backup copies data changed since the last full backup. This enables a full restore to occur more quickly by requiring only the last full backup and the last differential backup. For example, if you create a full backup on Monday, the Tuesday backup would, at that point, be similar to an incremental backup. Wednesday's backup would then back up everything that has changed since Monday's full backup. The downside is that the progressive growth of differential backups tends to adversely affect your backup window. (A short sketch illustrating these backup types appears at the end of this article.)

A synthetic backup spawns a file by combining an earlier complete copy of it with one or more incremental copies created at a later time.
The assembled file is not a direct copy of any single current or previously created file, but rather synthesized from the original file and any subsequent modifications to that file.

Synthetic full backup is a variation of differential backup. In a synthetic full backup, the backup server produces an additional full copy, which is based on the original full backup and data gleaned from incremental copies.

Incremental-forever backups minimize the backup window while providing faster recovery access to data. An incremental-forever backup captures the full data set and then supplements it with incremental backups from that point forward. Backing up only changed blocks is also known as delta differencing. Full backups of data sets are typically stored on the backup server, which automates the restoration.

Reverse-incremental backups are changes made between two instances of a mirror. Once an initial full backup is taken, each successive incremental backup applies any changes to the existing full. This essentially generates a novel synthetic full backup copy each time an incremental change is applied, while also providing reversion to previous full backups.

Hot backup, also known as dynamic backup, is applied to data that remains available to users as the update is in process. This method sidesteps user downtime and productivity loss. The risk with hot backup is that, if the data is amended while the backup is under way, the resulting backup copy may not match the final state of the data.

Techniques and technologies to complement data backup

Continuous data protection (CDP) refers to layers of associated technologies designed to enhance data protection. A CDP-based storage system backs up all enterprise data whenever a change is made. CDP tools enable multiple copies of data to be created. Many CDP systems contain a built-in engine that replicates data from a primary to a secondary backup server and/or tape-based storage. Disk-to-disk-to-tape backup is a popular architecture for CDP systems. Near-continuous CDP takes backup snapshots at set intervals, unlike array-based vendor snapshots that are taken each time new data is written to storage.

Data reduction lessens your storage footprint. There are two primary methods: data compression and data deduplication. These methods can be used singly, but vendors often combine the approaches. Reducing the size of data has implications for backup windows and restoration times.

Disk cloning involves copying the contents of a computer's hard drive, saving it as an image file and transferring it to storage media. Disk cloning can be used for system provisioning, system recovery, and rebooting or returning a system to its original configuration.

Erasure coding, also known as forward error correction, evolved as a scalable alternative to traditional RAID systems and is most often associated with object storage. RAID stripes data writes across multiple drives, using a parity drive to ensure redundancy and resilience. Erasure coding instead breaks data into fragments and encodes them with additional redundant data. These encoded fragments are stored across different storage media, nodes or geographic locations, and the associated fragments are used to reconstruct corrupted data, using a technique known as oversampling.

Flat backup is a data protection scheme in which a direct copy of a snapshot is moved to low-cost storage without the use of traditional backup software.
The original snapshot retains its native format and location; the flat backup replica gets mounted, should the original become unavailable or unusable.

Mirroring places data files on more than one computer server to ensure they remain accessible to users. In synchronous mirroring, data is written to local and remote disk simultaneously. Writes from local storage are not acknowledged until a confirmation is sent from remote storage, thus ensuring the two sites have an identical data copy. Conversely, with asynchronous mirroring, local writes are considered complete before confirmation is sent from the remote server.

Replication enables users to select the required number of replicas, or copies, of data needed to sustain or resume business operations. Data replication copies data from one location to another, providing an up-to-date copy to hasten disaster recovery.

Recovery in-place, or instant recovery, allows users to temporarily run a production application directly from a backup VM instance, thus maintaining data availability while the primary VM is being restored. Mounting a physical or VM instance directly on a backup or media server can hasten system-level recovery to within minutes. Recovery from a mounted image does result in degraded performance, since backup servers are not sized for production workloads.

Storage snapshots capture a set of reference markers on disk for a given database, file or storage volume. Users refer to the markers, or pointers, to restore data from a selected point in time. Because it derives from an underlying source volume, an individual storage snapshot is an instance, not a full backup. As such, snapshots do not protect data against hardware failure. Snapshots are generally grouped in three categories: changed block, clones and CDP. Snapshots first appeared as a management tool within a storage array. The advent of virtualization added hypervisor-based snapshots. Snapshots may also be implemented by backup software or even via a VM.

Copy data management and file sync and share

Tangentially related to backup is copy data management (CDM). This is software that provides insight into the multiple data copies an enterprise might create. It allows discrete groups of users to work from a common data copy. Although technically not a backup technology, CDM allows companies to efficiently manage data copies by identifying superfluous or underutilized copies, thus reducing backup storage capacity and backup windows.

File sync-and-share tools protect data on mobile devices used by employees. These tools basically copy modified user files between mobile devices. While this protects the data files, it does not enable users to roll back to a particular point in time should the device fail.

Data backup: Variations on a theme

When deciding which type of backup to use, you need to weigh several key considerations. It is not uncommon for an enterprise to mix various data backup approaches, as dictated by the primacy of the data. Your backup strategy should be governed by the SLAs that apply to an application, with respect to data access/availability, recovery time objectives and recovery point objectives. Your choice of backups is also influenced by the versatility of your backup application.
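As referenced in the backup types section above, here is a minimal Python sketch of the full/incremental/differential distinction, using file modification times to decide what each backup type would copy. It is a simplification for illustration (real products track changes at the block level and maintain catalogs), and the file names and timestamps are hypothetical.

def full_backup(files):
    """A full backup copies every file, regardless of when it changed."""
    return set(files)

def incremental_backup(files, last_backup_time):
    """An incremental backup copies files changed since the LAST backup of any type."""
    return {f for f, mtime in files.items() if mtime > last_backup_time}

def differential_backup(files, last_full_time):
    """A differential backup copies files changed since the last FULL backup,
    so each differential grows until the next full backup runs."""
    return {f for f, mtime in files.items() if mtime > last_full_time}

# Hypothetical timeline (hours since Monday's full backup at t=0):
files = {"orders.db": 30, "invoices.db": 5, "static.cfg": -48}
print(full_backup(files))                              # everything
print(incremental_backup(files, last_backup_time=24))  # {'orders.db'} -- changed since Tuesday
print(differential_backup(files, last_full_time=0))    # changed since Monday's full backup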

IoT, outages show importance of a cloud backup and recovery strategy

Cloud backup and recovery have long been a priority for enterprises running production workloads in the cloud. But, today, as trends like the internet of things spur massive amounts of data for organizations to store and protect, IT teams must evolve their cloud backup and recovery strategy — and make recovery a prime concern.

"Data protection is changing in the world of cloud computing, as IoT comes into play [and] big data systems come into play," says David Linthicum, SVP of Cloud Technology Partners, a cloud consulting firm based in Boston. "We have a lot more data to protect these days."

In this podcast, Linthicum speaks with Tarun Thakur, co-founder and CEO at Datos IO, a data protection software provider based in San Jose, Calif., about the latest cloud backup trends, the disaster recovery lessons learned from the Delta Air Lines outage in August, and why enterprise "cloud denial" is finally fizzling out.

Three trends reshaping cloud backup and recovery

As data from IoT devices and big data systems proliferates in the enterprise, there are a number of ways IT teams should adapt their cloud backup and recovery strategy. But three other trends, particularly in cloud storage, are causing organizations to rethink the way they perform cloud backups.

First, strongly consistent databases, such as an Oracle or MySQL database, are being increasingly displaced by eventually consistent databases, Thakur says. These databases, such as Apache Cassandra, are also distributed systems that scale significantly more than legacy databases. Second, most next-generation infrastructures, such as those used by cloud providers, use local or distributed storage, rather than shared or file storage. And, third, these next-generation infrastructures are highly elastic, which has a major effect on data protection, according to Thakur.

"If I created a backup of a 9-node cluster, but by the time I have to restore, my cluster size increases to 30 nodes — how do you manage the policy changes in terms of recovery?" he says. "At the end of the day, it's all about recovery; backup is just a means to an end … and elasticity has a very big impact on how you think about recovery." (A simple sketch of this remapping problem appears after this article.) [5:40-8:35]

What can we learn from the Delta outage?

In August, Delta Air Lines suffered an outage that caused six hours of system downtime, in addition to flight cancellations and delays. The airline's inability to successfully, and swiftly, fail over to a backup data center was identified as one of the main triggers behind the disruption — causing some to question the airline's backup strategy.

"Ultimately, Delta had the responsibility of backing up their system and having an active-active redundancy in place, so if there was a failure of the primary [system], the secondary would be able to take over automatically — and that didn't happen," Linthicum says. Other major airlines, including United Airlines and JetBlue, were also affected by data center outages over the course of the past year.

For IT pros, these outages present a learning opportunity. Namely, Thakur says, don't ever take "shortcuts" when it comes to building a cloud backup and recovery strategy. While it's natural to want to evolve your IT infrastructure, and experiment with new technologies and services — including those in the cloud — "be very careful that, as you're onboarding [these new services] … you do not compromise what is needed to keep your applications and business running all the time, [24/7]," he says.
[9:30-16:00]

Cloud denial fades away

While cloud security and compliance concerns still exist in the enterprise, organizations, in general, are much quicker to embrace cloud computing services today than they were even a few years ago, Linthicum says. "Really, we're getting out of the denial phase and into the operational phase, so people are moving into the cloud no matter what Global 2000 company you work for," he says. "Cloud was there [before], typically as a shadow IT thing, but now it serves as a primary IT initiative, and so this seems to be the trend going forward."

Part of this shift, Thakur says, stems from the fact that cloud is moving beyond just lines of business departments — which, in some cases, would bypass IT to deploy cloud services — and becoming a main focus for IT operations teams. As that happens, it's not just the line-of-business apps that organizations host in the cloud, but their core databases and enterprise applications as well.

While security concerns and other factors still deter some organizations from making the move to cloud, adoption is not only becoming more common but also more formalized in the enterprise. And that trend, Thakur says, will likely continue. "There is so much ahead of us," he says. "We are just touching the tip of the iceberg right now." [17:15-20:00]
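Thakur's nine-node-to-30-node example can be made concrete: shards captured from the original cluster have to be remapped when the restore target has a different node count. The sketch below shows a naive round-robin remapping in Python; it is a conceptual illustration only and does not reflect how Datos IO, Cassandra or any specific tool performs restores.

def remap_shards(backup_shards, target_nodes):
    """Assign shards captured from the original cluster to a target cluster
    of a different size, round-robin. Real tools must also respect token
    ranges and replication factors; this only illustrates the remapping problem."""
    plan = {node: [] for node in target_nodes}
    for i, shard in enumerate(backup_shards):
        plan[target_nodes[i % len(target_nodes)]].append(shard)
    return plan

# Backup taken from a 9-node cluster (one shard per original node):
backup_shards = [f"shard-{n:02d}" for n in range(9)]
# Restore target has grown to 30 nodes:
target_nodes = [f"node-{n:02d}" for n in range(30)]
plan = remap_shards(backup_shards, target_nodes)
print({node: shards for node, shards in plan.items() if shards})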

BACKUP AND RECOVERY PARTNER SHOWCASES

Dell EMC

Veeam