XIOLOGIX BACKUP AND RECOVERY FOR “JUST IN CASE”
Our services go well beyond traditional disaster recovery. We help companies build resilience into every layer of their business by anticipating the potential impact of a wide range of threats. Xiologix will guide you in ensuring that you have the appropriate backup and recovery solution for your business needs.
Backup and Recovery
Veeam – Technical Advantages for Enterprise Customers
An overview of the disaster and data recovery solutions Veeam offers for VMware and Hyper-V environments, and of Veeam’s unique feature set.
Why Data Domain
Deduplication solutions are not all created equal. Why choose Data Domain?
All about backup: Why businesses choose the cloud
There’s no one reason why companies move to the cloud. You hear many reasons: cost is often mentioned, although a bit less these days. You often hear security cited, as providers have more to spend on technical excellence. And you certainly hear flexibility mentioned; it’s probably the main reason at present.

What’s not generally spoken about is the ease of disaster recovery, yet this is something that touches on all the elements mentioned above. By replicating virtual machines into a private cloud, you’re reducing costs, certainly compared with the cost of installing physical hardware. It’s a flexible solution, as it can scale to cope with unexpected demands. And finally, by using a variety of security tools on the network, you can ensure that your data is safe from prying eyes.

Many businesses have discovered these advantages. According to the ZDNet cloud priorities survey, a third of organisations said backup and recovery was the most important infrastructure-as-a-service (IaaS) function to them; it was the leading choice.

But that’s not the whole story; cloud on its own doesn’t cut it. Smart companies use a hybrid approach to maximise performance and minimise costs. In this scenario, mission-critical data is backed up on-premises and then automatically transferred to the cloud over time. A hybrid product like Microsoft’s StorSimple can support this approach, transferring data to the most appropriate medium. The decision-making is taken away from the CIO, and the solution is both cost-effective and efficient. It also means there is no need to calculate how much capacity is needed; that in itself removes a major headache for the CIO.

Recent research supports the rapid uptake of cloud backup.
An April 2015 survey by Forrester found that 56 percent of companies were backing up tier-2 applications at least once a day, a rate that has doubled in the past three years.

In the end it’s the hybrid approach that is going to pay dividends: it’s not a case of all public or all private but the best fit. Forrester points out that companies with large databases which require frequent backups should stick with on-premise solutions, while medium-sized tier-1, tier-2 and tier-3 datasets could all be moved to the public cloud.
Don’t let an MBA near your disaster recovery plan
When you work at an entity that collects sensitive, real-time data and is responsible for keeping it up to date and available to certain public institutions, you’d think a solid backup and disaster recovery plan would be high on the list of organizational priorities. In theory, yes. But all it takes is one hotshot manager to break what didn’t need to be fixed in the first place.

At this corporate body, branch offices were located in cities throughout several states, and at one time each office maintained its own semi-autonomous IT infrastructure. The sites had their own redundant file servers, database servers, and authentication servers, as well as on-premises IT staff.

One day a new IT director, “Julius,” showed up. He was an MBA who had saved a string of companies lots of money by virtualizing their server infrastructures. While he had lots of experience with relatively small companies, his experience with large enterprises spread across wide geographic areas was limited.

Virtualization is of course a great way to get more efficiency from your servers and add a level of flexibility that was never available before, but unfortunately Julius ignored some fundamentals of business continuity in his infrastructure design. Having all of your eggs in one basket can make them a lot easier to carry, but you know how that cliche works out.

Part of the problem or part of the solution?

In his first week in the new role, Julius held a meeting with all of the IT managers and laid out his grand vision for the new server infrastructure. Instead of each site having its own small server farm, they would all be centralized in the business office’s data center.

As the meeting went on, manager reactions began to follow a pattern: the greater their technical expertise, the greater their discomfort with the changes. The biggest concerns brought up: Will the virtual servers have sufficient performance to keep up with the individual sites’ needs?
Is there enough bandwidth to serve all the satellite offices? And what happens if the central office’s data center becomes unavailable?

Julius brushed the questions aside with platitudes and jargon: “This is a great opportunity to synergize our infrastructure and reap the benefits of increased operational efficiencies.” Finally, with a note of frustration in his voice, he stopped the discussion and simply warned, “This is happening, so are you going to be part of the problem or part of the solution?”

Despite the managers’ concerns, Operation Egg Basket proceeded. Several beefy servers were purchased and set up with a common virtualization platform. One at a time, the individual sites’ servers were virtualized, except for the domain controllers, and the old equipment was decommissioned. There were some performance issues, but they were addressed by tweaking the hypervisor. There were also bandwidth issues, but QoS, traffic filtering, and bandwidth upgrades took care of them. After about a year, the job was done, and Julius patted himself on the back for another successful virtualization rollout. For months everything seemed to work great, until it didn’t.

First the disaster, then the recovery

Come spring of that year, a violent thunderstorm rolled through and a tornado touched down a mile away from the central business office. The electrical and telephone poles were flattened like grass under a lawn mower, taking out all related service in the area.

The data center had a giant backup generator, so the power loss was no big deal, until someone realized that the diesel tank was almost empty. That was easily rectified by some urgent phone calls, although it was a significant detail to have overlooked.

However, the real problem was the loss of the fiber-optic link to the data center. All network traffic in the company was configured to route through the central office, so the satellite offices lost access to needed services.
They couldn’t even get out to the Internet because the proxy server was at the central office. Most of the VoIP telephones in the enterprise were down, as was voicemail: no file servers, no application servers, no databases, nothing.

For the better part of two days, while the phone company scrambled to get the fiber-optic lines back up, the whole company remained down. Workers still had to report to their offices because lots of manual assignments needed to be done, but the work was now much harder and slower. Very likely, a ton of work simply went undocumented. Finally, the phone company reestablished the lines, and everything started functioning again.

A silver lining

After this incident, Julius saw the writing on the wall and graciously departed for a position at another company, probably peddling his specialty again, but hopefully a bit wiser.

A new manager who specialized in disaster recovery was brought in, and the infrastructure was overhauled once again, this time to ensure redundancy and resilience by eliminating single points of failure. A hot backup data center was brought online in case the primary went away, and the most critical systems were placed back in the individual satellite offices.

Ultimately, there was an upside to the fiasco. We ended up with a highly resilient infrastructure that properly utilized virtualization while maintaining the other fundamentals of business continuity. Namely: don’t keep all your eggs in one basket!
How to make cloud backup a no-brainer
The 3-2-1 rule is simple and easy to remember for your backup strategy: you want three copies of your data, in two different formats, with one of those copies offsite. Having three copies ensures redundancy (because they should be in different locations and formats). Using different formats goes back to the tape/disk/optical mindset of the past. Many organizations have moved away from tape as their primary backup in favor of hard disk, keeping tape as a secondary. But cloud backups are becoming the new normal for the 3-2-1 rule, where you back up first to disk and then to the cloud.

The idea of using a cloud storage repository as one of those three copies is beneficial in theory, providing an offsite copy of your data should you need it, and all without tapes or drives sent to an offsite facility. Instead, the backup goes right over your Internet connection to a cloud vendor’s storage.

But the reality is not always so rosy. The cost of putting your backups into a hosted cloud repository could be prohibitive. You may be dealing with limited bandwidth, and even if you can increase it to cover your storage volume, doing so comes at more cost. And if you host your infrastructure in the cloud, don’t assume you’re backed up. You’re not. You really need to ensure you have redundancy of your data, no matter where it resides.

There are a ton of cloud backup options out there. Some offer simple cloud storage that you can just dump raw data into, and some provide the kind of backup and recovery capabilities familiar in the data center. Don’t make the mistake of thinking all cloud backup offerings are the same. You have to do your due diligence and find a solution that works for you.

You may in fact be looking for two different solutions: one to handle the backup and the other to hold the backup.
For example, you may want to use Microsoft Azure as your storage provider with a front-end tool like Veeam Cloud Connect to manage the backup of VMs.

Veeam Cloud Connect sends backups (deduplicated and compressed) over SSL to a partner service provider (Azure, for example) and optionally includes WAN acceleration to help offset the bandwidth issue. It also offers “forever incremental” backups (deltas), which send less data to the cloud, as well as traditional grandfather-father-son (GFS) backup. End-to-end encryption of the data ensures security in transit and in storage. Commvault’s Unified Cloud Data Protection offering is similar. Then there’s Nasuni, whose hybrid cloud storage offering combines local storage controllers and cloud storage.

No matter which provider’s approach you favor, it’s important to have a clear understanding of pricing (including storage costs and fees for storing and retrieving your data), the features offered, and overall data life cycle management (if you choose to index the data for easy retrieval).

With that information in place, backing up to the cloud as part of your 3-2-1 strategy is as easy as 1-2-3.
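The 3-2-1 rule is mechanical enough to sanity-check in code. A minimal Python sketch, assuming a simple inventory of copies (the `BackupCopy` record and `satisfies_3_2_1` helper are illustrative, not any vendor’s API):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class BackupCopy:
    """One copy of a dataset: where it lives and on what medium."""
    location: str   # e.g. "primary-dc", "office-nas", "cloud"
    medium: str     # e.g. "disk", "tape", "object-storage"
    offsite: bool

def satisfies_3_2_1(copies: list[BackupCopy]) -> bool:
    """Three copies, at least two distinct media, at least one offsite."""
    return (
        len(copies) >= 3
        and len({c.medium for c in copies}) >= 2
        and any(c.offsite for c in copies)
    )

copies = [
    BackupCopy("primary-dc", "disk", offsite=False),      # production data
    BackupCopy("office-nas", "disk", offsite=False),      # local backup
    BackupCopy("cloud", "object-storage", offsite=True),  # cloud copy
]
print(satisfies_3_2_1(copies))  # True
```

Disk-to-disk-to-cloud, as described above, passes the check; drop the cloud copy and it fails on both the offsite and second-medium requirements.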
Better backup and recovery at Syracuse University
This article first appeared in the Spring 2016 issue of OnWindows.

Located in the US state of New York, Syracuse University (SU) comprises some 22,000 students, faculty and staff. When the university decided to consolidate around 20 small data centres and build a new, modern data centre with a private cloud infrastructure, challenges around providing enterprise-scale backup and disaster recovery remained. The university required a unified virtual machine (VM) backup and recovery solution that would enable 24/7 availability among its thousands of VMs and scale to support its data growth.

Previously, the team at SU was only able to back up its most critical VMs, and only once or twice a week. Furthermore, one in every ten backup jobs was unsuccessful. SU’s 22,000 students, faculty and staff require around-the-clock access to files, databases and applications. The university’s modern data centre also delivers infrastructure-as-a-service (IaaS) to business and academic units, and has specific availability requirements.

“Our IaaS offering focuses on providing networking and computing resources,” said Josh Slade, virtual and backup environment lead at Syracuse University. “Schools, colleges and business units drop VMs into our cloud with the expectation that we will provide data protection.
We needed a backup solution that leveraged changed block tracking to reduce the amount of data being backed up, supported Microsoft Volume Shadow Copy Service (VSS) to ensure successful VM recovery, and was scalable to grow with us.”

SU deployed the Veeam Backup & Replication solution and, as a result, data centre administrators are now seeing a range of benefits. As well as the user interface, the solution provides a Windows PowerShell extension that enables SU to create a set of scripts to automate backup and recovery of its 2,300 VMs within its VMware vSphere environment. A recent disaster recovery test saw 1,500 VMs successfully recovered within 24 hours, leading SU to estimate that an entire data centre could be restored in a similar timeframe.

SU uses nightly incremental Veeam backups and sends them to virtual repositories in a secondary data centre. Currently, the virtual repositories hold two petabytes of data and 30 retention points. As part of the disaster recovery plan, SU has created a repository of custom scripts to restore every VM from the Veeam repositories to waiting storage and hosts in the secondary data centre. Veeam uses application-aware image processing to create transactionally consistent backups of VMs running VSS-aware applications such as Microsoft Active Directory, Exchange, SharePoint and SQL Server for SU.

Thanks to Veeam’s technology, SU has the scalability and flexibility to support explosive data growth, and is benefitting from 98% faster backups and high availability for DR.

“We’ve used Veeam for so long that we assume backup and recovery will be successful, and it is,” said Peter Pizzimenti, IT analyst at SU. “There’s nothing else on the market that we were able to find that could work in an infrastructure our size with the approach we have taken except Veeam.”
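The changed block tracking SU asked for means backing up only the blocks that differ since the last run. Real CBT is done by the hypervisor, not application code, but the idea can be sketched in a few lines of Python (the block size and function names here are illustrative):

```python
BLOCK = 4  # toy block size; real systems track much larger blocks

def changed_blocks(previous: bytes, current: bytes) -> dict[int, bytes]:
    """Map of block index -> new contents, for blocks that differ."""
    diff = {}
    for i in range(0, len(current), BLOCK):
        if current[i:i + BLOCK] != previous[i:i + BLOCK]:
            diff[i // BLOCK] = current[i:i + BLOCK]
    return diff

def apply_blocks(previous: bytes, diff: dict[int, bytes]) -> bytes:
    """Rebuild the current image from the old image plus the delta."""
    out = bytearray(previous)
    for idx, data in diff.items():
        out[idx * BLOCK:idx * BLOCK + len(data)] = data
    return bytes(out)

old = b"AAAABBBBCCCCDDDD"
new = b"AAAAXXXXCCCCDDDD"   # only the second block changed
delta = changed_blocks(old, new)
print(delta)                        # {1: b'XXXX'}
print(apply_blocks(old, delta) == new)  # True
```

Only the one changed block travels to the backup repository, which is why nightly incrementals across 2,300 VMs stay manageable.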
Two Challenges Facing Data Backup/Disaster Recovery Market
Backup and disaster recovery services have grown into a lucrative business space over the past 15 years, but the combination of increased access to data and a 24/7 interconnected work world has complicated the mission of solution providers.

“We want to build a world without technology interruptions for your businesses, for your IT, and for customers,” said Boston-based Zerto founder and CEO Ziv Kedem. Kedem said that mission statement sounds simple, but “this is a difficult problem, and it actually is becoming more difficult as IT is changing.”

One factor challenging these services is the sheer increase in the volume of data. “Every business understands the value of data, the value of applications,” Kedem said. However, that data and those applications greatly increase and complicate the workload for even simple backup and recovery plans.

Secondly, Kedem said, “You expect more from your applications.” “Customers expect that [applications] need to be always available, always responsive, always on, always updated,” he said. That leaves little room for error or downtime, especially when a company is responding to a major disaster.
8 ingredients of an effective disaster recovery plan
Earlier this month, a monkey caused a nationwide power outage in Kenya. Millions of homes and businesses were without electricity. Which just goes to show that “not all disasters come in the form of major storms with names and categories,” says Bob Davis, CMO, Atlantis Computing.

“Electrical fires, broken water pipes, failed air conditioning units [and rogue monkeys] can cause just as much damage,” he says. And while “business executives might think they’re safe based on their geographic location,” it’s important to remember that “day-to-day threats can destroy data [and] ruin a business,” too, he says. That’s why it is critical for all businesses to have a disaster recovery (DR) plan.

However, not all DR plans are created equal. To ensure that your systems, data and personnel are protected and your business can continue to operate in the event of an actual emergency or disaster, use the following guidelines to create a disaster plan that will help you quickly recover.

1. Inventory hardware and software. Your DR plan should include “a complete inventory of [hardware and] applications in priority order,” says Oussama El-Hilali, vice president of Products for Arcserve. “Each application [and piece of hardware] should have the vendor technical support contract information and contact numbers,” so you can get back up and running quickly.

2. Define your tolerance for downtime and data loss. “This is the starting point of your planning,” says Tim Singleton, president, Strive Technology Consulting. “If you are a plumber, you can probably be in business without servers or technology [for] a while. [But] if you are eBay, you can’t be down for more than seconds.
Figuring out where you are on this spectrum will determine what type of solution you will need to recover from a disaster.”

“Evaluate what an acceptable recovery point objective (RPO) and recovery time objective (RTO) is for each set of applications,” advises David Grimes, CTO, NaviSite. “In an ideal situation, every application would have an RPO and RTO of just a few milliseconds, but that’s often neither technically nor financially feasible. By properly identifying these two metrics, businesses can prioritize what is needed to successfully survive a disaster, ensure a cost-effective level of disaster recovery and lower the potential risk of miscalculating what they’re able to recover during a disaster.”

“When putting your disaster recovery plan in writing, divide your applications into three tiers,” says Robert DiLossi, senior director, Testing & Crisis Management, Sungard Availability Services. “Tier 1 should include the applications you need immediately. These are the mission-critical apps you can’t do business without. Tier 2 covers applications you need within eight to 10 hours, even up to 24 hours. They’re essential, but you don’t need them right away. Tier 3 applications can be comfortably recovered within a few days,” he explains. “Defining which applications are most important will aid the speed and success of the recovery. But most important is testing the plan at least twice per year,” he says. “The tiers might change based on the results, which could reveal unknown gaps to fill before a true disaster.”

3. Lay out who is responsible for what, and identify backup personnel. “All disaster recovery plans should clearly define the key roles, responsibilities and parties involved during a DR event,” says Will Chin, director of cloud services, Computer Design & Integration. “Among these responsibilities must be the decision to declare a disaster.
Having clearly identified roles will garner a universal understanding of what tasks need to be completed and who is [responsible for what]. This is especially critical when working with third-party vendors or providers. All parties involved need to be aware of each other’s responsibilities in order to ensure the DR process operates as efficiently as possible.”

“Have plans for your entire staff, from C-level executives all the way down, and make sure they understand the process” and what’s expected of them, says Neely Loring, president, Matrix, which provides cloud-based solutions, including Disaster-Recovery-as-a-Service. “This gets everyone back on their feet quicker.”

“Protocols for a disaster recovery (DR) plan must include who and how to contact the appropriate individuals on the DR team, and in what order, to get systems up and running as soon as possible,” adds Kevin Westenkirchner, vice president, operations, Thru. “It is critical to have a list of the DR personnel with the details of their position, responsibilities [and emergency contact information].”

“One final consideration is to have a succession plan in place with trained back-up employees in case a key staff member is on vacation or in a place where they cannot do their part [or leaves the company],” says Brian Ferguson, product marketing manager, Digium.

4. Create a communication plan. “Perhaps one of the more overlooked components of a disaster recovery plan is having a good communication plan,” says Mike Genardi, solutions architect, Computer Design & Integration. “In the event a disaster strikes, how are you going to communicate with your employees? Do your employees know how to access the systems they need to perform their job duties during a DR event? Many times the main communication platforms (phone and email) may be affected, and alternative methods of contacting your employees will be needed,” he explains.
“A good communication plan will account for initial communications at the onset of a disaster as well as ongoing updates to keep staff informed throughout the event.”

“Communication is critical when responding to and recovering from any emergency, crisis event or disaster,” says Scott D. Smith, chief commercial officer at ModusLink. So having “a clear communications strategy is essential. Effective and reliable methods for communicating with employees, vendors, suppliers and customers in a timely manner are necessary beyond initial notification of an emergency. Having a written process in place to reference ensures efficient action post-disaster and alignment between organizations, employees and partners.”

“A disaster recovery plan should [also] include a statement that can be published on your company’s website and social media platforms in the event of an emergency,” adds Robert Gibbons, CTO, Datto, a data protection platform. And be prepared to “give your customers timely status updates on what they can expect from your business and when. If your customers understand that you are aware of the situation, you are adequately prepared and working to take care of it in a timely manner, they will feel much better.”

5. Let employees know where to go in case of emergency, and have a backup worksite. “Many firms think that the DR plan is just for their technology systems, but they fail to realize that people (i.e., their employees) also need to have a plan in place,” says Ahsun Saleem, president, Simplegrid Technology. “Have an alternate site in mind if your primary office is not available. Ensure that your staff knows where to go, where to sit and how to access the systems from that site.
Provide a map to the alternate site and make sure you have seating assignments there.”

“In the event of a disaster, your team will need an operational place to work, with the right equipment, space and communications,” says DiLossi. “That might mean telework and other alternative strategies need to be devised in case a regional disaster causes power outages across large geographies. Be sure to note any compliance requirements and contract dedicated workspace where staff and data can remain private. [And] don’t contract 50 seats if you’ll really need 200 to truly meet your recovery requirements.”

6. Make sure your service-level agreements (SLAs) cover disasters/emergencies. “If you have outsourced your technology to an outsourced IT firm, or store your systems in a data center/co-location facility, make sure you have a binding agreement with them that defines their level of service in the event of a disaster,” says Saleem. “This [will help] ensure that they start working on resolving your problem within [a specified time]. Some agreements can even specify the timeframe for getting systems back up.”

7. Include how to handle sensitive information. “Defining operational and technical procedures to ensure the protection of…sensitive information is a critical component of a DR plan,” says Eric Dieterich, partner, Sunera. “These procedures should address how sensitive information will be maintained [and accessed] when a DR plan has been activated.”

8. Test your plan regularly. “If you’re not testing your DR process, you don’t have one,” says Singleton. “Your backup hardware may have failed, your supply chain may rely on someone incapable of dealing with disaster, your internet connection may be too slow to restore your data in the expected amount of time, the DR key employee may have changed [his] cell phone number. There are a lot of things that may break a perfect plan.
The only way to find them is to test it when you can afford to fail.”

“Your plan must include details on how your DR environment will be tested, including the method and frequency of tests,” says Dave LeClair, vice president, product marketing, Unitrends, a cloud-based IT disaster recovery and continuity solution provider. “Our recent continuity survey of 900 IT admins discovered less than 40 percent of companies test their DR more frequently than once per year, and 36 percent don’t test at all. Infrequent testing will likely result in DR environments that do not perform as required during a disaster,” he explains. “Your plan should define recovery time objective (RTO) and recovery point objective (RPO) goals per workload and validate that they can be met. Fortunately, recovery assurance technology now exists that is able to automate DR testing without disrupting production systems and can certify that RTO and RPO targets are being met, for 100 percent confidence in disaster recovery even for complex n-tier applications.”

Also keep in mind that “when it comes to disaster recovery, you’re only as good as your last test,” says Loring. “A testing schedule is the single most important part of any DR plan. Compare your defined RTO and RPO metrics against tested results to determine the efficacy of your plan. The more comprehensive the testing, the more successful a company will be in getting back on their feet,” he states. “We test our generators weekly to ensure their function. Always remember that failing a test is not a bad thing. It is better to find these problems early than to find them during a crisis. Decide what needs to be modified and test until you’re successful.”

And don’t forget about testing your employees. “The employees that are involved need to be well versed in the plan and be able to perform every task they are assigned to without issue,” says Ferguson.
“Running simulated disasters and drills helps ensure that your staff can execute the plan when an actual event occurs.”

This story, “8 ingredients of an effective disaster recovery plan,” was originally published by CIO.
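The advice in steps 2 and 8, comparing defined RTO metrics against tested results, is easy to make concrete. A minimal Python sketch, with illustrative application names, tiers and targets:

```python
from dataclasses import dataclass

@dataclass
class App:
    name: str
    rto_target_h: float   # hours allowed to restore service
    rto_tested_h: float   # hours measured in the last DR test

def failing_apps(apps: list[App]) -> list[str]:
    """Names of applications whose last DR test missed the RTO target."""
    return [a.name for a in apps if a.rto_tested_h > a.rto_target_h]

apps = [
    App("orders-db", rto_target_h=1, rto_tested_h=0.5),   # tier 1: met
    App("reporting", rto_target_h=24, rto_tested_h=30),   # tier 2: missed
]
print(failing_apps(apps))  # ['reporting']
```

Any name this check returns marks a gap to fix before a real disaster, which is exactly the kind of finding the twice-yearly tests above are meant to surface.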
What is backup and recovery?
IoT, outages show importance of a cloud backup and recovery strategy
Cloud backup and recovery have long been a priority for enterprises running production workloads in the cloud. But today, as trends like the internet of things spur massive amounts of data for organizations to store and protect, IT teams must evolve their cloud backup and recovery strategy, and make recovery a prime concern.

“Data protection is changing in the world of cloud computing, as IoT comes into play [and] big data systems come into play,” says David Linthicum, SVP of Cloud Technology Partners, a cloud consulting firm based in Boston. “We have a lot more data to protect these days.”

In this podcast, Linthicum speaks with Tarun Thakur, co-founder and CEO at Datos IO, a data protection software provider based in San Jose, Calif., about the latest cloud backup trends, the disaster recovery lessons learned from the Delta Air Lines outage in August, and why enterprise “cloud denial” is finally fizzling out.

Three trends reshaping cloud backup and recovery

As data from IoT devices and big data systems proliferates in the enterprise, there are a number of ways IT teams should adapt their cloud backup and recovery strategy. But three other trends, particularly in cloud storage, are causing organizations to rethink the way they perform cloud backups. First, strongly consistent databases, such as an Oracle or MySQL database, are increasingly being displaced by eventually consistent databases, Thakur says. These databases, such as Apache Cassandra, are also distributed systems that scale significantly more than legacy databases. Second, most next-generation infrastructures, such as those used by cloud providers, use local or distributed storage, rather than shared or file storage. And, third, these next-generation infrastructures are highly elastic, which has a major effect on data protection, according to Thakur.
“If I created a backup of a 9-node cluster, but by the time I have to restore, my cluster size increases to 30 nodes — how do you manage the policy changes in terms of recovery?” he says. “At the end of the day, it’s all about recovery; backup is just a means to an end … and elasticity has a very big impact on how you think about recovery.” [5:40-8:35]

What can we learn from the Delta outage?

In August, Delta Air Lines suffered an outage that caused six hours of system downtime, in addition to flight cancellations and delays. The airline’s inability to successfully, and swiftly, fail over to a backup data center was identified as one of the main triggers behind the disruption, causing some to question the airline’s backup strategy.

“Ultimately, Delta had the responsibility of backing up their system and having an active-active redundancy in place, so if there was a failure of the primary [system], the secondary would be able to take over automatically — and that didn’t happen,” Linthicum says.

Other major airlines, including United Airlines and JetBlue, were also affected by data center outages over the course of the past year. For IT pros, these outages present a learning opportunity. Namely, Thakur says, don’t ever take “shortcuts” when it comes to building a cloud backup and recovery strategy. While it’s natural to want to evolve your IT infrastructure and experiment with new technologies and services, including those in the cloud, “be very careful that, as you’re onboarding [these new services] … you do not compromise what is needed to keep your applications and business running all the time, [24/7],” he says. [9:30-16:00]

Cloud denial fades away

While cloud security and compliance concerns still exist in the enterprise, organizations, in general, are much quicker to embrace cloud computing services today than they were even a few years ago, Linthicum says.
“Really, we’re getting out of the denial phase and into the operational phase, so people are moving into the cloud no matter what Global 2000 company you work for,” he says. “Cloud was there [before], typically as a shadow IT thing, but now it serves as a primary IT initiative, and so this seems to be the trend going forward.”

Part of this shift, Thakur says, stems from the fact that cloud is moving beyond the lines-of-business departments that, in some cases, would bypass IT to deploy cloud services, and is becoming a main focus for IT operations teams. As that happens, it’s not just line-of-business apps that organizations host in the cloud, but their core databases and enterprise applications as well.

While security concerns and other factors still deter some organizations from making the move to the cloud, adoption is not only becoming more common but more formalized in the enterprise. And that trend, Thakur says, will likely continue. “There is so much ahead of us,” he says. “We are just touching the tip of the iceberg right now.” [17:15-20:00]