
XIOLOGIX DELIVERS VIRTUALIZED INFRASTRUCTURE SOLUTIONS

Virtualization enables Cloud computing by making it easy to move server workloads around. So whether you’re looking to virtualize the server infrastructure in your own data center, move some – or all – of your workloads into the Cloud, or just need help developing or refining your Cloud strategy, Xiologix can help you plan, implement, and support your virtualized infrastructure.

Virtualization


VMware Top 5 Reasons Infographic

Why You Can’t Afford Not to Virtualize

The VMware Cross-Cloud Architecture

See how VMware’s Cross-Cloud Architecture helps you avoid cloud silos, giving you both freedom and control in IT infrastructure.


VMware Cloud on AWS Overview

Currently in Technology Preview, VMware Cloud on AWS is a vSphere-based cloud service that brings VMware’s enterprise-class Software-Defined Data Center (SDDC) software to the AWS cloud. Customers will be able to run any application across vSphere-based private, public, and hybrid cloud environments. The service will be delivered, sold, and supported by VMware as an on-demand, elastically scalable service, and customers will be able to leverage the global footprint and breadth of services from AWS. It will integrate the capabilities of VMware’s flagship compute, storage, and network virtualization products (vSphere, Virtual SAN, and NSX) along with vCenter management, optimized to run on next-generation elastic, bare-metal AWS infrastructure. This will enable customers to rapidly deploy secure, enterprise-grade AWS cloud-based resources that are operationally consistent with vSphere-based clouds. The result is a comprehensive turnkey service that works seamlessly with both on-premises private clouds and advanced AWS services.


1 + 1 = 3 : VMware + Veeam = Better Together

Ratmir Timashev, Veeam’s Chief Executive Officer, and Carl Eschenbach, VMware’s President and COO, discuss the partnership of their two companies and the strength of their services together on stage at VeeamON 2015. Learn more about VMware and Veeam at: https://www.veeam.com/vmware-vsphere-solutions.html


Runecast – What it is

Runecast – Software-defined Expertise for VMware vSphere-based infrastructures


VMware Network Virtualization: The Story So Far with Bruce Davie

Bruce Davie, CTO of Networking, introduces VMware NSX network virtualization and discusses the use cases and success stories so far. Recorded at Tech Field Day Extra at Interop 2016 on May 4, 2016. For more information, please visit http://VMware.com/products/NSX/ or http://TechFieldDay.com/event/eilv16/


Brocade VNF Manager can prevent virtual network services sprawl

Brocade announced today the availability of its virtual network function (VNF) Manager. The product is a commercial version of OpenStack Tacker, an OpenStack-led project designed to make it easier to deploy and operate virtual network services. The initiative is compatible with the European Telecommunications Standards Institute (ETSI) Network Functions Virtualization (NFV) framework.

For those not familiar with NFV, the technology allows organizations to run network services as virtual functions instead of requiring a single appliance per function. NFV has many cost benefits, as it reduces the overall hardware that needs to be purchased and managed. More important, it gives network services the same level of agility as virtual servers and storage. Infrastructure agility is a core requirement of becoming a digital company, and NFV enables that at the network level. (ZK Research, 2015)

Brocade’s announcement of VNF Manager may not seem like an overly sexy one, but it is important for both service providers and enterprises. Historically, the topic of NFV has been primarily linked to network operators for use cases such as service chaining and on-demand service creation, but NFV is now something enterprises are looking at.

In 2015, ZK Research conducted a survey asking businesses where they are in terms of enterprise NFV. Sixty-one percent were somewhere between the research phase and deployment; the other 39 percent said they had no plans. But the industry is early in the technology cycle, so I certainly expect to see more organizations embrace NFV as it matures. (ZK Research, 2015)

As NFV plans move from the testing phase into large-scale deployment, a product like Brocade’s VNF Manager becomes extremely important for the long-term manageability of network services.

To understand the potential problem, think back to the early days of server virtualization. Initially, the technology was used to consolidate servers: instead of having 10 workloads on 10 servers, each using 5 percent of its capacity, run them all on one server and push that server’s utilization to 50 percent.

Over time, however, server virtualization became increasingly popular for use cases other than consolidation. Application developers, QA departments, and other groups started using virtualization as a faster way to deploy a server instead of having to physically procure hardware. For the infrastructure team, the self-service model seemed ideal because the groups that needed servers could provision their own without having to order, deploy, and connect a physical box.

Eventually, though, the expansion of virtual servers caused an unforeseen problem. So many virtual machines were being created that no one really knew how many virtual servers had been deployed, who owned them, or even whether they were still being used. I recall a conversation with a CTO from a mid-size bank who told me he had twice as many virtual servers in his company as he had physical servers prior to the deployment of VMware. This explosion of virtual servers was known as “virtual machine sprawl” and was a significant problem for years, until VMware developed the tools necessary to manage large-scale virtual server environments.

Avoiding NFV sprawl

Without a tool such as Brocade’s VNF Manager, network operations teams risk running into NFV sprawl as virtual network services proliferate across the company. The server industry had to go through significant pain before the management tools were developed, so it’s good to see Brocade being proactive about NFV management to spare its customers similar pain.

VNF Manager also brings NFV together with software-defined networking (SDN). Customers can use VNF Manager to instantiate virtual network functions and load them with an initial configuration. The VNFs can then be mounted and managed by Brocade’s SDN controller through its southbound interfaces, enabling the lifecycle of VNFs to be orchestrated in alignment with SDN initiatives.

VNF Manager is offered as a free download from the Brocade website. The download is a full-featured version that comes with a 60-day license, as well as 60 days of free technical assistance center (TAC) support to help customers get the product up and running. After the 60 days, customers will need to purchase either a one- or three-year license. Brocade offers different bundles for operations teams versus developers, as well as professional services and in-person training sessions to help ensure its customers are successful with the product.

If you’re one of the 61 percent of organizations considering NFV, my recommendation is to be aggressive with the technology, as there are tremendous cost and operational benefits. Just make sure you have the proper management tools in place before deploying it.
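The back-of-the-envelope consolidation math in the article (10 workloads at 5 percent each collapsing onto one host at 50 percent) can be sketched in a few lines of Python. The figures are the article’s; the function name is just for illustration:

```python
def consolidate(workloads: int, util_each: float) -> tuple[float, int]:
    """Naive consolidation math: identically sized servers, purely additive load.

    Returns the utilization of the single remaining host and the
    number of physical servers eliminated.
    """
    return workloads * util_each, workloads - 1

# The article's example: 10 workloads, each at 5% of a server.
host_util, freed = consolidate(10, 0.05)
print(f"one host at {host_util:.0%}, {freed} servers freed")  # one host at 50%, 9 servers freed
```

The additive-load assumption is the same simplification the article makes; real capacity planning would also account for peak overlap and headroom.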

Companies high on virtualization despite fears of security breaches

Companies are feeling more comfortable with the cloud, virtualization, and even software-defined data centers (SDDC) than ever before, despite their fears about security breaches, according to a study due out this month from technology companies HyTrust and Intel. While no one thinks security problems will go away, companies are willing to tolerate the risk in the name of agility, flexibility, and lower costs.

Some 62 percent of executives, network administrators, and engineers surveyed expect more adoption of SDDC in 2016, which can quantifiably drive up virtualization and server optimization, while 65 percent predict that these implementations will be faster.

Still, there are no illusions about security. A quarter of those surveyed say security will still be an obstacle, and 54 percent predict more breaches this year. In fact, security concerns are the No. 1 reason that 47 percent of respondents avoid virtualization, according to the report. They have good reason for concern: a single point of failure in a virtualized platform, such as a hack into the hypervisor software that sits just above the hardware and acts like a shared kernel for everything on top of it, has the potential to expose an entire network, not just a single system.

“There’s a strong desire, especially by senior-level executives, to move forward with these projects because there are tangible benefits,” says Eric Chiu, president and co-founder of HyTrust. The opportunity to increase agility, revenues, and profits trumps making the virtual environment safer, he adds.

Meanwhile, in the IT department, staff tend to focus on what they know how to protect, not necessarily what they need to protect, according to a Kaspersky Lab report. Only a third of organizations surveyed possess strong knowledge of the virtualized solutions that they use, and around one quarter have either a weak understanding of them or none at all.

Dave Shackleford knows this all too well. He teaches a week-long course on virtualization and cloud security for the SANS Institute. By the end of the first day, he usually realizes that 90 percent of the students, a broad mix of system and virtualization/cloud administrators, network engineers, and architects, have very little idea of exactly what they’re up against when it comes to securing virtual infrastructure.

“You’ve got organizations out there that are 90 percent virtualized, which means your whole data center is running in a box out of your storage environment. Nobody is thinking about it this way,” says Shackleford, who is also CEO of Voodoo Security. “It’s not uncommon to go into even really big, mature enterprises and find an enormous number of security controls that they’re unaware of or that are being overlooked in one way or another” in the virtual environment, he adds.

Adding to the confusion, virtualization has caused a shift in IT responsibilities in many organizations, says Greg Young, research vice president at Gartner. The data center usually includes teams trained in network and server operations, but virtualization projects are typically led by the server team. “The network security issues are things they haven’t had to deal with before,” Young says.

The average cost of a data breach in a virtualized environment tops $800,000, according to Kaspersky Lab, and remediation costs push the average closer to $1 million – nearly double the cost of an attack on physical infrastructure.

Companies don’t see technology as the sole answer to these security problems just yet, according to the HyTrust survey. About 44 percent of survey-takers criticize the lack of solutions from current vendors, the immaturity of vendors or new vendor offerings, or issues with cross-platform interoperability. Even as vendors like Illumio, Catbird, CloudPassage, and Bracket Computing emerge with fixes to some virtualization security problems, companies can’t afford to wait for the next security solution.

“If you’re 50 percent virtualized today, in two years you’re going to be 70 percent to 90 percent virtualized, and it’s not going to get any easier to add security,” Shackleford says. “If you start moving things out to Amazon or Azure or any big cloud provider, you want to have your security at least thought through, or ideally in place, before you get there, where you’re going to have even less control than you may have had to date.”

Four steps toward a more secure environment

These security pros agree that companies can indeed have a secure virtual environment today if they gain a clear picture of their virtual infrastructure, use some of the technology and security tools they already have, and better align technology and security in the organization.

1. Get a grip on your virtual infrastructure. “You can have very good security just through planning – taking the steps and making sure the safeguards are there,” Young says. This starts with inventory management. “The security team needs to get the lay of the land with regard to virtualization,” Shackleford says. “You need to try to get a handle on where hypervisors are, where management consoles are, what’s in-house, where it lives, and what the operational processes are around maintaining those. Next, define standards for locking them down. If nothing else, at least lock down the hypervisors,” Shackleford adds. Major vendors like VMware and Microsoft have guides to help you, as does the Center for Internet Security.

2. Rethink the way you look at data and storage. People seriously need to think about their environment as a set of files, Shackleford says. “It’s a very big shift for security professionals to realize that your whole data center runs from your SAN – your storage network. So they need to at least get familiar with the types of controls that they’ve put in place.” Vendors are also rethinking their security postures and welcoming third parties who can provide security fixes. “The problem before was, could I apply fine-grained network security to my virtualized environment, and in the past the network ops people said, ‘Absolutely not. We can’t support it,’” says Chris King, vice president in the networking and security business unit at VMware. “Now there are technologies available that will enable them to revisit that request and that can cut the common thread in these breaches: once an attacker is inside, they’re stuck in that compartment and have to break through another wall in order to attack.”

3. Encrypt the data. It’s top of mind these days, but many companies are still not encrypting, Chiu says. “There’s this outdated thought process, which is ‘if it’s within my four walls, then I don’t need to worry about it,’ but that’s definitely not the case. You need to at least encrypt all customer data and all intellectual property wherever it is in your environment,” Chiu says. “Of course the cloud makes finding it worse because you don’t know for sure where that data is – but encrypting all that data should be a fundamental principle.”

4. Coordinate security and infrastructure teams early on. There needs to be alignment and coordination between security and infrastructure teams at the beginning of virtualization projects, Chiu says. “It’s a lot easier to build in security controls and requirements in the beginning than to bolt something on later.” Security also needs to map to the requirements of the organization for the next several years, he adds. “Does the company plan to virtualize PCI data, HC data, move to a shared environment where business units and application tiers are all going to get collapsed together? All those things matter because your requirements are going to be different.”

This story, “Companies high on virtualization despite fears of security breaches,” was originally published by CSO.

XenApp Essentials is now in the Azure Marketplace

Just a week ago, I wrote about the introduction of XenDesktop Essentials, which allows you to run virtual Windows 10 desktops in the Azure cloud if you meet certain criteria. Today, I noticed that the other shoe has dropped, and XenApp Essentials is available as well.

This is the replacement for Azure RemoteApp, which Microsoft discontinued late last year. Note that this is not a full RDSH desktop; it is a means of delivering XenApp published applications from the Azure cloud. The monthly cost is $12/user, with a minimum of 25 users. You can either provide your own Windows and RDS CALs, or get them from the Azure Marketplace for an additional $6.25/user/month.

I suppose there’s no reason why you couldn’t use XenApp Essentials to deliver published applications to a Windows 10 virtual desktop running on XenDesktop Essentials, so you wouldn’t have to bake all of your applications into your Windows 10 VDI image. But that would effectively double your monthly cost, as XenDesktop Essentials is also $12/user/month, pushing your combined cost for both products to $24/user/month, or $288/user/year. So it probably doesn’t make financial sense, because the Citrix XenApp and XenDesktop Service – which gives you access to the full feature set of both products – is available from Citrix for $270/user/year…unless you just really want a month-to-month pricing model rather than paying for a year at a time.
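The cost comparison above is simple arithmetic, and it checks out. A quick sketch in Python (all dollar figures are the ones quoted in the post, not from a current Citrix price list):

```python
MONTHS_PER_YEAR = 12

xendesktop_essentials = 12.00  # $/user/month, Windows 10 VDI
xenapp_essentials = 12.00      # $/user/month, published apps
full_service_annual = 270.00   # $/user/year, XenApp and XenDesktop Service

combined_monthly = xendesktop_essentials + xenapp_essentials
combined_annual = combined_monthly * MONTHS_PER_YEAR

print(combined_monthly)                       # 24.0
print(combined_annual)                        # 288.0
print(combined_annual - full_service_annual)  # 18.0, the premium for month-to-month billing
```

So stacking the two Essentials offerings costs $18/user/year more than the full-featured service, which is the whole argument of the paragraph above.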

Cloud services you can trust: Office 365 availability

“Your complete office in the cloud” is how we think of Microsoft Office 365. While it gives us enormous pride that one billion people use Office, we deeply appreciate the responsibility we have to meet and exceed our customers’ expectations every day. We recognize that productivity apps are mission critical; using them is how work gets done. It is imperative for us to ensure our service is trustworthy and reliable while we continue to add new capabilities to Office 365. Our measure for this is service availability.

Office 365 availability

Since launching Office 365 two years ago, we have continued to invest deeply in our infrastructure to ensure a highly available service. While this information has been available in detail to our current customers, today we’re making it available to all customers considering Office 365. We measure availability as the number of minutes that the Office 365 service is available in a calendar month as a percentage of the total number of minutes in that month. We call this measure of availability the uptime number. Within this calculation we include our business, government, and education services. The worldwide uptime number for Office 365 for the last four quarters, beginning July 2012 and ending June 2013, has been 99.98%, 99.97%, 99.94%, and 99.97%, respectively. Going forward we will disclose uptime numbers on a quarterly basis on the Office 365 Trust Center.

Here are a few more details about the uptime number:

- The uptime number includes Exchange, SharePoint, Lync, and Office Web Apps, weighted by the number of people using each of these services. Customers use these services together, so all of them are taken into account when calculating uptime.
- The uptime number applies to Office 365 for business, education, and government. We do not include consumer services in this calculation.
- Office 365 ProPlus is an integral part of our service offering but is not included in this calculation of uptime, since it largely runs on users’ devices.
- Individual customers may experience higher or lower uptime percentages compared to the global uptime numbers, depending on location and usage patterns.
- As a commitment to running a highly available service, we have a financially backed Service Level Agreement of 99.9%.

Availability design principles

We have been building enterprise-class solutions for decades. In addition, Microsoft runs a number of cloud services, such as Office 365, Windows Azure, CRM Online, Outlook.com, SkyDrive, Bing, Skype, and Xbox Live, to name a few. We benefit from this diversity of services, leveraging best practices from each service across the others to improve both the design of the software and our operational processes. Below are some examples of best practices applied in the design and operation of Office 365.

Redundancy. Redundancy at every layer – physical, data, and functional:

- We build physical redundancy at the disk/card level within servers, the server level within a datacenter, and the service level across geographically separate datacenters to protect against failures. Each datacenter has facilities and power redundancy, and we have multiple datacenters serving every region.
- To build redundancy at the data level, we constantly replicate data across geographically separate datacenters. Our design goal is to maintain multiple copies of data, whether in transit or at rest, with failover capabilities to enable rapid recovery.
- In addition to physical and data redundancy, we build Office clients to provide functional redundancy, enabling you to stay productive using offline functionality when there is no network connectivity.

Resiliency. Active load balancing and constant recovery testing across failure domains:

- We actively balance load to provide end users the best possible experience in an automated manner. These mechanisms also dynamically prioritize, performing low-priority tasks during low-activity periods and deferring them during high load.
- We have both automated and manual failover to healthy resources during hardware or software failures and monitoring alerts.
- We routinely perform recovery across failure domains to ensure readiness for circumstances requiring failover.

Distributed services. Functionally distributed component services:

- The component services in Office 365, such as Exchange, SharePoint, Lync, and Office Web Apps, are functionally distributed, ensuring that the scope and impact of a failure in one area is limited to that area alone and does not affect others.
- We replicate directory data across these component services so that if one service is experiencing an issue, users are able to log in and use other services seamlessly.
- Our operations and deployment teams benefit from the distributed nature of our service, which simplifies all aspects of maintenance and deployment, diagnostics, repair, and recovery.

Monitoring. Extensive monitoring, recovery, and diagnostic tools:

- Our internal monitoring systems continuously monitor the service for any failure and are built to drive automated recovery of the service. Our systems analyze any deviations in service behavior to alert on-call engineers to take proactive measures.
- We also have outside-in monitoring constantly executing from multiple locations around the world, both from trusted third-party services (for independent SLA verification) and from our own worldwide datacenters, to raise alerts.
- For diagnostics, we have extensive logging, auditing, and tracing. Granular tracing and monitoring help us isolate issues to root cause.

Simplification. Reduced complexity drives predictability:

- We use standardized components wherever possible. This leads to fewer deployment and issue-isolation complexities, as well as predictable failures and recovery.
- We use standardized processes wherever possible. The focus is not only on automation but on making sure that critical processes are repeated and repeatable.
- We have architected the software components to be loosely coupled, so that their deployment and ongoing health don’t require complex orchestration.
- Our change management goes through progressive, staged, instrumented rings of scope and validation before being deployed worldwide.

Human back-up. 24/7 on-call support:

- While we have automated recovery actions where possible, we also have a team of on-call professionals standing by 24×7 to support you. This team includes support engineers, product developers, program managers, product managers, and senior leadership.
- With an entire team on call, we have the ability to provide rapid response and information collection toward problem resolution.
- Our on-call professionals, while providing back-up, also improve the automated systems every time they are called to help.

Continuous learning

We understand that there will be times when you may experience service interruptions. We do a thorough post-incident review every time an incident occurs, regardless of the magnitude of impact. A post-incident review consists of an analysis of what happened, how we responded, and how we will prevent similar incidents in the future. In the interest of transparency and accountability, we share post-incident reviews for major service incidents if your organization was affected. As a large enterprise, we also “eat our own dogfood,” i.e., use our own pre-production service to conduct day-to-day business here at Microsoft. Continuous improvement is a key component of providing a highly available, world-class service.

Consistent communication

Transparency requires consistent communication, especially when you are using online productivity services to conduct your business. We have a number of communication channels, such as email, RSS feeds, and the Service Health Dashboard. As an Office 365 customer, you get a detailed view into the availability of the services that are relevant to your organization. The Office 365 Service Health Dashboard is your window into the current status of your services and your licenses. We continue to drive improvements into the Service Health Dashboard, including tracking the timeliness of updates, so that you have full insight into your services’ health.

We also have some exciting new tools to improve your ability to stay up to date with the service. Last week we released a new feature in the administration portal called Message Center, a central hub for service communications, tenant reporting, and actions required by administrators. Also, by the end of this year, administrators can expect a new mobile app that will provide service health information as well as other communications regarding their service.

Running a comprehensive and evolving service at ever-increasing scale is a challenge, and there will be service interruptions despite our efforts. We want to assure you that we are continually learning and are relentless in our commitment to provide you with a reliable, highly available service that meets your expectations. Service continuity is more than an engineering principle; it is a commitment to customers in our SLA and one of the key pillars of the Office 365 Trust Center (the other four pillars being Privacy, Security, Compliance, and Transparency). This public disclosure of Office 365 uptime is evidence of our ongoing commitment to both service continuity and transparency.
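The uptime measure described above (minutes the service is available in a calendar month, as a percentage of total minutes in that month) is easy to express in code. A minimal sketch in Python; the function is mine, not Microsoft’s:

```python
import calendar

def uptime_percent(year: int, month: int, downtime_minutes: float) -> float:
    """Availability per the Office 365 definition: minutes the service
    was up, as a percentage of all minutes in the calendar month."""
    days_in_month = calendar.monthrange(year, month)[1]
    total_minutes = days_in_month * 24 * 60
    return 100.0 * (total_minutes - downtime_minutes) / total_minutes

# In a 30-day month (43,200 minutes), roughly 8.6 minutes of downtime
# still yields a 99.98% month, the best quarter Microsoft reported.
print(round(uptime_percent(2013, 6, 8.64), 2))  # 99.98
```

Run the other way, the numbers show how tight these targets are: even the weakest reported quarter, 99.94%, allows only about 26 minutes of downtime in a 30-day month.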

VIRTUALIZATION AND CLOUD SERVICES PARTNER SHOWCASES

Citrix | Runecast | Veeam | VMware