XIOLOGIX DELIVERS STORAGE SOLUTIONS WITH REAL VALUE
Storage and systems choices balance a complex, sometimes even conflicting, set of priorities. Partnering with Xiologix ensures you have a guide through the maze of options to make the right choice for your organization's unique needs.
Josh Epstein & Dani Golan, Kaminario – #VMworld – #theCUBE
Josh Epstein, VP of Global Marketing at Kaminario, and Dani Golan, Founder and CEO of Kaminario, sit down with host Stu Miniman at VMworld 2016 at the Mandalay Bay in Las Vegas, Nevada.
Making flash scale-out storage easier for end users | #VMworld
Rami Katz, VP of Product Management at EMC, sits down with hosts John Furrier and John Walls at VMworld 2016 at the Mandalay Bay in Las Vegas, NV.
Modernize and Simplify with EMC Unity Storage
http://emc.im/6055BUhSB

EMC Unity™ simplifies and modernizes today’s datacenter with a powerful combination of enterprise capabilities and cloud-like simplicity for the midrange, in a compact, 2U all-flash design running the latest Intel processors and latest Linux codebase.

The EMC Unity Operating Environment is the most flexible in the industry and can be deployed in many configurations: converged Vblock or VxBlock systems, a dedicated storage array, or a software-defined virtual storage appliance. Watch the video to find out more.
The Sequel | XtremIO
This blog was written by Chhandomay Mandal, Solutions Marketing Director for EMC All-Flash Storage.

Last year, ESG analysts studied the five-year TCO and business impact of consolidating the production, development, test/QA, go-live testing, and VDI environments of a mid-market software development company generating $54M in annual revenue onto a single XtremIO X-Brick. The analysis contrasted XtremIO’s benefits against the traditional hybrid storage arrays (dedicated flash and disk-based systems) the company originally had. The study showed XtremIO delivering five-year storage TCO savings of $2.8M along with an additional $3.2M in business impact, for a total advantage of $6M.

We often get the question: this is exceptionally good, but aren’t all modern all-flash arrays designed to solve the same business challenges? Don’t they all offer the same advantages?

Of course, we know that this isn’t true. Architecture matters: the design choices made in architecting an all-flash array significantly affect how efficiently it can solve modern data center challenges. We explain the differences between XtremIO and other leading all-flash arrays from different vendors, but customers like to see a quantified result.

To better understand how all-flash design choices translate into business value, we asked ESG to analyze four other market-leading all-flash arrays from different vendors for the same scenario. ESG has now published its findings in a new TCO study, using the same modeled development organization and leaving all the model assumptions from the original report unchanged when comparing the TCO of XtremIO to the all-flash offerings from each of the four other vendors. Although EMC launched a complete portfolio of flash solutions with VMAX All Flash, Unity, and DSSD in 2016, the new report focuses only on the XtremIO system examined in the previous study.

In the new study, ESG refers to the other companies as Vendors A through D.
Each provided significant TCO and economic business advantages over the traditional storage system ESG analyzed in 2015. However, ESG found that these other all-flash offerings were each designed to optimize a few strategic advantages and did not offer the entire spectrum of benefits that XtremIO did.

For example, Vendor A was designed to be simple to administer, simple to deploy, and space-efficient. However, its active/passive scale-up design, traditional data protection, and redirect-on-write snapshot technology meant more hardware had to be deployed to satisfy the requirements; the process of dealing with copies of data was simplified, but not optimized.

Vendor B offered a very economical scale-out design using commodity hardware and advanced QoS, but it also had to deploy more hardware, and its limited redirect-on-write, read-only snapshots meant workarounds had to be employed to manage copies of data.

ESG found that Vendor C’s flash array provided good performance. However, the need to integrate an external storage virtualization solution to provide copy-on-write-based data services greatly reduced its performance and increased the solution’s complexity.

Finally, Vendor D lacked the management capabilities of the other products and could not provide the predictable performance required in a modern data center. Its lack of copy data management functionality led ESG to suggest it would significantly lengthen the time needed to create, manage, and roll forward/back copies for test, development, QA, and go-live operations.

The net result? It was self-evident when ESG compared all five all-flash offerings side by side. As ESG explained, satisfying the organization’s requirements with less physical hardware generally meant lower costs for acquisition, support and maintenance, and power and cooling.
The simplified management and monitoring, along with XtremIO’s zero-impact, easy-to-use virtual copy and refresh/restore capabilities, helped minimize storage-related administrative costs. See the comparison of storage-related TCO across all five all-flash vendors in Figure 1 below.

Figure 1: ESG’s Modeled Five-Year Storage TCO for All-Flash Vendors

In addition to offering the lowest storage TCO among the all-flash vendors, ESG found that XtremIO delivered the largest economic advantage to the modeled software development organization over the five-year period. Compared with the other four vendors, the XtremIO solution would be expected to provide more predictable performance and quicker recovery for the production database, resulting in increased customer retention, fewer lost sales, and the ability to benefit from running daily analytics on the same system.

Among the all-flash arrays ESG reviewed, XtremIO is the only one to offer truly advanced copy technology with XtremIO Virtual Copies (XVC) and integrated Copy Data Management (iCDM). These greatly simplify and accelerate the job of providing copies of data for test, development, QA, and go-live testing with little to no impact on the production database. ESG found that the ability to quickly roll changes to these copies forward and back results in shorter development cycles for new products and quicker patching and enhancements to existing product lines.
This faster time to product revenue and increased customer retention for existing products help further justify an investment in XtremIO.

Figure 2 below shows ESG’s findings on the additional economic impact the modeled organization would be expected to realize by deploying an XtremIO solution rather than any of the four other vendors’ all-flash solutions over the five-year modeled timeframe.

Figure 2: Additional Economic Benefits Expected from the XtremIO Deployment over Other AFAs

Combining the storage TCO benefits and the additional business impact, ESG finds that the modeled organization could expect to realize approximately $1-3 million more over the five-year period by deploying XtremIO rather than any of the other four all-flash arrays, as shown in Figure 3 below.

Figure 3: Expected TCO Savings and Economic Benefits Gained by Deploying XtremIO vs. Other AFAs

Get the XtremIO vs. other all-flash arrays TCO study here for ESG’s thorough analysis. Enjoy!
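The arithmetic behind the study’s headline figure is simple: the total advantage is storage TCO savings plus additional business impact. A minimal sketch, using only the XtremIO figures quoted above ($2.8M TCO savings, $3.2M business impact); the helper function name is ours, not ESG’s:

```python
def total_advantage(tco_savings_m, business_impact_m):
    """Five-year advantage ($M) = storage TCO savings + additional business impact."""
    return tco_savings_m + business_impact_m

# Figures from the original 2015 XtremIO study, in $M over five years.
xtremio = total_advantage(tco_savings_m=2.8, business_impact_m=3.2)
print(f"XtremIO total five-year advantage: ${xtremio:.1f}M")
```

Plugging in a competing array’s modeled savings and impact the same way is what produces the $1-3M gaps shown in Figure 3.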
Seven Technology Trends Driving the Future of All-flash Storage in the Enterprise
Last week, the Kaminario team presented at Tech Field Day 10 in San Jose. This was a unique chance to talk to industry experts about our vision for the all-flash space and to preview how our technology will extend into the future.

We believe we offer the most flexible, scalable, cost-effective all-flash array platform on the market today. Our fifth-generation, software-defined architecture provides unique advantages for customers and will enable us to take advantage of exciting technology trends that are rapidly coming into focus. We focused on seven technology trends that will transform flash storage, and their implications.

Slowdown of Moore’s Law. Moore’s Law is no longer delivering 2x yearly performance increases. Standard hardware acceleration will compensate for some, but not all, of the demand for cost-effective performance increases. Implication for AFAs: scale-out approaches will become even more attractive.

Networking advancement. InfiniBand EDR, 100Gb Ethernet, and Omni-Path are coming, and there is a clear line of sight to 400Gb network technologies. Implication for AFAs: further strengthens the prospects for scale-out and lowers the motivation to keep storage right next to compute.

3D NAND (and possibly QLC) remains dominant. NAND density (cost) will continue to improve in the next five years while endurance declines. Implication for AFAs: NAND will remain the leading storage media and will be the commodity curve that replaces HDDs in the datacenter.

The rise of non-volatile memory (3D XPoint, ReRAM, and others). NVM prices are much closer to DRAM than to flash, and sub-2-microsecond performance is much faster than flash but still far from DRAM. Implication for AFAs: NVM is too expensive to replace NAND and should be used wisely; it’s not going to replace NAND for five-plus years.

NVMe.
NVMe is much more efficient than SCSI and will soon replace SATA for direct attach. Implication for AFAs: lower benefit, since these systems already aggregate the performance of several SSDs; the controller is the bottleneck, so decreasing latency from disk to controller is less interesting.

NVMe fabric. NVMe over fabrics extends the efficiency of the local NVMe interface across the fabric. Implication for AFAs: history teaches us to expect slow market adoption for external storage connectivity.

NVMe network shelves. Shelves with NVMe SSDs and RDMA network connectivity will be available in the next several months. Implication for AFAs: NVMe network shelves create an interesting opportunity to further decouple capacity from compute.

In my talk, I described how, despite lots of buzz in the industry, these technologies are not yet enterprise-ready. And even when they do reach that level, for the foreseeable future they will be so expensive that they will fit only niche, high-performance, “tier 0” applications.

So, how can we use these new technologies? Kaminario’s agile software architecture opens up possibilities to use them in unique ways to further our goal of delivering the most flexible, scalable, cost-effective all-flash platform on the market. I talked through a few such forward-looking scenarios.

First, there is definitely the opportunity to adopt denser flash faster. Kaminario has endurance-optimization IP that would help us adopt new, higher-density flash sooner.

We also discussed the idea of replacing DRAM with NVM technologies. While it is clear that NVM is too expensive to replace NAND, NVM is more cost-efficient than DRAM. This creates a real opportunity to replace DRAM for data and metadata caching, leveraging Kaminario’s highly flexible metadata management paradigm.

Finally, we discussed the potential to completely decouple storage controllers from capacity, leveraging advances in NVMe, NVMe fabric, and NVMe network shelves.
This is a direct extension of Kaminario’s scale-up and scale-out architecture available today. This vision would be the ultimate in cost-efficient, agile storage infrastructure: IT organizations would be able to optimize for cost, performance, and capacity while maintaining a highly predictable infrastructure as the data center scales indefinitely.

I really enjoyed the opportunity to present our vision and get feedback from the Storage Field Day panel. Make sure you check out the full set of videos here.
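The argument that NVM sits between DRAM and NAND, too expensive to replace flash but cheap enough to stand in for DRAM caches, rests on where each medium falls in the latency hierarchy. A rough sketch with order-of-magnitude figures (these ballpark numbers are our illustrative assumptions, not figures from the talk):

```python
# Approximate read-access latencies per storage tier, in nanoseconds.
# Order-of-magnitude illustration only; real parts vary widely.
LATENCY_NS = {
    "DRAM": 100,                 # ~100 ns
    "NVM (e.g. 3D XPoint)": 1_000,    # ~1 us: far slower than DRAM
    "NAND flash SSD": 100_000,   # ~100 us: far slower than NVM
    "HDD": 10_000_000,           # ~10 ms: the tier flash is displacing
}

for medium, ns in LATENCY_NS.items():
    slowdown = ns / LATENCY_NS["DRAM"]
    print(f"{medium:>22}: {ns:>10,} ns ({slowdown:,.0f}x DRAM)")
```

Each tier is roughly two orders of magnitude from its neighbors, which is why NVM makes sense as a metadata/data cache layer rather than as a wholesale NAND replacement.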
Why Performance Consistency (and All Flash Virtual SAN) Matters. – Virtual Blocks
If an application is occasionally timing out or experiencing inconsistent performance, shouldn’t we consider it “down or unavailable”? Years ago I walked into a meeting with a customer, and as the CIO walked in he asked his staff what their prime directive was. Without skipping a beat, they all stated in unison: “To provide a reliable and consistent computing environment.” As I worked with this customer, I discovered that this was a regular event in meetings.

I asked a staff member later over lunch what exactly this mantra meant to them. He described it in the context of his car. While he might want to modify his car to go 200 mph, it was more important to have a car that could get him to work on time every day and not spontaneously slow down or crash, even if on the other four days of the week he would have a two-minute commute.

Hybrid storage works by combining the speed of flash with slow but lower-cost-per-capacity magnetic media to deliver fast, cost-effective storage. There are a lot of tricks to try to hide the slower disk (writes being coalesced together, large read caches, and read-ahead cache algorithms), but fundamentally there will be workloads where read I/O must be served from the magnetic disks, and this introduces variability. We call a read request that is not found in the cache a “cache miss.” As these misses add up, the magnetic disks can become a bottleneck. There may be confusion about this in the industry, but hybrid storage systems fundamentally cannot cheat the laws of physics.

The end result is inconsistency. In some cases a huge number of end-user queries on an application may be lightning fast. When data in an untouched region is requested, however, things can change. When a doctor pulls up an old patient note and response times go from 1-2 seconds to 2 minutes, there is a noticeable shift in end-user experience. Sometimes this difference in experience is acceptable.
Other times it will result in countless calls to the help desk and lost productivity for expensive resources. You can put a rocket on an ordinary passenger car to make it go 200 mph, but it can only sustain that for a certain length of time.

As a former storage admin, I am familiar with the endless tricks we employed to try to make magnetic disks perform consistently. We used wide striping and placed data on the outside (faster-moving) part of the disks. We deployed smarter and smarter DRAM and NVRAM caching systems. We used log-structured file systems and data structures (at the expense of streaming read performance). We partitioned caches and adjusted their block sizes. We used various “nerd knobs” to adjust data reduction features for specific pools, workloads, or caches. Much like trying to make my four-cylinder mid-size passenger car drive 200 mph, you eventually hit a wall of diminishing returns. Hybrid is not the path forward for business-critical applications that need highly consistent latency.

How do we transition to seamless, consistent low latency and the amazing end-user experiences that come with it? Despite claims to the contrary, the only real solution to this problem is to move away from magnetic storage to persistent memory such as flash. All-flash systems can deliver amazingly low latency for even the most exotic workloads, like in-memory databases. Previously, all-flash was reserved for only the most important applications for cost reasons, but now things are changing. The good news is that Virtual SAN’s space-efficiency features can make all-flash cheaper than competing hybrid solutions. While Bugattis have held their value, the price of all-flash Virtual SAN has come down quite a bit.
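The cache-miss problem above is easy to quantify: even at high hit rates, average read latency is dominated by the miss penalty of the magnetic disks. A minimal model, with illustrative latency assumptions (not vendor measurements):

```python
def avg_latency_ms(hit_rate, flash_ms=0.2, disk_ms=8.0):
    """Expected read latency for a hybrid array at a given cache hit rate.

    flash_ms and disk_ms are assumed service times for a cache hit
    (served from flash) and a cache miss (served from magnetic disk).
    """
    return hit_rate * flash_ms + (1 - hit_rate) * disk_ms

for hit_rate in (0.99, 0.95, 0.80):
    print(f"hit rate {hit_rate:.0%}: avg read latency {avg_latency_ms(hit_rate):.2f} ms")
```

Under these assumptions, dropping from a 99% to an 80% hit rate swells average latency several-fold, and tail latency (the doctor’s two-minute patient note) is worse still, since every miss pays the full disk penalty regardless of the average.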
If you have not looked at an all-flash Virtual SAN with these new features, you may be shocked at how cost-effectively you can deliver reliable and consistent infrastructure to more users and applications.

John Nicholson is a Senior Technical Marketing Manager in the Storage and Availability Business Unit. He can be found on Twitter @Lost_Signal.

The picture of the Beetle is from Steve Jurvetson and is licensed under CC BY-SA 2.0. The picture of the Bugatti is from Alexandre Prévot and is licensed under CC BY-SA 2.0.
Gartner’s predictions — a look at the top 10 tech trends
ORLANDO, Fla. — Three of Gartner’s top 10 technology trends envision significant changes, and problems, for data centers.

The number of systems managed on premises is in decline as more work moves to cloud providers, SaaS vendors, and others. But that trend doesn’t mean an IT manager’s job is getting easier. “IT shops are realizing that as we move more work off-premises, it makes the job more complex,” said David Cappuccio, the Gartner analyst who develops the research firm’s annual list. He presented it Monday at this year’s Symposium/ITxpo here in Orlando.

The “Disappearing Data Center” was the top-ranked technology trend. Another point about data centers, “Stranded Capacity,” listed at No. 6, is closely related. Gartner, through its user surveys, found that 28% of the physical servers in data centers are “ghost” servers, often called “zombie” servers: systems that are in service but not running workloads.

Another problem Gartner found is that 40% of racks are underprovisioned. That means data center managers are wasting space by underutilizing racks and might be able to shrink their data centers through better management, said Cappuccio. Servers are also operating at just 32% of their performance capacity.

Another data center-related trend, No. 5 on Gartner’s list, was the idea of Data Center as a Service. Instead of thinking about the data center as the center of computing resources, managers are seeing their role as delivering services to the business.

Other trends included interconnect fabrics, listed at No. 2, which are increasingly available in multi-tenant data centers. They provide networks that give users access to multiple services, such as the cloud services offered by Google, Amazon, and Microsoft, as well as SaaS providers and analytics services. This gives users more flexibility to find the best platform and price, as well as redundancy.
The third top trend concerned the use of containers, microservices, and application streams. Virtual machines need an operating system, but containers require only what’s needed to run a specific program. Containers can last weeks, days, or seconds; “they drive new ways of looking at development,” said Cappuccio.

In fourth place is “business-driven IT.” Survey data shows that at least 29% of IT spending happens outside the IT department. “Business is not willing to wait for IT,” said Cappuccio.

Two of the top 10 trends involved the internet of things (IoT), in particular emerging IoT platforms, which in many cases are incompatible. As for the other, remote device management: “This could be a major headache,” said Cappuccio.

Micro and edge computing environments, next to last among the trends, involve putting compute resources where they are most needed. That may include installing analytical capabilities at distant worksites that can be managed, for the most part, remotely.

The final trend, as pegged by Gartner, concerned the skills needed to manage emerging environments, including an IoT architect, someone to manage cloud sprawl, and a capacity and resource manager.
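Gartner’s survey figures compound in an unflattering way: if 28% of servers are ghosts and the active ones run at 32% of performance capacity, a rough back-of-envelope estimate (a simplification we make for illustration; Gartner does not combine the figures this way) puts effective fleet utilization well under a quarter:

```python
# Survey figures quoted in the article above.
ghost_fraction = 0.28    # servers in service but running no workloads
perf_utilization = 0.32  # performance capacity used by active servers

# Rough combined estimate: active servers times their utilization.
effective = (1 - ghost_fraction) * perf_utilization
print(f"Effective fleet utilization: {effective:.0%}")
```

That gap between deployed and delivered capacity is the headroom Cappuccio suggests better management could reclaim, potentially shrinking the data center itself.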