XIOLOGIX DELIVERS STORAGE SOLUTIONS WITH REAL VALUE
Storage and systems choices are made by balancing a complex, sometimes even conflicting set of priorities. Partnering with Xiologix gives you a guide through the maze of options, so you can make the right choice for the unique needs of your organization.
The Top Reasons to Choose Unity Hybrid Storage
Unity™ delivers the ultimate in storage simplicity and value in the midrange hybrid storage market by combining a modern architecture with radical simplicity and flash technology. Unity’s hybrid flash design comes with built-in data protection, cloud-based management, inline data reduction efficiencies, and all-inclusive software to deliver the best combination of hybrid economics across price, density, and TCO.
Unity All-Flash Family Specifications
Dell EMC Unity™ is the only storage system that successfully meets all four requirements of today’s IT professionals.
ESG Report: Unity – Right Sized Storage for the Midrange Market
What does the term “midrange” mean today? What about “midsized” or “midmarket”? Definitions and opinions vary. But in the realm of storage hardware, most people would agree that midmarket organizations with midsized IT requirements don’t want stripped-down, allegedly simplified versions of enterprise arrays. Such organizations usually prefer to deploy midrange storage that has been designed from scratch with a midmarket organization’s particular needs in mind.

True midrange storage is built for organizations that don’t typically have extensive resources at hand. These organizations tend to employ generalist IT admins who spend their time working hard just to keep the lights on. Such admins can be wary of using formerly large-scale, stripped-down gear; instead, they want right-sized, appropriate storage. It’s important to them because, as Figure 1 shows, they deal with big challenges.1

IT staff at these organizations often work under more pressure than teams working at huge, resource-rich corporations. First, they have a wider range of responsibilities. Second, they are hindered because they must rely on IT solutions that superficially appeared to be meant for them, but ended up being just “minimalist” versions of offerings born much larger.
The Top Reasons to Choose Unity All-Flash Storage
Unity™ delivers the ultimate in storage simplicity and value in the midrange all-flash storage market by combining a modern flash design with radical simplicity. The Unity all-flash array comes with built-in data protection, cloud-based tiering and management, inline data reduction efficiencies, and all-inclusive software to deliver the best combination of all-flash midrange storage economics across price, density, and TCO.
Dell EMC All-Flash Unity Storage
Fuel IT simplicity with Dell EMC All-Flash Unity storage—delivering the ultimate in simplicity and value so you can speed deployment, streamline management, and seamlessly tier storage to the cloud. Learn more: http://dellemc.com/Unity
Automate and Transform with Dell EMC Unity All Flash Storage
http://del.ly/605285CXQ This video will walk you through the Dell EMC Unity midrange storage system. It will highlight the features and benefits as well as cover cloud management capabilities. It is a great way to familiarize yourself with one of Dell EMC’s top storage solutions!
Flash Storage Buyers Guide
A comprehensive guide to evaluating flash storage.
Josh Epstein & Dani Golan, Kaminario – #VMworld – #theCUBE
Josh Epstein, VP of Global Marketing at Kaminario & Dani Golan, Founder & CEO of Kaminario sit down with host Stu Miniman at VMworld 2016, at the Mandalay Bay in Las Vegas, Nevada.
Ten Ways to Reduce Cost While Modernizing Your IT
Guidance and direction on opportunities that exist in every data center to reduce operational and capital costs while also helping to extend the data center into the next generation of the modern data center.
Innovations in All-Flash Storage Deliver a New Approach to Unstructured Data
Pure Storage FlashArray//m Launch Presentation
A recap of the Pure Storage FlashArray//m launch presentation hosted by Information Technology Professionals and Pure Storage in Milwaukee, Wisconsin. Senior Systems Engineer Sean O’Brien walks you through the different ways organizations from large enterprises to small K-12 school districts are benefiting from Pure Storage technology. Gone are the days of having to plan for a two-week migration cycle. With Pure Storage, you can upgrade an array in minutes and controllers in hours without seeing any decrease in application performance. Pure Storage is bringing innovation, customer focus, and simplicity to the marketplace. Paul Hager, CEO of Information Technology Professionals, also offered his thoughts on why they became a Pure Storage partner. ITP is one of the fastest-growing Managed Service Providers in the country. http://www.PureStorage.com | http://www.ITProsUSA.com
Making flash scale-out storage easier for end users | #VMworld
Rami Katz, VP of Product Management at EMC, sits down with hosts John Furrier and John Walls at VMworld 2016 at the Mandalay Bay in Las Vegas, NV.
Modernize and Simplify with EMC Unity Storage
http://emc.im/6055BUhSB

EMC Unity™ simplifies and modernizes today’s datacenter with a powerful combination of enterprise capabilities and cloud-like simplicity for the midrange, in a compact, 2U all-flash design running the latest Intel processors and latest Linux codebase. The EMC Unity Operating Environment is the most flexible in the industry and can be deployed in many configurations: converged Vblock or VxBlock systems, a dedicated storage array, or a software-defined virtual storage appliance. Watch the video to find out more.
The Sequel | XtremIO
This blog was written by Chhandomay Mandal, Solutions Marketing Director for EMC All-Flash Storage.

Last year, ESG analysts studied the 5-year TCO and business impact of consolidating the production, development, test/QA, go-live testing, and VDI environments of a mid-market software development company generating $54M in annual revenue onto a single XtremIO X-Brick. The analysis contrasted XtremIO’s benefits against the traditional hybrid storage arrays (dedicated flash and disk-based systems) that the company originally had. The study showed XtremIO offering a 5-year storage TCO savings of $2.8M along with an additional $3.2M in business impact, for a total $6M advantage.

Oftentimes we get the question: this is exceptionally good, but aren’t all modern-era all-flash arrays designed to solve the same business challenges? Don’t they all offer the same advantages? Of course, we know that isn’t true. Architecture matters: the design choices made in architecting an all-flash array significantly affect how efficiently it can solve modern data center challenges. We explain the differences between XtremIO and other leading all-flash arrays from different vendors, but customers like to see a quantified result.

To better understand how all-flash design choices translate to business value, we asked ESG to analyze four other market-leading all-flash arrays from different vendors for the same scenario. ESG has now published its findings in a new TCO study, using the same modeled development organization and changing none of the model assumptions from the original report when comparing the TCO of XtremIO to the all-flash offerings from each of these other four vendors. Although EMC launched a complete portfolio of flash solutions with VMAX All Flash, Unity, and DSSD in 2016, the new report focuses only on XtremIO, which was examined in the previous study. In the new study, ESG refers to the other companies as Vendors A through D.
Each provided significant TCO and economic business advantages over the traditional storage system that ESG analyzed in 2015. However, ESG found that all of these other all-flash offerings were designed to optimize a few strategic advantages and did not offer the entire spectrum of benefits that XtremIO did.

For example, Vendor A was designed to be simple to administer, simple to deploy, and space-efficient. However, its active/passive scale-up design, traditional data protection, and redirect-on-write snapshot technology meant more hardware had to be deployed to satisfy the requirements; the process of dealing with copies of data was simplified, but not optimized.

Vendor B offered a very economical scale-out design using commodity hardware and advanced QoS, but also had to deploy more hardware, and its limited redirect-on-write, read-only snapshots meant workarounds had to be employed to manage copies of data.

ESG found that the flash array from Vendor C provided good performance. However, the need to integrate an external storage virtualization solution to provide copy-on-write-based data services greatly reduced its performance capabilities and increased the complexity of the solution.

Finally, Vendor D lacked the management capabilities of the other products and could not provide the predictable performance that is required in a modern data center. The lack of copy data management functionality led ESG to suggest that it would significantly lengthen the time taken to create, manage, and roll forward/back copies for test, development, QA, and go-live operations.

The net result? It was self-evident when ESG compared all five all-flash offerings side by side. As ESG explained, the ability to satisfy the organization’s requirements with less physical hardware generally meant lower costs for acquisition, support and maintenance, and power and cooling.
The simplified management and monitoring, along with the zero-impact, easy-to-use virtual copy and refresh/restore capabilities of XtremIO, helped minimize storage-related administrative costs. See the comparison of storage-related TCO across all five all-flash vendors in Figure 1 below.

Figure 1: ESG’s Modeled Five-Year Storage TCO for All-Flash Vendors

In addition to offering the lowest storage TCO among the all-flash vendors, ESG also found that XtremIO delivered the largest economic advantage to the modeled software development organization over the five-year period. Compared to most of the other four vendors, the XtremIO solution would be expected to provide more predictable performance and quicker recovery for the production database, resulting in increased customer retention, fewer lost sales, and the ability to benefit from running daily analytics on the same system.

Among the all-flash arrays ESG reviewed, XtremIO is the only one to offer truly advanced copy technology with XtremIO Virtual Copies (XVC) and integrated Copy Data Management (iCDM). These greatly simplify and accelerate the job of providing copies of data for test, development, QA, and go-live testing with little to no impact on the production database. ESG found that the ability to quickly roll changes to these copies forward and back results in shorter development cycles for new products and quicker patching and enhancements to existing product lines.
This faster time to product revenue and increased customer retention for existing products help further justify an investment in XtremIO.

Figure 2 below shows ESG’s findings on the additional economic impact the modeled organization would be expected to realize by deploying an XtremIO solution rather than any of the four other vendors’ all-flash solutions over this 5-year modeled timeframe.

Figure 2: Additional Economic Benefits Expected from the XtremIO Deployment over Other AFAs

Combining the storage TCO benefits and additional business impact, ESG reveals that the modeled organization could expect to realize approximately $1-3 million more over the 5-year period by deploying XtremIO rather than any of the other four all-flash arrays, as shown in Figure 3 below.

Figure 3: Expected TCO Savings and Economic Benefits Gained by Deploying XtremIO vs. Other AFAs

Get the XtremIO vs. other all-flash arrays TCO study here for the thorough analysis conducted by ESG. Enjoy!
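As a quick sanity check, the study’s headline numbers combine by simple addition. The two inputs below are the figures quoted above from the original ESG report; nothing else is assumed:

```python
# Back-of-envelope combination of the ESG study's headline 5-year figures.
storage_tco_savings = 2.8e6  # 5-year storage TCO savings vs. the original hybrid arrays
business_impact = 3.2e6      # additional 5-year business impact found by ESG

total_advantage = storage_tco_savings + business_impact
print(f"Total 5-year advantage: ${total_advantage / 1e6:.1f}M")  # -> $6.0M
```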
Seven Technology Trends Driving the Future of All-flash Storage in the Enterprise
Last week, the Kaminario team presented at Tech Field Day 10 in San Jose. This was a unique chance to talk to industry experts about our vision for the all-flash space and provide a preview of how our technology will extend into the future. We believe we offer the most flexible, scalable, cost-effective all-flash array platform on the market today. Our fifth-generation, software-defined architecture provides unique advantages for customers and will enable us to take advantage of exciting technology trends that are rapidly coming into focus. We focused on seven technology trends that will transform flash storage, and their implications.

Slowdown of Moore’s Law. Moore’s Law is no longer delivering 2x yearly performance increases. Standard hardware acceleration will compensate for some, but not all, of the demand for cost-effective performance increases. Implication for AFAs: scale-out approaches will become even more attractive.

Networking advancement. InfiniBand EDR, 100Gb Ethernet, and Omni-Path are coming, and there is a clear line of sight to 400Gb network technologies. Implication for AFAs: further strengthens the prospects for scale-out and lowers the motivation to keep storage right next to compute.

3D NAND (and possibly QLC) remain dominant. NAND density (cost) will continue to improve in the next five years while endurance declines. Implication for AFAs: NAND will remain the leading storage media and will be the commodity curve that replaces HDDs in the datacenter.

The rise of non-volatile memory (3D XPoint, ReRAM, and others). NVM pricing is much closer to DRAM than to flash. Performance in the sub-microsecond to 2-microsecond range is much faster than flash but still far from DRAM. Implication for AFAs: NVM is too expensive to replace NAND and should be used wisely; it’s not going to replace NAND for five-plus years.

NVMe.
NVMe is much more efficient than SCSI and will soon replace SATA as the direct-attach interface. Implication for AFAs: lower benefit for AFAs, since these systems already aggregate the performance of several SSDs; the controller is the bottleneck, so decreasing latency from disk to controller is less interesting.

NVMe fabric. NVMe over Fabrics extends the efficiency of the local NVMe interface over the fabric. Implication for AFAs: history teaches us to expect slow market adoption for external storage connectivity.

NVMe network shelves. Shelves with NVMe SSDs and RDMA network connectivity will be available in the next several months. Implication for AFAs: NVMe network shelves create an interesting opportunity to further improve the decoupling of capacity from compute.

In my talk, I described how, despite lots of buzz in the industry, these technologies are not yet enterprise-ready. And even when they do reach that level, for the foreseeable future they will be so expensive that they will fit only niche, high-performance, “tier 0” applications. So, how can we use these new technologies? Kaminario’s agile software architecture opens up some possibilities to use the technologies in unique ways to further our goal of delivering the most flexible, scalable, cost-effective all-flash platform on the market. I talked through a few such forward-looking scenarios.

First, there is definitely the opportunity to adopt denser flash faster. Kaminario has endurance-optimization IP that would help us adopt new, higher-density flash sooner. We also discussed the idea of replacing DRAM with NVM technologies. While it is clear that NVM is too expensive to replace NAND, NVM is more cost-efficient than DRAM. This creates a real opportunity to replace DRAM for data and metadata caching, leveraging Kaminario’s highly flexible metadata management paradigm. Finally, we discussed the potential to completely decouple storage controllers from capacity, leveraging advances in NVMe, NVMe over Fabrics, and NVMe network shelves.
This is a direct extension of Kaminario’s scale-up and scale-out architecture available today. This vision would be the ultimate in cost-efficient, agile storage infrastructure: IT organizations would be able to optimize for cost, performance, and capacity, maintaining a highly predictable infrastructure as the data center scales indefinitely. I really enjoyed the opportunity to present our vision and get feedback from the Storage Field Day panel. Make sure you check out the full set of videos here.
Why Performance Consistency (and All Flash Virtual SAN) Matters. – Virtual Blocks
If an application is occasionally timing out or experiencing inconsistent performance, shouldn’t we consider it “down or unavailable”? Years ago I walked into a meeting with a customer, and as the CIO walked in he asked his staff what their prime directive was. Without skipping a beat they all stated in unison: “To provide a reliable and consistent computing environment.” As I worked with this customer I discovered that this was a regular event in meetings. I asked a staff member later over lunch what exactly this mantra meant to them. He described it in the context of his car: while he might want to modify his car to go 200mph, it was more important to have a car that could get him to work on time every day and not spontaneously slow down or crash on occasion, even if on the other four days of the week he would have a two-minute commute.

Hybrid storage works by combining the speed of flash with slower but lower-cost-per-capacity magnetic media to deliver fast, cost-effective storage. There are a lot of tricks to try to hide the slower disk (writes being coalesced together, large read caches, and read-ahead cache algorithms), but fundamentally there will be workloads where read IO must be served from the magnetic disks, and this introduces a certain variability. We call a read request that is not in the cache a “cache miss.” As these misses add up, the magnetic disks can become a bottleneck. There may be confusion about this in the industry, but hybrid storage systems fundamentally cannot cheat the laws of physics.

The end result is inconsistency. In some cases a huge number of end-user queries on an application may be lightning fast. When data in an untouched region is requested, however, things can change. When a doctor pulls up an old patient note and responses go from 1-2 seconds to 2 minutes, there is a noticeable shift in end-user experience. Sometimes this difference in experience is acceptable.
Other times it will result in countless calls to the helpdesk and lost productivity of expensive resources. You can put a rocket on an ordinary passenger car to make it go 200mph, but it can only sustain that for a certain length of time.

As a former storage admin I am familiar with the endless tricks we employed to try to make magnetic disks perform consistently. We used wide striping and placed data on the outside (faster-spinning) part of the disks. We deployed smarter and smarter DRAM and NVRAM caching systems. We used log-structured file systems and data structures (at the expense of streaming read performance). We partitioned cache and adjusted its block size. We used various “nerd knobs” and adjusted data reduction features for specific pools, workloads, or caches. Much like trying to make my four-cylinder midsized passenger car drive 200mph, you eventually hit a wall of diminishing returns. Hybrid is not the path forward for business-critical applications that need highly consistent latency.

How do we transition to seamless, consistent low latency and the amazing end-user experiences that come with it? Despite claims to the contrary, the only real solution to this problem is to move away from magnetic storage to persistent memory such as flash. All-flash systems can deliver amazingly low latency for even the most exotic workloads, like in-memory databases. Previously, all-flash was reserved for only the most important applications for cost reasons, but now things are changing. The good news is that Virtual SAN’s space-efficiency features can make all-flash cheaper than competing hybrid solutions. While Bugattis have held their value, the price of all-flash Virtual SAN has come down quite a bit.
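The cache-miss math behind that inconsistency is easy to sketch. The hit and miss latencies below are illustrative assumptions (roughly flash vs. a 7.2K RPM spindle), not measurements from any particular array:

```python
def avg_read_latency_ms(hit_rate, hit_ms=0.2, miss_ms=8.0):
    """Expected read latency for a hybrid array at a given cache hit rate.

    hit_ms and miss_ms are assumed values: ~0.2 ms for a flash/cache hit,
    ~8 ms for a read that must be served from spinning disk.
    """
    return hit_rate * hit_ms + (1 - hit_rate) * miss_ms

# A small drop in hit rate multiplies the average (and tail) latency:
for hit_rate in (0.99, 0.95, 0.80):
    print(f"{hit_rate:.0%} hit rate -> {avg_read_latency_ms(hit_rate):.2f} ms average read")
```

Even at a 99% hit rate, the occasional slow miss dominates the tail; as the hit rate slips, misses queue up on busy spindles and produce exactly the kind of variability described above.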
If you have not looked at an All-Flash Virtual SAN with these new features, you may be shocked at how cost-effectively you can deliver reliable and consistent infrastructure to more users and applications.

John Nicholson is a Senior Technical Marketing Manager in the Storage and Availability Business Unit. He can be found on Twitter @Lost_Signal.

The picture of the Beetle is from Steve Jurvetson and is licensed under CC BY-SA 2.0. The picture of the Bugatti is from Alexandre Prévot and licensed under CC BY-SA 2.0.
Gartner’s predictions — a look at the top 10 tech trends
ORLANDO, Fla. — Three of Gartner’s top 10 technology trends envision significant changes — and problems — for data centers. The number of systems managed on premises is on the decline as more work is moved to cloud providers, SaaS vendors, and others. But that trend doesn’t mean an IT manager’s job is getting easier. “IT shops are realizing that as we move more work off-premises, it makes the job more complex,” said David Cappuccio, the Gartner analyst who develops the research firm’s annual list. He presented it Monday at this year’s Symposium/ITxpo here in Orlando.

The “Disappearing Data Center” was the top-ranked technology trend. But another point about data centers, “Stranded Capacity” — listed at No. 6 — is closely related. Gartner, through its user surveys, found that 28% of the physical servers in data centers are “ghost” servers, or what are often called “zombie” servers: systems that are in service but not running workloads. Another problem Gartner found in data centers is that 40% of racks are underprovisioned. That means data center managers are wasting space by underutilizing racks and might be able to shrink the size of their data centers through better management, said Cappuccio. Servers are also operating at just 32% of their performance capacity.

Another data center-related trend, No. 5 on Gartner’s list, was the idea of Data-Center-as-a-Service. Instead of thinking about the “data center” as the center of computing resources, managers are seeing their role as a deliverer of services to the business. Other trends included interconnect fabrics, listed at No. 2, which are increasingly available in multi-tenant data centers. They provide networks that give users access to multiple services, such as the cloud services offered by Google, Amazon, and Microsoft, as well as SaaS providers and analytics services. This gives users more flexibility to find the best platform and price, as well as redundancy.
The third top trend concerned the use of containers, microservices, and application streams. Virtual machines need an operating system, but containers require only what’s needed to run a specific program. Containers can last weeks, days, or seconds — “they drive new ways of looking at development,” said Cappuccio. In fourth place is “business-driven IT.” Survey data shows that at least 29% of IT spending happens outside the IT department. “Business is not willing to wait for IT,” said Cappuccio.

Two of the top 10 trends involved the internet of things (IoT), in particular emerging IoT platforms, which in many cases are incompatible. As for the other IoT trend, remote device management: “This could be a major headache,” said Cappuccio. Micro and edge computing environments, next to last on the list, involve putting compute resources in the places where they are most needed. That may include installing analytical capabilities at distant worksites that can be managed, for the most part, remotely. The final trend, as pegged by Gartner, concerned the skills needed to manage emerging environments, including an IoT architect, someone to manage cloud sprawl, and a capacity and resource manager.
Why you need to care about NVMe over Fabrics – now
Part 1 of a 2-part series – You can find the second part of the series here.

Anyone paying even the slightest bit of attention at the recent Flash Memory Summit would come away with three compelling observations regarding the future of storage. Importantly, these observations have a significant and profound impact on critical infrastructure decisions that IT leaders are making today. And if made wrong, the implications could be devastating for the organization.

#1 – Flash is the Technology of Choice for Storage

Flash is very fast already, and getting much faster with the release of 3D NAND flash technology. The growth of AFAs (all-flash arrays) in the data center is still embryonic, yet the industry’s expectation is that, in the long term, flash will ultimately eliminate spinning hard disk drives for most applications except warm data repositories (nearline storage). This means only 7200 RPM HDDs survive, whereas the performance-oriented market for 10K and 15K RPM HDD products will succumb to flash storage. This essentially removes HDDs from the performance tier in the storage device hierarchy.

Along with well-publicized performance and power savings, flash vendors are also increasing density very rapidly. Toshiba announced 3D NAND technologies that in 2017 will allow a single 1TB chip the size of a US penny (see image), while Seagate released a 60TB SSD in a 3.5-inch form factor and boasts of 1PB-capacity arrays using only 17 SSDs. Flash is more than raw performance; this technology is being deployed in high-capacity storage arrays supporting low-latency applications in a shared data environment. As these devices become more mainstream, they will become an integral component in traditional storage area networks.

#2 – NVMe is the Protocol for Flash

One of the industry luminaries at the Flash Memory Summit made an interesting observation. He commented that the “SSD guys got it right from the beginning.”
When SSD vendors first released flash-based products, they cleverly supported the existing HDD environment. This meant a 2.5-inch form factor SSD with 100% plug-compatible SAS or SATA interfaces. No changes required, a perfect replacement for 2.5-inch HDDs where essentially the only “user change” was faster access to your data and improved application response times. Recognizing that most data centers are among the most risk-averse places on the planet, this evolutionary approach was the ideal way to launch SSD products.

But today, the fears and risks of emerging flash technologies have all but disappeared, and SSDs have established a solid market foundation of their own. Unfortunately, the legacy SAS and SATA interfaces relied on the aging SCSI protocol stack, which was well suited to hard disk drives with many heads and spinning platters but orders of magnitude too slow for today’s ultra-fast solid-state memories (see graph). A new protocol, optimized for flash, would bring forward the full capabilities of this technology. This protocol is called NVMe (Non-Volatile Memory Express). NVMe replaces the SCSI protocol and can reduce protocol latency by orders of magnitude. Today, NVMe is fully adopted by the flash industry and will dominate flash over the coming years. As SSDs overtake the storage market, so will NVMe as the protocol of choice for flash. This progression may take years, but it will happen. The benefits of NVMe are simply too compelling.

#3 – The Storage Network is the New Bottleneck

At this year’s Summit, almost every presentation that amplified the latest performance benefits also sent a message of “insufficient storage network bandwidth” to keep up with these developments. With wide-scale technology advancements in server-side compute and solid-state storage, the infrastructure bottleneck has now shifted to legacy networks. No longer can slower Fibre Channel or Ethernet transports keep up with the new performance within the data center.
Presentations at the Summit explained how a seemingly small number of NVMe-based SSDs can saturate many legacy networks. This graphic shows how just four NVMe drives saturate a 100GbE link.

To further this point, in a recent Gartner report on The Future of Storage Protocols (G00307902), the research firm said that “storage performance bottlenecks are moving out of arrays and into the storage networks” and that “future protocols (such as 40GbE used for iSCSI), file-based protocols (such as NFS and SMB) and current block protocols (such as 16Gbps Fibre Channel) will be too slow for the next generation of solid-state arrays and hybrid arrays.” Its summary recommendation is that “storage networking investments are becoming a critical top priority” and that “IT leaders, therefore, must revisit and review their budget plans for storage networking infrastructure and ensure their ability to meet increasingly performance-sensitive service levels.” Read the full report.
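The “four drives saturate 100GbE” claim is straightforward bandwidth arithmetic. The per-drive throughput below is an assumed round figure for a fast NVMe SSD, not a number from the Gartner report:

```python
# Rough arithmetic behind "four NVMe drives saturate a 100GbE link".
link_gbit_s = 100               # 100GbE link bandwidth, gigabits per second
link_gbyte_s = link_gbit_s / 8  # ~12.5 GB/s of raw link bandwidth
drive_gbyte_s = 3.2             # assumed sequential read throughput per NVMe SSD, GB/s

drives_to_saturate = link_gbyte_s / drive_gbyte_s
print(f"~{drives_to_saturate:.1f} drives saturate the link")
```

At roughly 3.2 GB/s per drive, about four drives fill the pipe; every drive beyond that waits on the network rather than the media, which is the bottleneck shift the Summit presentations described.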
Servers and storage become inextricably linked
I thought I was going to learn all about servers, but I got a lesson in storage instead. I attended the recent Open Server Summit conference down in Silicon Valley, figuring that it was a great opportunity to become a little smarter about server architectures and designs. There was no shortage of new stuff for me to try to cram into my personal nonvolatile memory (read: brain), but I was surprised that so many of the new developments in server tech were related to storage. In fact, if I closed my eyes and imagined that I was at the Open Storage Summit instead, the presentations I heard on servers and storage would’ve made just as much sense.

At first, I thought it was a little odd that so much of the talk at a server conference was about storage, but the reasons sunk in pretty quickly: convergence and solid-state. Because convergence is so tightly linked to the abstraction of controlling software from physical devices such as servers and storage, it’s ironic that that decoupling actually puts greater focus on the hardware in a number of ways. Pretty much all of the elements that make convergence work — automation, agility, low latency and so on — require some pretty sophisticated hardware underneath that convergent upper layer.

Commodity hardware has value

In the software-defined-everything world, the hardware — regardless of whether it’s storage, server or networking gear — is often referred to as “commodity” stuff. Webster’s online dictionary defines commodity as: 1) “something that is bought and sold” and 2) “something or someone that is useful or valued.” Everything gets bought and sold, so the first part of the definition doesn’t shed any new light on convergence technologies, but the second one is spot on.
The way “commodity” gets tossed around in convergence conversations, you might think the label meant just the opposite — something undistinguished and pretty unimportant. I understand that some champions of converged architectures feel a need to emphasize — maybe over-emphasize — the importance of the new, more powerful software layer. So, maybe, by trivializing the hardware, they think the software will stand out even more. Personally, I think that’s a misguided and potentially misleading approach. Not relying on proprietary hardware doesn’t mean you don’t need a sophisticated, reliable, high-performance, scalable (and so on) assemblage of hardware products to bolster the software. You could have the greatest software in the world, but if it’s running on a creaky kit, it won’t seem all that great. Look, all IT hardware has always been software-defined; the latest wave is just another step in reducing the need for proprietary hardware tweaks.

Another reason there was so much talk about storage — and networking, for that matter — at a server conference is that, as we rely more on the software than on hardware hacks, it becomes easier to bring servers, storage and networks closer and closer together. And in the world of IT, close is good. It’s getting harder and harder to talk about one of these data center pillars without also bringing the other two into the conversation. Think hyper-converged infrastructure.

Flash storage crucial to convergence

Solid-state storage hasn’t just accelerated storage systems, it has also enabled a variety of architectures involving servers and storage. But there is a “pure” storage technical development that — in my not-so-humble opinion — has been one of the key motivators for the software-defined data center movement: NAND flash.
These architectures provide the flexibility and agility to leverage software to meet the requirements of a slew of different use cases. Flash, along with multicore processors, makes it possible to build converged systems that offer improved performance without the need to cobble together special hardware devices. Flash can take up the performance slack of general-use hardware, which makes using those "commodity" parts feasible. So what's really happening in the software-defined realm is that the "commodity" hardware is getting more and more sophisticated and efficient, and is therefore able to do what proprietary parts used to do. That means "proprietary" is getting shoved down the stack to the component level. Once again, the beauty of software-defined is being bolstered by hardware.

About the author: Rich Castagna is TechTarget's VP of Editorial.
What is software defined storage?
Would a 35% technology growth rate get your attention? That's the global software defined storage market forecast for 2015-2019, according to IDC. What is driving this growth, and what is the value?

TechTarget defines software defined storage (SDS) as an "evolving concept for computer data storage software to manage policy-based provisioning and management of data storage independent of the underlying hardware." SDS provides flexibility to deploy storage management when, where and how it is needed in a cost-effective way that removes complexity.

"We are currently in a storage state where there are many different types of applications and workloads driving multiple storage demands," said Adam Catbagan, Arrow Senior Systems Engineering Manager. "The result is silos of storage, complex administration and increased operational costs. The storage industry is collectively looking at how to do more with less, and software defined storage is one way to do it."

Software defined storage separates storage hardware from the software that manages the storage infrastructure. This approach reduces spending on expensive infrastructure, eliminates storage silos and helps align with an IT organization's structure, roles and responsibilities. Organizations can leverage new design techniques for their own applications and use multiple storage hardware solutions without worrying about interoperability or under- or over-utilization of servers.

Catbagan says there are two main enablers of SDS:

Consumer-driven expectations: Today, we live in a world where consumer applications can deliver results almost instantaneously. Think of how fast you can hear a new song and have it downloaded to your mobile phone or other device. This sets the stage for business expectations to be equally responsive. For example, when Finance initiates a query, they don't want to wait minutes for the answer. They want it in seconds or sooner, because that is the speed of business.
Business needs: Another driver is the push for IT to deliver value back to the business quickly. People don't want to wait days or weeks for an application to be up and running when they need the information right away. Deployment should be simple — like going to a portal, choosing the right service level, and having the infrastructure available in minutes rather than days or weeks.

Arrow partners need to start embracing SDS and really begin understanding its value. What we suggest is that you explore your end customers' technology journey and propose plans that will help them solve their challenges.

Arrow supports our partners with its line card of top suppliers who have developed dynamic SDS strategies. You now have a wide array of suppliers to choose from and can select the strategy that best fits your customers' needs. In addition, you can always rely on Arrow to help you along the way with our team of experienced engineers, a robust Value-Add Center and a top-notch Solutions Lab.

If you would like additional information about software defined storage and how to make it work for your customers, please contact your Arrow representative.

Editor's Note: This post was originally published in November 2015 and has been updated for accuracy and comprehensiveness.
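The "portal, service level, minutes not weeks" model described above is the heart of policy-based provisioning. A minimal sketch of the idea, in Python: the caller requests a service level, and a policy layer decides media type, replica count and performance floor independent of which arrays sit underneath. All names, tiers and numbers here are illustrative assumptions, not any vendor's actual SDS API.

```python
from dataclasses import dataclass

# Hypothetical service-level policies; the tier names, IOPS floors and
# replica counts are made up for illustration only.
@dataclass
class StoragePolicy:
    name: str
    min_iops: int
    replicas: int
    media: str  # "flash" or "disk"

POLICIES = {
    "gold":   StoragePolicy("gold",   min_iops=50_000, replicas=3, media="flash"),
    "silver": StoragePolicy("silver", min_iops=10_000, replicas=2, media="flash"),
    "bronze": StoragePolicy("bronze", min_iops=1_000,  replicas=1, media="disk"),
}

def provision(volume_name: str, size_gb: int, service_level: str) -> dict:
    """Build a provisioning request driven by policy, not hardware details.

    The requester only names a service level; the policy layer translates
    that into concrete placement attributes, so the underlying hardware can
    change without the consumer's request changing.
    """
    policy = POLICIES[service_level]
    return {
        "volume": volume_name,
        "size_gb": size_gb,
        "media": policy.media,
        "replicas": policy.replicas,
        "min_iops": policy.min_iops,
    }

# Example: the Finance query workload mentioned earlier gets the top tier.
request = provision("finance-db", 500, "gold")
print(request)
```

The point of the sketch is the separation: swapping a backing array changes only the policy table, never the `provision` call sites.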
Pure Storage CEO assesses NVMe, flash storage market
SAN FRANCISCO — All-flash storage pioneer Pure Storage holds its Accelerate user conference this week to shine a spotlight on the flash storage market it helped create. Although Pure isn't expected to introduce new hardware, it will showcase software upgrades designed to diversify the use cases for its flagship FlashArray and high-performance FlashBlade, which is made with proprietary nonvolatile memory express (NVMe) flash modules. Analyst firm IDC ranked Pure Storage fourth in 2016 revenue among all-flash array vendors, trailing Dell EMC, NetApp and Hewlett Packard Enterprise. SearchStorage sat down with Pure CEO Scott Dietzen ahead of Pure Accelerate 2017 for his take on where the flash storage market is headed and his company's quest to crack the $1 billion revenue mark in 2018.

Pure Storage recently launched FlashBlade arrays that integrate custom PCI Express-based NVMe flash modules. There is speculation that NVMe flash gradually could supplant traditional SATA and SAS SSDs. What's your view on the evolutionary cycle of NVMe flash?

Scott Dietzen: We believe 80% of NVMe storage is based in software. You need to design algorithms and data structures to support the massive parallelism that's afforded by NVMe. Using just one of the 64,000 parallel queues opens up a very wide aperture to pack highly dense flash in a small space.

With NVMe, you're extending the storage device across the network all the way to the compute. With a very fast network and NVMe over Fabrics, storage on shared devices moves about as close to the CPU as storage in a local chassis. It blows up the notion of what it means to have compute and storage inside the same device.

Is talk of an all-flash data center all hype, or do you see a realistic time frame for it going mainstream?

Dietzen: We have some customers that are already [headed to all-flash data centers], including some notable large companies.
Facebook is building data centers where the only mechanical devices are fans for cooling. Our view is that flash will continue to move broadly across the data center and, over time, compete on total cost of ownership with disk and even tape. The big push in the flash storage market is to predictive analytics and deep-learning-type applications. Data-driven apps need much higher performance density than you get from electromechanical media. They are built to run on solid state.

Pure has integrated third-party secondary storage and backup providers in FlashArray. Is flash-based backup no longer a crazy idea?

Dietzen: We are certainly starting to see that, although it's still in the early days. We have customers that use FlashBlade as a backup and archiving target, for several reasons. One, they want to be able to restore in place and actually run the data set where it lives to get reasonable performance. You can't restore in place with electromechanical disk unless you can accept massively degraded performance. Another use case is that data sets are changing so rapidly that disk-based backup can't keep up. The deltas are coming too quickly for anything other than flash to handle the change rates. A third driver is that flash prices are falling, approaching the price of slow disk. Once you factor in data reduction, density, and power and cooling, we believe flash will rival slow disk over the next couple of years in terms of total cost of ownership.

What are the typical workloads customers place on Pure Storage FlashArray and FlashBlade storage?

Dietzen: For the most part, structured data workloads are targeted for FlashArray. That includes various software stacks — Oracle, SAP HANA, Microsoft SQL Server and a lot of VMware workloads. Open source structured databases like MySQL and Postgres will also most often fit on FlashArray. We target FlashBlade for unstructured and semi-structured bigger data sets.
That can be things like software development, gaming, moviemaking, video automation and video capture.

You claim high repurchase rates for FlashArray. What are you projecting for FlashBlade?

Dietzen: We have enjoyed phenomenal success with FlashArray repurchase rates. The average Pure customer will triple their purchase of Pure Storage within two years of buying their initial footprint. That is largely a FlashArray metric, but we expect to see repurchase rates at least [equal] for FlashBlade. Across our top cohorts, like cloud providers and Fortune 500 companies, it's more like $12 for every $1 during the first 18 months.

What impact is the Dell-EMC merger — and, to a lesser extent, the Hewlett Packard Enterprise-Nimble Storage merger — exerting on the flash storage market? Has Pure Storage peeled off any of those vendors' all-flash customers?

Dietzen: There is no question that we benefit somewhat from organizational confusion. The combined Dell EMC has seven different all-flash storage offerings. Customers aren't always clear which one to use for which applications. They aren't sure which platforms will live on and which might be mothballed. In the case of Dell EMC, we've also been able to capitalize on a changing relationship with Cisco. Cisco and EMC had a partnership prior to the Dell acquisition, but Cisco doesn't want non-Cisco gear going into EMC storage refreshes. As an EMC competitor, I think that [shakeout] has brought us a bunch of new business.

Has going public helped Pure Storage grow in the flash storage market?

Dietzen: A key reason we went public is that most of our customers are public companies. It put us in a position to showcase our growth, with full accounting scrutiny.
We can transparently show customers that we have more than a half-billion dollars in the bank. We will sustain positive cash flow later this year. [Going public] removes a significant amount of risk for customers around our business. We're not a startup anymore.

Do you expect to hit your $1 billion revenue goal for this year?

Dietzen: We've guided Wall Street to a bracketed range from $975 million to $1.025 billion. But anything short of $1 billion will be deemed not to be successful.
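Dietzen's total-cost-of-ownership argument for flash over slow disk is easy to make concrete. The back-of-envelope sketch below divides raw media cost by a data reduction ratio and adds an ongoing power-and-cooling cost per GB; every number in it is an invented assumption for illustration, not a real 2017 market price.

```python
def effective_cost_per_gb(raw_cost_per_gb: float, data_reduction: float,
                          power_cooling_per_gb: float) -> float:
    """Effective $/GB: raw media cost divided by the data reduction ratio
    (dedupe plus compression), plus operating cost for power and cooling.

    A 5:1 reduction ratio means each raw GB of media holds 5 GB of
    logical data, so the raw cost is spread across five times the capacity.
    """
    return raw_cost_per_gb / data_reduction + power_cooling_per_gb

# Hypothetical inputs: flash reduces well (5:1) but costs more per raw GB;
# slow disk stores data mostly unreduced and burns more power per GB.
flash = effective_cost_per_gb(raw_cost_per_gb=0.25, data_reduction=5.0,
                              power_cooling_per_gb=0.01)
disk = effective_cost_per_gb(raw_cost_per_gb=0.03, data_reduction=1.0,
                             power_cooling_per_gb=0.04)
print(f"flash: ${flash:.3f}/GB  disk: ${disk:.3f}/GB")
```

Under these assumed numbers the flash figure comes out below the disk figure, which is the shape of the argument: data reduction and operating costs, not raw media price, decide the comparison.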
Speeds of storage networking technologies rise as flash use spikes
The bandwidth of data storage networking technologies continues to spike, as ever-faster solid-state drives and flash storage systems rise in popularity. Enterprises can expect higher speed options almost across the networking spectrum over the next several years, offering an edge to IT shops with especially demanding high-performance workloads. But not every organization has such needs, and in those cases the speedier data storage networking technologies may be progressing faster than the demand for them.

Despite predictions that Fibre Channel (FC) is on its way out, market research shows that the SAN data storage networking technology remains a top choice for mission-critical applications. FC vendors say flash storage is spurring the adoption of the latest 16 Gbps and 32 Gbps FC storage networking technologies.

Ethernet, an increasingly popular storage networking option, is developing at a faster pace than alternative technologies. Ethernet products supporting speeds as high as 400 Gbps — through 16 lanes of 25 Gbps, eight lanes of 50 Gbps or even four lanes of 100 Gbps — could start to roll out in 2018, pending the expected ratification of the IEEE standard in 2017.

InfiniBand, a favored option in high-performance computing environments, will also see a significant speed boost, with 200 Gbps on the way in 2017.

Serial-attached SCSI (SAS), most often used to transfer data between host computers and hard disk drives (HDDs) or solid-state drives (SSDs), is on the verge of doubling its speed to 24 Gbps by 2019.

Emerging nonvolatile memory express (NVMe) technologies can provide a performance boost and latency reduction with SSDs using PCI Express (PCIe) buses, and NVMe over Fabrics can extend the benefits across a network. In smaller configurations, protocols such as Thunderbolt and USB are on track for higher speeds as well.

The following slides show how data storage networking technologies have evolved and what developments are on the horizon in the coming years.
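The 400 Gbps Ethernet figure above is an aggregate of parallel lanes, and the three lane configurations mentioned are simply different factorizations of the same total. A quick sketch of the arithmetic:

```python
def aggregate_gbps(lanes: int, lane_speed_gbps: float) -> float:
    """Nominal link bandwidth: lane count times per-lane signaling rate."""
    return lanes * lane_speed_gbps

# The three 400 Gbps Ethernet lane configurations cited in the text.
configs = [(16, 25), (8, 50), (4, 100)]
for lanes, speed in configs:
    print(f"{lanes} lanes x {speed} Gbps = "
          f"{aggregate_gbps(lanes, speed):.0f} Gbps")
```

Fewer, faster lanes mean fewer serdes and cable pairs for the same total, which is why the lane roadmap (25 to 50 to 100 Gbps per lane) matters as much as the headline link speed.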