5 storage features every organization should expect

Sponsored by Dell and Intel®.

Today, the ability to handle critical workloads like online transaction processing, data warehousing and virtual desktops is a base requirement for nearly every business, but too often the storage technologies that optimize these applications are limited to the largest installations. It’s time for organizations of all sizes to expect more from their storage array. Here are five things you should have on your storage requirements list.

1. Flash optimization

Recent innovations have put flash performance within reach for most organizations. It’s now realistic to put 100% of your hot data on flash, not just certain volumes. Look for storage arrays that mix both write- and read-optimized SSDs with flash-optimized tiering to deliver flash performance at costs comparable to traditional disk.

2. Write layer performance advantage

Many storage arrays prioritize read data, but you should search for storage that also directs all writes to the fastest drives. That way, data writes are never bogged down on slow storage, even if they belong to an infrequently accessed volume. As a result, your applications will run faster and your users will stay happier.

3. Auto-tiering efficiency

Hands-free intelligence, informed by real-time usage metadata, should help your storage find the sweet spot for both performance and cost savings. Seek a storage array that automatically moves data to more cost-effective drives as it becomes less active. You’ll buy fewer drives overall, and purchase a less expensive mix of drives.
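
To make the idea concrete, here is a minimal sketch, in Python, of the placement policy described in points 2 and 3: every write lands on the fastest tier, and a background pass demotes data once it goes idle. The tier layout, thresholds and class structure are illustrative assumptions, not any vendor’s implementation.

    import time

    # Illustrative three-tier layout; the tier names, order and idle thresholds
    # are assumptions for this sketch, not any vendor's actual policy.
    TIERS = ["write-optimized SSD", "read-optimized SSD", "HDD"]
    DEMOTE_AFTER_SECONDS = [60 * 60, 24 * 60 * 60]  # idle time allowed in tiers 0 and 1

    class TieredBlock:
        def __init__(self):
            self.tier = 0                   # every write lands on the fastest tier
            self.last_access = time.time()

        def write(self):
            self.tier = 0                   # rewritten data returns to the fast tier
            self.last_access = time.time()

        def read(self):
            self.last_access = time.time()

    def demote_idle_blocks(blocks):
        """Background pass: move each block down one tier once it goes idle."""
        now = time.time()
        for b in blocks:
            if b.tier < len(TIERS) - 1 and now - b.last_access > DEMOTE_AFTER_SECONDS[b.tier]:
                b.tier += 1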

4. Automated RAID provisioning

Some enterprise arrays offer thin capacity provisioning, but shortlist the few that extend the concept to RAID provisioning. By eliminating the need to dedicate or pre-allocate RAID disk groups, you’ll spend less time managing your arrays, and you won’t be locked into underutilized disk groups.
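
Point 4 builds on thin provisioning, which is worth illustrating. The toy Python volume below presents its full capacity to the host but backs extents with physical space only on first write; extending the same allocate-on-demand idea to RAID is what removes pre-allocated disk groups. The extent size and class design are assumptions for this sketch.

    class ThinVolume:
        EXTENT = 1 << 20  # 1 MiB extents, an assumption for this sketch

        def __init__(self, virtual_size: int):
            self.virtual_size = virtual_size          # capacity promised to the host
            self.extents: dict[int, bytearray] = {}   # physical space, allocated lazily

        def write(self, offset: int, data: bytes) -> None:
            # Allocate the backing extent on first touch only. (Writes are assumed
            # not to cross extent boundaries in this toy.)
            extent = self.extents.setdefault(offset // self.EXTENT, bytearray(self.EXTENT))
            start = offset % self.EXTENT
            extent[start:start + len(data)] = data

        def physical_bytes(self) -> int:
            return len(self.extents) * self.EXTENT

    vol = ThinVolume(virtual_size=100 << 30)  # presents 100 GiB to the host
    vol.write(0, b"hello")
    print(vol.physical_bytes())               # 1048576: only one extent is actually backed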

5. Full-scale enterprise features

Enterprise-class workloads demand enterprise-class storage features like unified file and block storage, snapshots, synchronous and asynchronous replication, hard disk optimization, third-party integration and more. Look for an array that includes these features right from the start and scales them alongside your business.




Scale-out and scale-up flash storage: optimizing business-critical workloads

The proliferation of mobile applications, social networking, business applications and analytics — combined with exponential data growth — is driving the need for faster processing with high I/O performance and reduced response times. Whatever its other benefits, a storage solution that applies a common disk medium across multiple, diverse applications cannot deliver appropriate response times for every workload, often sacrificing performance for economy.

While flash storage is the leading technology for turbocharging I/O-intensive applications by delivering low latencies and ultra-fast response times, it’s less cost-efficient for cold data.

Many organizations struggle to implement business-critical workloads successfully due to complexities in workload performance and scalability as projects go from pilot to production. The storage that supports the workloads is a critical factor in determining the success of these projects. For this reason, a growing number of organizations have adopted flexible scale-out or scale-up storage solutions with advanced software features that offer automated tiering and management simplicity along with high availability and reliability. Scale-out storage solutions typically add storage disks, controllers and network access ports on a peer-to-peer basis, increasing data pools in incremental units as demand increases. Scale-up storage solutions are typically frame-based, with additional disk shelves added to the storage pool behind the controllers. Both of these environments can benefit from adding flash or hybrid storage, provided the storage architecture permits new disks to be integrated into an automated tiering solution.

Automated tiering is a best-practice approach that directs application workloads to the storage media — solid-state drives (SSDs) and various spinning disks (HDDs) — with the most suitable performance and cost characteristics. Business-critical workloads, such as online transaction processing (OLTP), data warehousing and virtual desktop infrastructure (VDI), reap the benefits of flash storage, delivering impressive results in operations such as online queries, batch processing, retail transactions, business analytics and peak-time VDI logins. Other workloads may be better supported on high-capacity, cost-efficient storage, yet still be easily managed within the same storage pool given a flexible, scalable architecture. With limited budgets and resources, solutions have to be seamlessly scalable and easily managed while offering high availability and data protection features to meet constantly changing business demands.

Flash storage optimization of business-critical workloads

With increasing data growth and pressure on data centers to efficiently handle diverse types of application workloads, a traditional “one size fits all” strategy for storage design no longer works. Understanding how flash technologies can meet your workload requirements can offer insight for designing a scale-out or scale-up, workload-driven storage architecture.

Flash SSD storage offers a compact, high-performance option. Because it contains no moving parts, flash storage is not subject to the mechanical limitations of HDDs, meaning that it can deliver outstanding random I/O performance and ultra-low latency. Flash-based arrays also enable many more I/Os per second (IOPS) to be packed into a smaller footprint than comparable HDD-based systems while using less power. Flash is also the single easiest way to provide dramatic performance gains for applications that need new levels of real-time responsiveness.

With the appearance of innovative flash solutions designed to deliver the speed of flash at the capacity and price of rotating disk, you can select flexible storage options that accommodate diverse workloads while keeping budget and performance requirements in mind. All-flash and hybrid-flash configurations with automated tiering offer a choice of SSD and HDD types, addressing diverse enterprise workloads and price points.

Business-critical workload requirements

To support business-critical workloads, your storage solution must disrupt traditional storage boundaries and deliver advances in intelligent tiering and the unique use of SSDs in all-flash and hybrid solutions.

Organizations rely heavily on business-critical, IOPS-hungry workloads to run key operations and stay profitable, while budgets and resources call for a cost-efficient, self-managing storage solution. Your storage solution can meet these I/O and performance demands by:

  • Efficiently spanning both hot and cold data, optimizing for performance and value based on specific workload requirements
  • Monitoring I/O workloads to match the right media with the right workload
  • Automatically and continually migrating data at a sub-volume level to arrays with the most appropriate performance profile

Certain applications, such as OLTP with Oracle or SQL databases and big data analytics, require high performance with low latency and are ideal for all-flash SSD arrays. However, these applications frequently contain data that is cold, seldom if ever accessed. With automated tiering, the database volume can span both all-flash and spinning-disk systems in a common pool by placing hot data on SSDs and cold data on HDDs. This is an enterprise-class feature that customers need to run an efficient and resilient data center. Adding an all-flash array to a scalable architecture with advanced software features like automated tiering can help you manage your critical workloads in the same storage pool as your less accessed data, while addressing performance and capacity needs as they arise.

Hybrid array systems that contain both SSDs and HDDs in the same array can add tremendous value to focused multitiered workloads, such as VDI, that contain both static and highly dynamic components. The advanced automated tiering features of hybrid arrays enable them to categorize workloads as hot (high I/O), warm (medium I/O) or cool (low I/O), and place them accordingly on either SSD or conventional hard-drive tiers. Hardware and software design innovations allow hybrid storage arrays to optimize workloads even on volumes containing a mix of workload types and storage needs within each array.
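
As a toy illustration of that hot/warm/cool bucketing, consider the following sketch; the IOPS thresholds and volume names are invented, since real hybrid arrays derive placement from continuously collected usage metadata rather than fixed constants.

    def classify(iops: float) -> str:
        # Hypothetical temperature thresholds, invented for this sketch.
        if iops >= 1000:
            return "hot"    # SSD tier
        if iops >= 100:
            return "warm"   # SSD or fast HDD, depending on capacity headroom
        return "cool"       # high-capacity HDD tier

    volumes = {"vdi-user-writes": 2500, "vdi-gold-image": 500, "archive-share": 3}
    print({name: classify(iops) for name, iops in volumes.items()})
    # {'vdi-user-writes': 'hot', 'vdi-gold-image': 'warm', 'archive-share': 'cool'}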

To effectively deploy business-critical workloads, the storage infrastructure must scale simply and seamlessly. Traditional storage technologies are complex and costly to scale, and often introduce business disruption in the process. This can seriously affect the productivity of an entire organization.

As an environment grows, the storage infrastructure must scale seamlessly to support this growth. Typical scale-up storage solutions use frame-based architectures. Some of these frame-based solutions can scale up by adding flash disks to the overall storage pool, as long as the software can accept the new flash storage into a common pool and can benefit from an automated tiering architecture to spread volumes across both SSD and HDD resources.

In a scale-out storage architecture, you can run a mix of applications — each with its own particular set of performance requirements and environmental considerations — in the course of business operations. To support this application mix, you can configure a heterogeneous storage pool with multiple arrays.

To expand capacity or performance, you can simply add an array to the SAN, and the solution automatically redistributes workloads across arrays to best suit the application mix. The new array adds processor and throughput resources, in addition to disk capacity and spindles, all of which can improve overall storage performance. By scaling out storage when required, you can reduce life-cycle costs by enabling expansion when necessary with affordable storage arrays.
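
The sketch below shows the redistribution principle in miniature: when an empty array joins the pool, volumes migrate off the busiest member until loads even out. The greedy heuristic and load figures are assumptions; a production SAN uses far more sophisticated placement logic that also weighs the new array’s controller and port resources.

    def rebalance(arrays: dict[str, list[tuple[str, int]]]) -> None:
        """arrays maps an array name to its (volume, load) pairs; mutated in place."""
        def load(name):
            return sum(l for _, l in arrays[name])
        while True:
            busiest = max(arrays, key=load)
            idlest = min(arrays, key=load)
            gap = load(busiest) - load(idlest)
            # Move the largest volume that narrows the gap without overshooting.
            candidates = [v for v in arrays[busiest] if v[1] * 2 < gap]
            if not candidates:
                break
            volume = max(candidates, key=lambda v: v[1])
            arrays[busiest].remove(volume)
            arrays[idlest].append(volume)

    pool = {"array1": [("oltp", 60), ("mail", 30), ("files", 10)], "array2": []}
    rebalance(pool)
    print(pool)  # "mail" and "files" have migrated to the newly added array2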

A successful storage solution also provides a management infrastructure that remains available and reliable as the workload environment grows and changes. In large environments, managing storage assets and coordinating them with hosts can be challenging. A storage solution that integrates management and data protection processes is the key to keeping workload management simple.

Today’s data centers require simplified management of storage resources, comprehensive data protection of virtual resources, load balancing and performance optimization of storage resources, and seamless integration with critical applications and operating systems to optimize critical workloads. To help ensure high availability and reliability, you can implement a scalable storage solution that offers an intelligent, automated management framework and a comprehensive set of enterprise data services with a fault-tolerant hardware architecture that can support many major operating systems and applications. Choose a system that also offers historical monitoring across virtualized SAN groups, consolidates performance and event statistics on both a near-real-time and a trended basis, and provides the ability to export the data collected, including capacity, IOPS and networking statistics, to help ensure optimal performance.

From your most critical workloads to your cold data, a scale-out or scale-up storage solution — one that can automatically tier volumes or data to the most appropriate arrays or media (flash SSDs or HDDs) and offers advanced software features to help ensure availability and reliability — can help you efficiently manage your data center.

Uncovering the hidden costs of doing nothing: Backup and recovery

Sponsored by Dell and Intel®.

Calculating the potential return on a new server or application is relatively straightforward: If the purchase enables an organization to do more in less time, profits are likely to go up. That makes a convincing case. But in the absence of obvious benefits or immediate gains, the computation for backup and recovery is not as easy.

Quantifying the return on an investment in backup and recovery involves estimating the financial impact of worst-case future scenarios: effectively, thinking the unthinkable and figuring out the potential cost of doing nothing about it. As a result, a backup and recovery solution often is justified only as part of a project — such as an application migration or a data center upgrade — with clearly definable financial benefits. However, over time, that approach can lead to a mix of legacy platforms and application-specific solutions, each with its own dependencies, backup windows, recovery times, support requirements and so on.

To help organizations evaluate the cost-benefits of a modern, integrated backup and recovery solution, IT research organization Computing Research surveyed 120 data center professionals in the United Kingdom.1 The online survey was designed to find out how those charged with keeping enterprise IT systems operational quantify risk and, more importantly, how they use that information to justify spending on backup and recovery.

No one expects a hedgehog in the air conditioning

Ask for a list of potential disasters that might affect enterprise IT systems, and flood and fire spring to mind. Disasters, however, don’t just follow in the wake of cataclysmic events such as earthquakes and typhoons. Software bugs, malware and hackers can be just as damaging.

But quite surprising are the seemingly inconsequential or unforeseeable actions that nonetheless can have a big impact on data loss or downtime. Take, for example, a few anecdotes related by survey respondents:

  • A contractor accidentally leaned on the kill switch, turning off an entire data center in moments.
  • An operator became sick over a rack of switches as a result of finding a decomposing and very smelly hedgehog in the air conditioning.
  • One company prudently invested in a dedicated business resiliency center, but when a nearby electricity substation exploded, the center turned out to be inside the area cordoned off by the police, preventing its use.
  • When trying to recover a crucial storage array, another company found that although tapes had been loaded nightly into the backup library as scheduled, the backup routine itself had been discontinued.
Nearly all respondents had an interesting anecdote to tell, indicating that they were mindful of potential threats to IT systems. Most respondents also were aware of the inadequacies of legacy backup and recovery systems to protect against such threats, leading 43 percent to adopt integrated systems that are designed to protect their platforms and applications across both physical and virtualized infrastructures.

That’s good news, but on the flip side it reveals that more than half of survey respondents are less well prepared; many report struggling with legacy products that are poorly equipped or unable to handle the latest technologies. In this respect, just under one-fifth said they relied on recovering whole servers rather than bringing individual applications back online should problems arise. This approach could be one reason why complex and slow recovery procedures rose to the top of the list when respondents who use a mix of legacy products were asked to highlight their backup and recovery issues.

Slow recovery times: Bane or boon?

When asked specifically about how quickly they thought they could recover from a disaster, respondents supplied answers that belied the general impression of their organizations competing in fast-moving, always-on, global marketplaces in which every second counts.

One might expect most enterprises to be able to recover lost data and even entire servers in seconds or at least minutes. However, that expectation is wildly optimistic for the many respondents who thought it could take days, at worst, to recover lost data or a full server (see figure).

[Figure: Respondents’ estimates of how long it would take to recover lost data or a full server]

Given the breadth and consistency of the answers, slow recovery times appear to be the bane of IT departments across organizations of all types and sizes. But consider this: Those very same lengthy recovery estimates can be used to quantify the financial impact of not investing in solutions to address the issue.

The longer it takes to recover a server or application, the more business may be lost. The cost of that lost business can, at the very least, be estimated and used when bidding for the purchase of new backup and recovery solutions or when updating an existing setup to cope with technological and business changes.
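
A back-of-the-envelope calculation along those lines might look like the following; every input figure is invented for illustration.

    # Every input figure below is an illustrative assumption.
    hourly_revenue = 25_000   # revenue flowing through the affected system, per hour
    dependent_staff = 40      # employees idled while the system is down
    loaded_hourly_wage = 45
    recovery_hours = 48       # a "days, at worst" estimate, as in the survey

    lost_revenue = hourly_revenue * recovery_hours
    idle_labor = dependent_staff * loaded_hourly_wage * min(recovery_hours, 16)  # two working days
    print(f"Estimated cost of one incident: ${lost_revenue + idle_labor:,}")
    # Estimated cost of one incident: $1,228,800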

In the pressure cooker

One reason why IT decision makers may not have used recovery times to indicate lost business costs could be the sheer volume of data, applications and host systems that must be protected these days. The survey asked respondents to rate a number of pressures they faced from changes in technology, business practices and user expectations. Data growth and the need to protect large amounts of information came in as the highest priority (see figure).

[Figure: Pressures rated by respondents; data growth and the need to protect large amounts of information ranked highest]

Given these complex challenges, IT decision makers may find it hard to justify a comprehensive approach to backup and recovery that is designed to protect all IT resources within their organization. Such a business resiliency solution may still be viewed as prohibitively expensive, even when measured against the cost of losing business for days.

A class act to help direct investments

What more can be done? When assessing the impact on business resiliency planning, many enterprises simply don’t see initiatives such as bring your own device (BYOD) or even cloud computing as major concerns. Rather, they must cope with the same issues of storage growth, compliance and security that have plagued them for decades.

So instead of trying to protect everything in the same way, why not identify the resources — such as servers, applications and data — that would cause the most operational harm should they become unavailable for any length of time? The simple truth is that not all IT systems or data are of equal importance to the enterprise. Mission-critical systems must be recovered in seconds to keep the business running, whereas servers in occasional use could take much longer to recover without affecting operations.

By classifying resources by importance, IT managers can concentrate backup and recovery investments to make sure priority resources are protected. In addition, they can build adequate redundancy into host platforms, speed up network connections and increase wide area network (WAN) bandwidth. These actions help keep business-critical systems running and improve the ability to replicate backed-up data as added insurance against disaster. And they can choose backup and recovery products expressly designed to handle the mix of platforms and applications deployed — helping ensure that, should the worst happen, business-critical resources can be recovered quickly with minimal impact to the bottom line.
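
One hypothetical way to capture such a classification is a simple mapping from resource class to recovery objectives and protection method, as in the sketch below; the tiers, objectives and system names are illustrative only.

    # Illustrative recovery tiers; the objectives, methods and system names are
    # invented, not prescriptive.
    RECOVERY_TIERS = {
        "mission-critical":   {"rto": "seconds", "rpo": "near zero", "method": "synchronous replication"},
        "business-important": {"rto": "minutes", "rpo": "15 min",    "method": "snapshots + async replication"},
        "occasional-use":     {"rto": "hours",   "rpo": "24 h",      "method": "nightly backup"},
    }

    SYSTEMS = {
        "order-processing-db": "mission-critical",
        "email":               "business-important",
        "test-lab-servers":    "occasional-use",
    }

    for system, tier in SYSTEMS.items():
        p = RECOVERY_TIERS[tier]
        print(f"{system}: recover within {p['rto']}, lose at most {p['rpo']} of data, via {p['method']}")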

Although classification of data by importance to the business seems to be a compelling best practice, only 68 percent of survey respondents indicated doing so. Those who did, however, ranked quicker recovery as the main advantage of this considered approach to business resiliency planning, ahead of reduced storage overhead when taking backups and reduced backup costs.

Match backup and recovery to business priorities

Because recovery requirements should be tied to the value of enterprise systems and data, Dell offers a comprehensive portfolio of field-tested solutions that help organizations back up and recover resources based on specific protection requirements. Organizations of all sizes can build optimized data protection environments quickly and easily, without changing their overall data protection strategy, by deploying products such as Dell AppAssure backup, replication and recovery software; Dell NetVault Backup cross-platform backup and recovery software; and the Dell DR and Dell DL families of purpose-built backup and recovery appliances. This product portfolio allows IT decision makers to match the software and hardware systems within a business resiliency strategy to application- or system-specific recovery time objectives and recovery point objectives. The result? Assurance that the portfolio enables critical data to be restored in seconds and IT infrastructure in minutes.

Adopting an integrated approach enables IT decision makers to align backup and recovery solutions with strategic business priorities — while also helping to cut costs, reduce complexity and minimize data loss. “When it comes to protecting your business, data recovery is critical to resiliency,” says Srinidhi Varadarajan of Dell. “Rather than treating each system/application backup process separately, think about their relative impact on the business when evaluating what set of tools to use and what to change. Dell’s data protection portfolio has been engineered from the ground up to give you a range of options that accommodate your schedules, budgets and current practices.”

Expecting the unexpected

Equally important to the classification of resources is the need to understand the requirements and limitations of the technologies on which enterprises increasingly rely. For example, virtualization brings many benefits when it comes to hardware consolidation, speed of deployment and operational flexibility. At the same time, it can heighten system vulnerability by enabling multiple virtual machines to be hosted on a single platform.

Backup and recovery solutions should address the needs of virtual as well as physical resources. They also must cope with legacy systems as well as new platforms, be easy to manage and meet the recovery expectations of the organization, its employees and customers. (For more information, see the sidebar, “Match backup and recovery to business priorities.”)

With the right planning for suitable backup and recovery solutions to protect their most important systems, organizations can effectively cope with IT disasters — be they cataclysmic natural events or apparently inconsequential actions that would otherwise take a big toll.




Introducing DPACK

To help guide our customers through mission-critical IT decisions, Dell’s team of solution experts has developed an innovative new tool: the Dell Performance Analysis Collection Kit (DPACK), powered by Intel®. Through a simple-to-run program, DPACK produces output that gives you the confidence and knowledge you need to make the right decisions for your business. This complimentary tool will help you make the most impactful IT solution recommendations, whether you’re reducing wasteful spending or analyzing opportunities for virtualization or data center expansion. With DPACK, you can get a true sense of your current IT environment and identify areas for further optimization.


Understand your environment

DPACK runs remotely and agentlessly to gather core requirements such as disk I/O, throughput, capacity and memory utilization, and produces an in-depth view of server workload and capacity requirements. DPACK can generate two kinds of reports: an aggregation of resource needs across disparate servers, with a simulation of those workloads if consolidated onto shared resources; and an in-depth individual server report that IT administrators can use to search for potential bottlenecks or hotspots that need to be engineered out of a new design.
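
DPACK itself is proprietary, but the sketch below samples the same classes of metrics (disk I/O, throughput, memory and capacity) on a single host with the open-source psutil library, purely to illustrate the kind of data such a collector gathers.

    import time

    import psutil  # third-party library: pip install psutil

    INTERVAL = 5  # seconds between the two I/O samples

    before = psutil.disk_io_counters()
    time.sleep(INTERVAL)
    after = psutil.disk_io_counters()

    iops = (after.read_count - before.read_count
            + after.write_count - before.write_count) / INTERVAL
    throughput_mb = (after.read_bytes - before.read_bytes
                     + after.write_bytes - before.write_bytes) / INTERVAL / 1e6
    mem = psutil.virtual_memory()
    disk = psutil.disk_usage("/")

    print(f"disk I/O:  {iops:.0f} IOPS, {throughput_mb:.1f} MB/s")
    print(f"memory:    {mem.percent}% of {mem.total / 1e9:.1f} GB in use")
    print(f"capacity:  {disk.used / 1e9:.1f} of {disk.total / 1e9:.1f} GB used")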

With this tangible data, a solutions architect can help you look for ways to optimize your data center and plan for upcoming critical projects. During this conversation, you will also receive a detailed report of the accumulated metrics and what they mean for your business. As your total solution partner, Dell will work with you every step of the way to make the most impactful business decisions from the results.

Dell provides virtual storage and back-office offerings to enhance your current IT environment to best meet your ever-evolving business needs. Through the use of DPACK and the support of your account team, you will gain quick and impactful insight into your environment that will help you make the right decisions for your business.

Get started

DPACK is intended to be a lightweight assessment that is generally concluded in 24 hours or less, start to finish. Getting access to DPACK is easy: begin by downloading the OS-specific collectors you need for your environment at Dell.com/downloadDPACK. Once the download is unzipped, you can simply double-click to begin adding remote servers. DPACK does not install; it runs entirely in memory during the collection process. We recommend running DPACK from the most modern OS in your environment, for example, Windows Server 2008 R2.

Sample Output Summary

[Figure: Sample DPACK output summary report]