Storage arrays that are feature-rich RAID arrays with highly optimized I/O processing capabilities are generally referred to as intelligent storage arrays or intelligent storage systems. These intelligent storage systems can meet the requirements of today's I/O-intensive, next-generation applications, which demand high levels of performance, availability, security, and scalability. To meet these requirements, many vendors of intelligent storage systems now support SSDs, encryption, compression, deduplication, and scale-out architectures.
The use of SSDs and scale-out architecture enables these systems to service a massive number of IOPS. These storage systems also support connectivity to heterogeneous compute systems. Further, intelligent storage systems provide APIs that enable integration with Software-Defined Data Center (SDDC) and cloud environments.
Intelligent Storage Systems Overview
These storage systems run an operating system that intelligently and optimally handles the management, provisioning, and utilization of storage resources. They are configured with a large amount of memory called cache and multiple I/O paths, and they use sophisticated algorithms to meet the requirements of performance-sensitive applications. An intelligent storage system has two key components: controller and storage.
A controller is a compute system that runs a purpose-built operating system responsible for several key functions of the storage system, such as serving I/Os from the application servers, storage management, RAID protection, local and remote replication, storage provisioning, automated tiering, data compression, data encryption, and intelligent cache management.
An intelligent storage system typically has more than one controller for redundancy. Each controller consists of one or more processors and a certain amount of cache memory to process a large number of I/O requests. The controllers are connected to the servers either directly or via a storage network. They receive read and write I/O requests from the servers and service them from/to the underlying storage.
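To make the role of controller cache concrete, here is a minimal sketch in Python of a controller read path backed by an LRU cache. This is purely illustrative; the `ControllerCache` class and its methods are hypothetical, and real controllers use far more sophisticated cache-management algorithms than plain LRU.

```python
from collections import OrderedDict

class ControllerCache:
    """Illustrative LRU read cache for a storage controller (hypothetical)."""

    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.cache = OrderedDict()  # block address -> cached data
        self.hits = 0
        self.misses = 0

    def read(self, block_addr, backend_read):
        """Serve a read from cache if possible, else from backend storage."""
        if block_addr in self.cache:
            self.cache.move_to_end(block_addr)  # mark as most recently used
            self.hits += 1
            return self.cache[block_addr]
        self.misses += 1
        data = backend_read(block_addr)         # fetch from disk/SSD
        self.cache[block_addr] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)      # evict least recently used
        return data
```

A repeated read of a recently accessed block is then served from memory rather than from the drives, which is the basic mechanism behind the performance benefit of controller cache.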
Based on the type of data access, a storage system can be classified as a block-based, file-based, object-based, or unified storage system. A unified storage system provides block-based, file-based, and object-based data access in a single system. These are described in the next posts.
Architecture of Intelligent Storage Systems
An intelligent storage system may be built either based on scale-up or scale-out architecture.
A scale-up storage architecture provides the capability to scale the capacity and performance of a single storage system based on requirements. Scaling up involves upgrading or adding controllers and storage. These systems have a fixed capacity ceiling that limits their scalability, and their performance may start to degrade as they approach that limit.
A scale-out storage architecture provides the capability to grow capacity simply by adding nodes to a cluster. Nodes can be added quickly, without downtime, when more performance and capacity are needed. This provides the flexibility to use many nodes of moderate performance and availability characteristics to produce a total system with better aggregate performance and availability. A scale-out architecture pools the resources in the cluster and distributes the workload across all the nodes, resulting in near-linear performance improvements as more nodes are added to the cluster.
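One way a scale-out system can distribute workload across its nodes is by hashing each object's identifier to pick an owning node. The sketch below shows this idea in its simplest form; the function name and the modulo scheme are illustrative assumptions, and production systems typically use consistent hashing or placement maps instead, to minimize data movement when nodes join or leave.

```python
import hashlib

def node_for(key, nodes):
    """Map a data object to a cluster node by hashing its key.

    Illustrative only: real scale-out storage uses consistent hashing
    or placement tables rather than a bare modulo, so that adding a
    node does not reshuffle most of the existing data.
    """
    digest = hashlib.sha256(key.encode()).hexdigest()
    return nodes[int(digest, 16) % len(nodes)]
```

Because every node applies the same deterministic function, any node can compute where a given object lives without consulting a central directory.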
Features of an Intelligent Storage System
Storage Tiering
Storage tiering is a technique of establishing a hierarchy of different storage types (tiers). It enables storing the right data on the right tier, based on service-level requirements, at minimal cost. Each tier offers different levels of protection, performance, and cost.
This technique places data on the most appropriate tier of storage: frequently accessed data on fast media and inactive data on slow media. This can improve the performance of the storage array and reduce cost, since the array need not be filled with fast drives when most of the data is accessed relatively infrequently. Data movement happens based on defined tiering policies, which might be based on parameters such as frequency of access.
For example, high-performance solid-state drives (SSDs) or FC drives can be configured as tier 1 storage to keep frequently accessed data, and low-cost SATA drives as tier 2 storage to keep less frequently accessed data. Keeping frequently used data on SSD or FC improves application performance. Moving less frequently accessed data to SATA frees up capacity on the high-performance drives and reduces the cost of storage.
The process of moving data from one tier to another is typically automated. In automated storage tiering, the application workload is proactively monitored; active data is automatically moved to a higher-performance tier, and inactive data is moved to a higher-capacity, lower-performance tier. The data movement between tiers is performed non-disruptively.
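The policy logic described above can be sketched in a few lines. The function below is a hypothetical example, not a real array's algorithm: it assumes extents carry an access counter and uses two made-up thresholds to decide promotions and demotions between an SSD tier and a SATA tier.

```python
def plan_tier_moves(extents, hot_threshold, cold_threshold):
    """Sketch of an automated tiering policy (hypothetical parameters).

    `extents` maps extent id -> {"tier": "ssd" | "sata", "accesses": int}.
    Returns a list of (extent_id, target_tier) moves the array would
    then carry out non-disruptively in the background.
    """
    moves = []
    for eid, info in extents.items():
        if info["tier"] == "sata" and info["accesses"] >= hot_threshold:
            moves.append((eid, "ssd"))   # promote frequently accessed data
        elif info["tier"] == "ssd" and info["accesses"] <= cold_threshold:
            moves.append((eid, "sata"))  # demote inactive data
    return moves
```

A real implementation would track access statistics over a monitoring window and rate-limit the moves, but the promote/demote decision has this basic shape.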
Redundancy
Redundancy features in an intelligent storage system ensure that failed components do not interrupt the operation of the array. Even at the host level, multiple paths are usually configured between the host and the storage in multipath I/O configurations, ensuring that the loss of a path or network link between the host and the storage array does not take the system down.
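The host-side failover behavior can be illustrated with a small sketch. The `MultipathIO` class and its `send` method are hypothetical names chosen for this example; real multipath drivers (e.g., OS-native MPIO stacks) also handle path restoration, load balancing, and retry policies.

```python
class MultipathIO:
    """Illustrative host-side multipath failover (hypothetical API).

    Each I/O is issued over the first healthy path; if transmission
    fails, that path is marked down and the next path is tried.
    """

    def __init__(self, paths):
        self.paths = {p: True for p in paths}  # path name -> healthy?

    def send(self, io, transmit):
        for path, healthy in self.paths.items():
            if not healthy:
                continue
            try:
                return transmit(path, io)      # attempt I/O on this path
            except ConnectionError:
                self.paths[path] = False       # fail over to the next path
        raise RuntimeError("all paths to the storage array are down")
```

From the application's point of view the I/O simply succeeds; the path failure is absorbed by the multipath layer.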
Storage-system-based replication makes remote copies of production volumes that can play a vital role in disaster recovery (DR) and business continuity (BC) planning. Depending on the application and business requirements, remote replicas can be either zero-loss synchronous replicas or asynchronous replicas that lag the source slightly. Asynchronous replication can tolerate thousands of miles between the source and target volumes, but synchronous replication typically requires the source and target to be no more than about 100 miles apart because of round-trip latency.
Thin provisioning technologies can be used to utilise the capacity of a storage system more effectively. Logical capacity is allocated to applications on demand rather than reserved up front; however, over-subscribing the physical capacity must be monitored carefully, because the pool can eventually run out of available space.
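The allocate-on-first-write behavior of thin provisioning can be sketched as follows. The `ThinPool` class is an illustrative assumption, not a vendor API: logical volumes may be over-subscribed, and a physical block is consumed only when a logical block is first written.

```python
class ThinPool:
    """Illustrative thin-provisioning pool (hypothetical).

    Physical blocks are consumed only on first write to a logical
    block, so pool usage must be monitored to avoid exhaustion.
    """

    def __init__(self, physical_blocks):
        self.free = physical_blocks
        self.allocated = {}  # (volume, logical block) -> physical block

    def write(self, volume, lblock):
        key = (volume, lblock)
        if key not in self.allocated:        # allocate on first write only
            if self.free == 0:
                raise RuntimeError("thin pool exhausted")
            self.allocated[key] = len(self.allocated)
            self.free -= 1
        return self.allocated[key]
```

Rewriting an already-written block consumes no new capacity; only first writes do, which is why a thin pool can present far more logical capacity than it physically holds, right up until the free pool is exhausted.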