Sub-LUN Auto-Tiering is another storage capacity optimization technique, one that helps organizations tier different types of storage disks based on their speed, cost, and performance. Before looking at what Sub-LUN Auto-Tiering does, let's first understand the purpose of auto-tiering in general.
Auto-tiering is already used in most disk-based storage arrays that deploy multiple tiers of HDDs and flash storage. Unlike deduplication and compression, it does not increase the usable capacity; instead, it optimizes the use of storage resources. Auto-tiering helps ensure that data sits on the right tier of storage: frequently accessed data resides on fast media such as flash memory, and infrequently accessed data resides on slower, cheaper media such as SATA HDDs. This ensures that expensive, high-performance disks are used for storing production-critical data while slower, cheaper disks are used for storing non-production data.
The Need for Sub-LUN Auto-Tiering
Traditional auto-tiering works at the volume level, so tiering operations such as moving data up and down the tiers act on entire volumes. For example, if 1 GB of a 50 GB volume is frequently accessed, the whole 50 GB volume has to be moved to the higher tier. This unwanted promotion to a higher storage tier is a waste of resources, and it is where Sub-LUN auto-tiering comes into the picture.
Sub-LUN auto-tiering does not move the entire volume; instead, it works by slicing volumes into smaller extents, so only part of the volume is moved up the tier. Sub-LUN auto-tiering moves only the extents associated with the areas of the volume that actually need to move. In our previous example, the 50 GB volume is broken into multiple extents of 16 MB each, so only a 16 MB extent needs to be moved up a tier.
Some mid-range arrays have large extent sizes, in the range of 1 GB. Moving such 1 GB extents is still better than having to move an entire volume, but smaller extent sizes require fewer system resources to move around the tiers. For example, moving 1 GB of data up or down a tier takes far more time and resources than moving a handful of 16 MB extents. However, small extent sizes have a trade-off too: they require more system resources, such as memory, to keep track of them.
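The extent-based promotion described above can be sketched in a few lines. This is a hypothetical illustration only: the 16 MB extent size comes from the example above, while the function names, access counts, and the "hot extent" threshold are invented for the sketch; real arrays use vendor-specific internal logic.

```python
# Hypothetical sketch of sub-LUN extent promotion with 16 MB extents.
# Names, counts, and the threshold are illustrative, not a real array API.

EXTENT_SIZE_MB = 16

def split_into_extents(volume_size_mb):
    """Break a volume into fixed-size extents, returning extent indices."""
    return list(range(volume_size_mb // EXTENT_SIZE_MB))

def extents_to_promote(access_counts, threshold):
    """Return only the extents whose access count exceeds the threshold."""
    return [idx for idx, count in access_counts.items() if count > threshold]

# A 50 GB (51200 MB) volume becomes 3200 extents of 16 MB each.
extents = split_into_extents(50 * 1024)
print(len(extents))  # 3200

# If only one extent is hot, only 16 MB moves up a tier, not 50 GB.
counts = {idx: 0 for idx in extents}
counts[7] = 500  # one frequently accessed extent
print(extents_to_promote(counts, threshold=100))  # [7]
```

The point of the sketch is the last line: with sub-LUN granularity, the promotion candidate list contains one 16 MB extent rather than the entire 50 GB volume.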
Sub-LUN Auto Tiering Overview
Most tiered storage configurations use three tiers of storage:
- Tier 1 is for solid-state storage (flash memory).
- Tier 2 is for fast spinning disk, such as 10K or 15K SAS drives.
- Tier 3 is for big, slow SATA or NL-SAS drives.
However, these are general considerations and an organization should work closely with storage vendors and technology partners to determine what is right for their environment.
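The mapping from extent activity to the three tiers above can be expressed as a simple classification. The access-rate thresholds here are hypothetical placeholders, purely to show the shape of the decision; real arrays derive such cut-offs from their own monitoring statistics.

```python
# Minimal sketch of placing an extent on one of the three tiers above.
# The thresholds (100 and 10 accesses/hour) are made-up example values.

def tier_for_activity(accesses_per_hour):
    """Place an extent on a tier based on how often it is accessed."""
    if accesses_per_hour >= 100:
        return 1  # Tier 1: flash
    if accesses_per_hour >= 10:
        return 2  # Tier 2: 10K/15K SAS
    return 3      # Tier 3: SATA / NL-SAS

print(tier_for_activity(500))  # 1
print(tier_for_activity(25))   # 2
print(tier_for_activity(0))    # 3
```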
Tiering Activity Monitoring
To make tiering-related decisions, such as which extents to move up the tiers and which to move down, a storage array needs to monitor activity on the array. This monitoring usually involves keeping track of which extents on the backend are accessed the most and which are accessed the least. However, the times of day when you monitor this activity can be crucial to getting the desired tiering results. For example, many organizations choose to monitor data only during core business hours, when the array is servicing critical business applications. By doing this, you ensure that the extents that move up to the highest tiers are the extents that your key critical applications use.
But if you monitor the system outside business hours, for example when your backups are running, you may end up with a situation where the array moves the extents related to backup operations to the highest tiers. In most situations, you won't want backup-related volumes and extents occupying the expensive high-performance tiers of storage while your critical business applications use the cheaper, lower-performance tiers.
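A monitoring window like the one described above can be sketched as a simple filter on when I/O is counted toward an extent's heat. The 9 a.m. to 5 p.m. window and all names here are assumptions for illustration.

```python
# Hypothetical monitoring sketch: only I/O that falls inside the configured
# monitoring window (core business hours here) contributes to extent heat.

from datetime import time

MONITOR_START = time(9, 0)   # 9 a.m.
MONITOR_END = time(17, 0)    # 5 p.m.

def record_access(heat_map, extent_id, at):
    """Count an access toward an extent's heat only during the window."""
    if MONITOR_START <= at <= MONITOR_END:
        heat_map[extent_id] = heat_map.get(extent_id, 0) + 1

heat = {}
record_access(heat, 42, time(10, 30))  # business hours: counted
record_access(heat, 99, time(2, 0))    # backup window: ignored
print(heat)  # {42: 1}
```

Because the 2 a.m. backup access is filtered out, backup-related extents accumulate no heat and are never candidates for promotion to the top tiers.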
Scheduling the Tiering Process
Just as you can fine-tune and custom-schedule monitoring windows, you can schedule the times when data is allowed to move between tiers. As with monitoring periods, these movement windows should be customized to your business, because moving data up and down tiers consumes system resources, resources that you probably don't want consumed by tiering operations in the middle of the business day when your applications need them to service customers. A common approach is to schedule tiering operations, such as moving data up and down through the tiers, outside business hours, for example from 3 a.m. to 5 a.m., when most systems are quiet or are being backed up.
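Such a movement window amounts to a time-of-day gate in front of the migration step. This sketch assumes the 3 a.m. to 5 a.m. window from the example above; the function name is illustrative.

```python
# Sketch of a tiering-schedule check, assuming the 3 a.m.-5 a.m. quiet
# window from the example above. Names are illustrative only.

from datetime import time

MOVE_START = time(3, 0)  # 3 a.m.
MOVE_END = time(5, 0)    # 5 a.m.

def may_move_extents(now):
    """Allow data movement between tiers only inside the quiet window."""
    return MOVE_START <= now <= MOVE_END

print(may_move_extents(time(4, 0)))   # True  (quiet window)
print(may_move_extents(time(14, 0)))  # False (business hours)
```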
Defining Policies for Tiering
In addition to customizing monitoring and scheduling the movement between tiers, it is also important to configure policies for tiering. Some common tiering policies are:
- Exclusions: Exclusions allow you to exclude certain volumes from all tiering operations, or to exclude certain volumes from occupying space in certain tiers. One example is when you don't want backup data to occupy tier 1 or tier 2.
- Limits: Limits allow you to limit how much of a certain tier a particular volume or set of volumes can occupy.
- Priorities: Priorities allow you to prioritize certain volumes over others when there is contention for a particular tier, for example when production data is competing with research data to move up a tier. In this situation, a priority can be set that gives the production volumes preference over other volumes, increasing the chances that they land on the higher tier.
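The three policy types above can be combined into a small decision routine. Everything here is a toy sketch: the volume names, the dictionary fields, and the numeric values are invented to show how exclusions, limits, and priorities interact, not how any particular array implements them.

```python
# Toy policy engine combining exclusions, per-volume tier limits, and
# priorities. All field names and values are hypothetical illustrations.

def eligible_for_tier1(volume, tier1_used_gb, tier1_capacity_gb):
    """Apply exclusion and limit policies before considering promotion."""
    if volume.get("excluded_from_tier1"):           # exclusion policy
        return False
    limit = volume.get("tier1_limit_gb", float("inf"))
    if volume["tier1_usage_gb"] >= limit:           # limit policy
        return False
    return tier1_used_gb < tier1_capacity_gb        # tier has free space

def pick_winner(candidates):
    """Priority policy: highest-priority volume wins tier-1 contention."""
    return max(candidates, key=lambda v: v["priority"])

production = {"name": "prod-db", "priority": 10, "tier1_usage_gb": 5}
research = {"name": "research", "priority": 2, "tier1_usage_gb": 0}
backup = {"name": "backup", "priority": 1, "tier1_usage_gb": 0,
          "excluded_from_tier1": True}

print(eligible_for_tier1(backup, 50, 100))          # False (excluded)
print(pick_winner([production, research])["name"])  # prod-db
```

The backup volume is rejected outright by the exclusion policy, and when production and research contend for the same tier-1 space, the priority policy picks the production volume.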
Limitations of Storage Auto Tiering Technique
Array Size Concerns
Unfortunately, auto-tiering solutions don’t suit every scenario. One scenario where they may not be the best choice is smaller arrays and smaller storage systems. A major reason for that is the number of spindles (drives) each tier of storage has. For example, a small storage array with 32 drives installed might be well suited to a single tier of storage configured into a single pool, meaning that all volumes have access to all 32 drives. If you try to divide such a small system into three tiers of storage (for example, four flash drives, twelve 15K drives, and sixteen SATA drives), you may find that each tier has too few drives to make any tier performant enough, and the benefits of the flash storage may be outweighed by the backend operations required to move data up and down the tiers. Also, auto-tiering is sometimes a licensed feature, adding cost to an array. Often this additional cost isn’t beneficial on small arrays.
As a general rule, instead of utilizing auto-tiering on smaller arrays, you may be better off having a single tier of medium- to high-performance drives such as 10K SAS drives, maybe supplemented with a flash cache, where flash drives are utilized as a second level of cache rather than a persistent tier of storage.
Remote Replication Imbalance
An often-overlooked side effect of auto-tiering is the potential imbalance between replicated source and target arrays. For example, in a replicated configuration where one array is considered the primary and the other the secondary, the primary array sees all of the read and write workload. As a result of this workload, the auto-tiering algorithms of the primary array move extents up and down the tiers, ensuring an optimal layout.
However, the target array does not see the same workload. All the target array sees is replicated I/O, and only write I/Os are replicated from source to target; read I/Os are not. As a result, how a volume is spread across the tiers on the target array will often be very different from how it is spread on the source array, and volumes frequently end up on lower-performance tiers on the target. This can be a problem if you ever have to bring systems and applications up on the target array, because they may perform worse than they did on the source array.