HP-UX Reference Volume 3 of 5

arraytab(4)


NAME

arraytab — disk array configuration table

DESCRIPTION

Arraytab is a table of supported configurations for HP SCSI disk array products. Each table entry includes a set of parameter values that specify an array configuration. The array configuration table is located in /etc/hpC2400/arraytab.

HP SCSI disk array devices are highly configurable. The physical disk mechanisms in an array can be grouped in special ways to provide various levels of data redundancy, and data read/write performance. These levels are known as RAID (for Redundant Array of Inexpensive Disks) levels.

Using a process called striping, data from each read or write operation can be distributed across multiple physical disk mechanisms to provide load balancing and/or to add data redundancy for protection against the failure of physical disk mechanisms. Striping is done in increments of the physical disk block size for all RAID levels except RAID_3 (which uses byte striping). The stripe size, also known as segment size, establishes the degree of data spread across the set of disk mechanisms.
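As an illustrative sketch (not part of the product), the mapping from a logical block to a physical disk under block striping can be modeled as follows; the function name and parameters are hypothetical, but the arithmetic follows the segment-size definition above:

```python
def locate_block(logical_block, block_size, segment_size, n_disks):
    """Map a logical block number to (disk index, block offset on that disk)
    under RAID_0-style block striping. segment_size is the stripe (segment)
    size in bytes and must be a multiple of block_size."""
    blocks_per_segment = segment_size // block_size
    segment = logical_block // blocks_per_segment
    disk = segment % n_disks                 # segments rotate across the disks
    offset_in_segment = logical_block % blocks_per_segment
    block_on_disk = (segment // n_disks) * blocks_per_segment + offset_in_segment
    return disk, block_on_disk

# With 3 disks, 1024-byte blocks, and 2048-byte segments (2 blocks per
# segment): blocks 0-1 land on disk 0, blocks 2-3 on disk 1, blocks 4-5
# on disk 2, and block 6 wraps around to disk 0.
```

A larger segment size keeps short sequential requests on one disk; a segment size equal to the block size spreads consecutive blocks across consecutive disks.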

Logical disks are created by defining address regions that include all or part of the address space of a disk group. Each logical disk is separately addressable. For example:

    Physical          Physical Drive
    Block              1    2    3
    Address
       0               X    X    X   |
                       X    X    X   |  Logical Drive 0
                       X    X    X   |
       .               Y    Y    Y   |
       .               Y    Y    Y   |  Logical Drive 1
       .               Y    Y    Y   |
                       Z    Z    Z   |
                       Z    Z    Z   |  Logical Drive 2
       N               Z    Z    Z   |

In this example, 3 physical drives have been grouped into a single RAID group (1 vertical partition). Three logical disks have then been formed by partitioning the composite logical address space (in blocks) into 3 logical regions.

A logical configuration which has more than one logical partition per physical disk group is called a sub-LUN. If the logical partition includes the entire address space of the disk group, the logical partition is called a regular LUN.

Each array configuration requires two types of specifications: physical specifications and logical specifications. A physical specification determines which disk mechanisms form the groups, identifying the type and location (in the array) of each physical disk mechanism. A logical specification determines which drive group is used by each logical partition, and specifies the size and characteristics of the logical partition.

RAID Levels

The disk array can be configured using one of the following RAID levels, depending on the I/O requirements of the system, and the degree of data availability required. Data availability (redundancy) is achieved at the expense of storage capacity, and possibly performance.

RAID_0:

This level provides no data redundancy; however, disks may be grouped in a set, and data striped across the disk set to provide load balancing.

A special case exists when a drive group of size 1 is defined (independent mode). In this case the physical disk mechanisms appear to the system as they would if there were no array controller. The array controller is transparent, providing only address selection among the disks connected to it. When configured in this manner the disks operate independently for every I/O request.

RAID_1:

This level provides disk mirroring. Two sets of disks maintain identical copies of the data. By choosing the number of disks in each set larger than one, data can be striped across the disks in each set (RAID_0) to provide better load balancing; the redundant disk sets provide availability.

RAID_3:

This level uses byte striping across a set of n drives, with an additional drive maintaining an XOR parity check byte for each byte of data. The resulting logical disk sector size is n times the sector size of one disk. Data can be recovered, if a drive fails, by using the redundancy of the parity drive while operating in a ``degraded'' mode. Since reads and writes to the individual mechanisms are accomplished in parallel, long I/O requests to the array complete in 1/nth the time, exclusive of the access time, allowing higher bandwidth I/O rates. Because the mechanisms operate in concert during the input/output operation, only one I/O may process at a time. Disks configured in RAID_3 have access time characteristics of a single disk, but are capable of transferring data at higher rates. This mode is most useful with long I/O requests.
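The parity scheme described above can be sketched in a few lines (a model only; the actual controller behavior is internal to the array): the parity byte is the XOR of the corresponding byte from each data drive, so any one missing byte is the XOR of the survivors and the parity.

```python
from functools import reduce

def parity_byte(data_bytes):
    """XOR parity over the corresponding byte from each data drive."""
    return reduce(lambda a, b: a ^ b, data_bytes)

def recover_byte(surviving_bytes, parity):
    """Reconstruct the byte from a failed drive: XOR of parity and survivors."""
    return reduce(lambda a, b: a ^ b, surviving_bytes, parity)

data = [0x5A, 0x3C, 0xF0]            # one byte from each of 3 data drives
p = parity_byte(data)                # stored on the parity drive
assert recover_byte([0x5A, 0xF0], p) == 0x3C   # drive holding 0x3C failed
```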

RAID_5:

This level uses block striping across a set of n drives. XOR parity information is maintained across the set of drives on a block basis, such that the failure of any one drive allows continued operation in a ``degraded'' mode. While degraded, data from the failed drive is reconstructed from the parity information and the data on the remaining disks. Unlike RAID_3, block sizes can be the same as for a single disk; however, write performance suffers when write requests are smaller than n blocks, because read-modify-write operations must be done on the data drive and the parity drive. Because the XOR parity data is maintained on a block basis, the drive mechanisms can operate independently, allowing multiple I/O requests to process concurrently on the set of disks. This mode is most useful for short I/O requests: it allows parallel processing of I/O requests across the set of disks, although data transfer rates are equivalent to those of a single disk.
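The read-modify-write penalty follows directly from how the parity block is updated for a small write: the controller must read the old data and old parity before it can compute and write the new parity. A sketch of the update rule (illustrative only):

```python
def update_parity(old_data, new_data, old_parity):
    """RAID_5 small-write parity update: new parity = old parity XOR
    old data XOR new data, applied bytewise. The two reads (old data,
    old parity) before the two writes are the read-modify-write penalty."""
    return bytes(p ^ od ^ nd for p, od, nd in zip(old_parity, old_data, new_data))

old_data   = bytes([0x11, 0x22])
new_data   = bytes([0xFF, 0x00])
old_parity = bytes([0xA0, 0xB0])
new_parity = update_parity(old_data, new_data, old_parity)
# Consistency check: the new parity with the new data cancels to the
# same residue as the old parity with the old data.
assert bytes(p ^ nd for p, nd in zip(new_parity, new_data)) == \
       bytes(p ^ od for p, od in zip(old_parity, old_data))
```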

CONFIGURATION TABLE

Entries in the configuration table are formed from a number of fields, each terminated by a ``:'' character. The fields are organized as shown below:

    Drive Group Name (Physical Configuration Name)
        Drive List
        . . .
        Drive List
    Logical Configuration Name
        Logical partition configuration
        Logical partition configuration
        . . .
        Logical partition configuration

Each part of the specification is terminated by a newline character. The fields are generally composed of an identifier token, followed by one or more parameter values, separated by ``#''. Comments may be placed within the file by leading the field with ``#''; all following characters up to the newline are ignored. A character may be escaped by immediately preceding it with ``\''.

Logical configurations and physical configurations may appear in any order, provided the syntax requirements are met. Physical disk configuration labels must be unique within the table. Logical configuration labels need not be unique; however, configurations with non-unique labels should have different parameter values for the array controller type field, or specify a different disk group. Logical disk configurations are searched sequentially, and the first labeled specification which matches is used.

The following list describes the arraytab parameters and their use.
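As a hypothetical illustration of these syntax rules (not an HP-supplied tool), a minimal scanner for one line of the table might split it into ``:''-terminated fields, honor ``\'' escapes, and drop ``#''-led comment fields:

```python
def split_fields(line):
    """Split one arraytab line into ':'-terminated fields. A '\\' escapes
    the next character; a field beginning with '#' is a comment running
    to end of line. Illustrative sketch only."""
    fields, cur, escaped = [], "", False
    for ch in line.rstrip("\n"):
        if escaped:
            cur += ch
            escaped = False
        elif ch == "\\":
            escaped = True                 # take the next character literally
        elif ch == ":":
            fields.append(cur.strip())
            cur = ""
        elif ch == "#" and not cur.strip():
            break                          # comment field: ignore rest of line
        else:
            cur += ch
    if cur.strip():
        fields.append(cur.strip())
    return fields
```

Note that ``#'' inside a field (as in dl#0) is kept, since only a field that begins with ``#'' is a comment.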

Name Type Description

ct str

Array Controller Type. This parameter must be specified in at least one logical partition of a logical configuration. The field consists of the concatenated vendor ID and product ID strings which are returned by the SCSI Inquiry message to the array controller, with ``_'' separating these two strings. This field defines the array product for which this configuration may be used. For example, HP_C2425D or HP_C2430D.

dl num

Physical Drive list. Each drive group consists of 1 or more lists of disk mechanisms, each specified by the array channel number, the channel ID of the disk mechanism on the channel, and a disk identifier label, respectively. A drive list may have up to 5 drives listed. The order of the drives in the list determines the order in which data is placed on the drives. This order is defined by the drive sequence label dN, where N is a number from 0 to 4. Subsequent lists may be used to create drive groups larger than 5 disks. The disk identifier label is a string formed from the vendor ID and product ID strings returned from a SCSI Inquiry message, separated by ``_''. Certain constraints are placed on drive groups and drive lists, depending upon the number of drives and the RAID level chosen. See the restrictions below.

lp num

Logical partition within the logical configuration. A logical configuration will have one or more logical partitions, with each logical partition consisting of a portion or the whole of a drive group (see LUN type). Address space is allocated to each logical partition in the order in which it is found in the table, starting from the beginning block of the disk group. A logical partition number corresponds to the SCSI logical unit (LUN) number.

lt str

Logical partition or LUN type. A logical partition may be either ``regular LUN'' (reg) or ``sub-LUN'' (sub). A sub-LUN allows configuring multiple logical disks for a group of disks, each to an arbitrary capacity. A regular LUN allows a logical disk capacity of the composite disk capacity of a group of drives, or 2 GByte, whichever is smaller. When the regular LUN option is used, the capacity parameter is ignored by the array controller. Additional logical drives may be configured to use the remaining capacity beyond 2 GByte if the regular LUN mode is chosen.

bs num

Block size of the logical partition or LUN in bytes. This value must be specified in increments of the native disk mechanism sector size. Currently supported values are 512, 1024, 2048, 4096 bytes.

cv num

Capacity of the logical partition or LUN in blocks. If this value is set to 0, the array will configure as many blocks as are available (not previously configured in another LUN).

ss num

Segment size. The size in bytes of a contiguous segment of the logical address space which will reside on a single physical disk. This allows controlling how many disks are involved with a single I/O request. If I/O requests are mostly random, single-block requests, this value should be set to the block size. If the I/O requests are typically more than a single sequential block, then this value should be set to the number of bytes which minimizes the number of disks necessary to service most I/Os. The value must be an integral multiple of the block size.

is num

The size in bytes of the first segment of the LUN. This allows this area to be set to a size different from the remainder of the disk; the area is typically used as the boot block on some systems. This must be an integral multiple of the block size. If there are no special requirements, this parameter should be set to 0.

rl str

RAID level. Acceptable strings are { RAID_0, RAID_1, RAID_3, RAID_5}. The RAID modes are described above.

gn str

Group name. This is the label used to identify the physical drive group or configuration to be used with the logical configuration.

gs num

Number of physical drives in the drive group.

rs num

Reconstruction size. This is the number of logical disk blocks which will be reconstructed in one operation when a drive data set is being repaired. A larger value will cause the reconstruction to complete more quickly (and efficiently), but will cause longer delays in processing other I/O requests.

rf num

Reconstruction frequency. This is the period of time between reconstruction operations, specified in units of 0.1 second (see Reconstruction Size). This parameter is useful in systems which do not do I/O request queuing, allowing I/Os to process smoothly while the data set is being reconstructed.
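The interaction of rs and rf can be made concrete with a back-of-the-envelope estimate (hypothetical numbers, ignoring per-operation I/O time): the rebuild delay is roughly the number of reconstruction operations times the rf period.

```python
def rebuild_time_estimate(capacity_blocks, rs_blocks, rf_tenths):
    """Rough lower bound on rebuild duration in seconds: the number of
    reconstruction operations (capacity / rs, rounded up) times the
    rf period (given in 0.1-second units)."""
    ops = -(-capacity_blocks // rs_blocks)   # ceiling division
    return ops * rf_tenths * 0.1

# A hypothetical 204994-block LUN, 64 blocks per operation, 0.5 s period:
# about 3204 operations, roughly 1602 seconds of between-operation delay.
```

Raising rs or lowering rf shortens the rebuild, at the cost of longer stalls for competing I/O requests.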

lf num

LUN configuration flags. There are 16 possible LUN configuration flags. Currently only 6 of these flags are defined. It is not recommended that these fields be altered. The flags are used to enable certain features of the array controller for the specified LUN. The flags may be set by specifying the hexadecimal value for all the flags. The flags are defined as follows:

Bit 0 off

Not used.

Bit 1 on

Automatic reconstruction enable. When enabled, this allows the array controller to automatically begin data reconstruction when the replacement of a failed disk is detected.

Bit 2 off

Not used.

Bit 3 off

Not used.

Bit 4 on

Asynchronous Event Notification polling enable.

Bit 5 on

Parity verification enable.

Bit 6 on

Write with parity verification enable.

Bit 7 off

Not used.

Bit 8 off

Mode Sense: Current. Current values are accessed during mode sense. This bit should not be set concurrently with Bit 9.

Bit 9 off

Mode Sense: Saved. Saved values are accessed during mode sense. This bit should not be set concurrently with Bit 8.

Bit 10-15 off

Not used.
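Since lf takes a single hexadecimal value covering all 16 flags, the value for a given set of bit positions can be computed as follows (an illustrative sketch, not an HP utility):

```python
def lun_flags(bits_on):
    """Assemble the lf hexadecimal flag word from the bit positions
    that should be set (bit 0 = least significant bit)."""
    value = 0
    for bit in bits_on:
        value |= 1 << bit
    return value

# Setting AEN polling (bit 4) and parity verification (bit 5):
print(f"{lun_flags([4, 5]):#06x}")   # prints 0x0030
```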

RAID LEVEL RESTRICTIONS:

The following restrictions apply to RAID configurations for the array:

RAID_0:
  • No disk list may contain more than 1 disk per channel

  • For groups larger than 5 disks, additional lists are defined and data is accessed in the order of definition.

RAID_1:

In this mode the lists define the set of disks for data, and the set of disks which form the mirrored pair.

  • Two lists must be specified.

  • The two lists must be of equal length.

  • No list may contain more than 1 disk per channel

  • Corresponding entries in the two lists (these form a mirrored disk pair) cannot be on the same channel.

RAID_3:
  • There must be an odd number of disks in the disk list.

  • Disks in the disk list must be on separate channels.

  • The first disk of the set must be on channel 1, followed in order by the other channels. Thus a 3 disk set will use channels 1 through 3.

  • The disk on the last channel is the parity disk. (Channel 3 for 3 disk configuration, channel 5 for 5 disk configuration.)

  • Maximum configuration is 1 list of 5 disks.

RAID_5:
  • The disk list cannot contain more than 1 disk per channel.

  • Maximum configuration is 1 list of 5 disks.

EXAMPLE:

PGroup1: dl#0: d0#1#0#HP_02425: d1#2#0#HP_02425: d2#3#0#HP_02425:
LConfig: lp#0: gs#3: gn#PGroup1: rl#RAID_3: is#0: ss#8192:\
cv#204994: ct#HP_C2425D:
lp#1: gs#3: gn#PGroup1: rl#RAID_3: is#0: ss#8192:\
cv#8192: ct#HP_C2425D:

FILE SYSTEM CONSIDERATIONS:

The performance of the disk array will depend heavily upon the RAID level used, and the application. In addition, the disk array configuration parameters should be chosen with consideration of the parameters used for the file system in use on the array.

WARNING:

The configurations found in /etc/hpC2400/arraytab have been chosen and certified by HP for proper operation on HP systems. Use of configurations other than these has NOT been certified for proper operation, and cannot be warranted.

For configurations using logical partitions exceeding 2 GB, it is necessary that the 2 GB governor flag be turned off in the array controller. See see(1M).

DEPENDENCIES:

Series 700:

LUN addresses 6 and 7 are reserved for use with array management utilities, and should not be configured.

Series 800:

LUN addresses 6 and 7 are reserved for use with array management utilities, and should not be configured.

Only RAID levels 0 (Independent), 3, and 5 are supported.

RAID 0 configurations must span only a single disk (Independent mode) and result in separate addressable logical partitions, one for each physical disk.

RAID 3 and RAID 5 configurations must result in a single logical partition, which spans all disks on the array.

AUTHOR:

arraytab was developed by HP.

FILES

/etc/hpC2400/arraytab

SEE ALSO

newarray(1M), mkfs(1M), buildfs(1M), cfl(1M), fs(4), see(1M).

© Hewlett-Packard Development Company, L.P.