Install Solaris Volume Manager

Dual-string configurations only: configure dual-string mediator hosts, check the status of mediator data, and, if necessary, fix bad mediator data.
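As a hedged sketch, assuming a diskset named mydiskset whose hosts are phys-schost-1 and phys-schost-2 (all names here are placeholders), the mediator tasks look roughly like this:

    metaset -s mydiskset -a -m phys-schost-1 phys-schost-2   # configure both hosts as dual-string mediators
    medstat -s mydiskset                                      # check the status of mediator data
    metaset -s mydiskset -d -m phys-schost-1                  # if a mediator reports bad data, remove it...
    metaset -s mydiskset -a -m phys-schost-1                  # ...and add it back to restore good mediator data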

The following example helps to explain the process for determining the number of disk drives to place in each diskset. In this example, three storage devices are used. The following table shows the calculations that are used to determine the number of disk drives needed in the sample configuration. In a configuration with three storage devices, you would need 28 disk drives, which would be divided as evenly as possible among the three storage devices.

Note that the 5-Gbyte file systems were given an additional 1 Gbyte of disk space because the number of disk drives needed was rounded up. The following table shows the allocation of disk drives among the two disksets and four data services.

Initially, four disk drives on each storage device (a total of 12 disks) are assigned to dg-schost-1, and five or six disk drives on each (a total of 16 disks) are assigned to dg-schost-2. No hot spare disks are assigned to either diskset. A minimum of one hot spare disk per storage device per diskset enables one drive to be hot spared, which restores full two-way mirroring.

If you installed Solaris 9 software, do not perform this procedure; Solaris Volume Manager software is installed with Solaris 9 software. Instead, go to Mirroring the Root Disk.

Have the following information available: mappings of your storage disk drives and your completed configuration planning worksheets. See Planning Volume Management for planning guidelines. Install the Solstice DiskSuite software packages in the order shown in the following example. If you have Solstice DiskSuite software patches to install, do not reboot after you install the Solstice DiskSuite software.
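As a sketch of the installation step, assuming Solstice DiskSuite 4.2.1 media with the packages in the current directory (confirm the package names and order against your installation documentation):

    pkgadd -d . SUNWmdr SUNWmdu SUNWmdx   # core driver, user commands, 64-bit driver support, in this order
    # Apply any Solstice DiskSuite patches with patchadd now, and do not reboot yet.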

See your Solstice DiskSuite installation documentation for information about optional software packages. Repeat Step 1 through Step 6 on the other nodes of the cluster. From one node of the cluster, manually populate the global-device namespace for Solstice DiskSuite.
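Assuming a Sun Cluster 3.x environment, populating the global-device namespace from one node is roughly:

    /usr/cluster/bin/scgdevs    # run on one node only; populates the global-device namespace
    scdidadm -L                 # optionally confirm the device-ID mappings afterward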

This procedure describes how to determine the number of Solstice DiskSuite metadevice or Solaris Volume Manager volume names and disksets that are needed for your configuration. The default number of metadevice or volume names per diskset is 128, but many configurations need more than the default. Increase this number before you implement a configuration, to save administration time later.

Determine the total number of disksets you expect to need in the cluster, then add one more diskset for private disk management. The cluster can have a maximum of 32 disksets, 31 disksets for general use plus one diskset for private disk management. The default number of disksets is 4. Determine the largest metadevice or volume name you expect to need for any diskset in the cluster.

Each diskset can have a maximum of 8192 metadevice or volume names. You supply this value for the nmd field in Step 4. If you use local metadevices or volumes, ensure that each local metadevice or volume name is unique throughout the cluster and does not use the same name as any device-ID name in the cluster.

Choose a range of numbers to use exclusively for device-ID names and a range for each node to use exclusively for its local metadevice or volume names. For example, device-ID names might use the range from d1 to d100, local metadevices or volumes on node 1 might use names in the range from d100 to d199, and local metadevices or volumes on node 2 might use d200 to d299. The quantity of metadevice or volume names to set is based on the highest metadevice or volume name value rather than on the actual quantity.

Set the nmd field to the value that was determined in Step 3. On each node, perform a reconfiguration reboot. Create state database replicas on one or more local disks for each cluster node by using the metadb command. Also, you can place replicas on more than one disk to provide protection if one of the disks fails. Verify the replicas. The following example shows three Solstice DiskSuite state database replicas. Each replica is created on a different disk. For Solaris Volume Manager, the replica size would be larger.
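A minimal sketch, assuming you calculated 5 disksets and metadevice names up to d1024, and that slices c0t0d0s7, c0t1d0s7, and c1t0d0s7 are free for replicas (all values and device names are placeholders):

    # In /kernel/drv/md.conf on each node, set the calculated values, for example:
    #     md_nsets=5 nmd=1024
    touch /reconfigure                        # request a reconfiguration reboot
    shutdown -g0 -y -i6                       # reboot so the md driver reads the new values
    metadb -af c0t0d0s7 c0t1d0s7 c1t0d0s7     # create three state database replicas, one per disk
    metadb                                    # verify the replicas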

Mirroring the root disk prevents the cluster node itself from shutting down because of a system disk failure. Four types of file systems can reside on the root disk. Each file-system type is mirrored by using a different method.

Use the following procedures to mirror each type of file system. Some of the steps in these mirroring procedures can cause an error message similar to the following, which is harmless and can be ignored. If you specify this path for anything other than cluster file systems, the system cannot boot. Use the metainit(1M) command to put the root slice in a single-slice (one-way) concatenation. Specify the physical disk name of the root-disk slice (cNtXdYsZ).

Create a second concatenation. Create a one-way mirror with one submirror. This command flushes all transactions out of the log and writes the transactions to the master file system on all mounted UFS file systems. Move any resource groups or device groups from the node. Use the metattach(1M) command to attach the second submirror to the mirror. Is the disk that is used to mirror the root disk physically connected to more than one node (multiported)? If not, skip ahead to recording the alternate boot path. If it is, enable the localonly property of the raw-disk device group for the disk used to mirror the root disk, as described below.
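As a minimal sketch, assuming the root slice is c0t0d0s0, the mirror slice is c2t2d0s0, and metadevice names d10, d20, and d0 are free (all placeholders), the sequence just described looks roughly like this:

    metainit -f d10 1 1 c0t0d0s0     # force a one-way concatenation on the mounted root slice
    metainit d20 1 1 c2t2d0s0        # second concatenation, on the disk that will hold the mirror
    metainit d0 -m d10               # one-way mirror d0 with the single submirror d10
    metaroot d0                      # update /etc/system and /etc/vfstab for the mirrored root
    lockfs -fa                       # flush all transactions out of the log on mounted UFS file systems
    scswitch -S -h phys-schost-1     # evacuate any resource groups and device groups from the node
    shutdown -g0 -y -i6              # reboot the node
    metattach d0 d20                 # attach the second submirror; synchronization starts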

You must enable the localonly property to prevent unintentional fencing of a node from its boot device if the boot device is connected to multiple nodes. If necessary, use the scdidadm(1M) -L command to display the full device-ID path name of the raw-disk device group. Remove all nodes from the node list for the raw-disk device group except the node whose root disk you mirrored.

Only the node whose root disk you mirrored should remain in the node list. Use the scconf(1M) command to enable the localonly property. When the localonly property is enabled, the raw-disk device group is used exclusively by the node in its node list. This usage prevents unintentional fencing of the node from its boot device if the boot device is connected to multiple nodes. Record the alternate boot path for possible future use. If the primary boot device fails, you can then boot from this alternate boot device.
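For illustration, assuming the mirror disk c2t2d0 corresponds to the raw-disk device group dsk/d2 and that phys-schost-3 is another node to remove from the node list (both are placeholders; check the scdidadm output on your cluster):

    scdidadm -L | grep c2t2d0                          # find the device-ID name of the mirror disk
    scconf -r -D name=dsk/d2,nodelist=phys-schost-3    # remove every node except the one whose root disk you mirrored
    scconf -c -D name=dsk/d2,localonly=true            # enable the localonly property
    ls -l /dev/rdsk/c2t2d0s0                           # record the physical device path as the alternate boot path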

Repeat Step 1 through Step 11 on each remaining node of the cluster. For example, mirror d0 on the node phys-schost-1 might consist of submirror d10 on partition c0t0d0s0 and submirror d20 on partition c2t2d0s0. Disk c2t2d0 is a multiported disk, so the localonly property is enabled. Use the physical disk name of the disk slice (cNtXdYsZ).

This attachment starts a synchronization of the submirrors. Replace the names in the device-to-mount and device-to-fsck columns of /etc/vfstab with the mirror name. Repeat Step 1 through Step 6 on each remaining node of the cluster. Wait for the synchronization of the mirrors, started in Step 5, to complete. Use the metastat(1M) command to view mirror status and to verify that mirror synchronization is complete.
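For example, assuming the global-devices file system of node 1 is now mirrored as d101 (a placeholder name), the edited /etc/vfstab entry and the verification command might look like this:

    # /etc/vfstab entry; the device-to-mount and device-to-fsck columns name the mirror:
    /dev/md/dsk/d101  /dev/md/rdsk/d101  /global/.devices/node@1  ufs  2  no  global
    metastat d101       # repeat until the mirror and its submirrors report a state of Okay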

Is the disk that is used to mirror the global namespace physically connected to more than one node (multiported)? If yes, perform the following steps to enable the localonly property of the raw-disk device group for the disk used to mirror the global namespace. If necessary, use the scdidadm(1M) command to display the full device-ID path name of the raw-disk device group. Output looks similar to the following.

Remove all nodes from the node list for the raw-disk device group, except the node whose disk is mirrored. Only the node whose disk is mirrored should remain in the node list. If new volumes are dynamically created as needed on your system, do not perform the following procedure; instead, go to Mirroring the Root Disk. Otherwise, this procedure describes how to determine the number of Solstice DiskSuite metadevice or Solaris Volume Manager volume names and disk sets that are needed for your configuration.

The default number of metadevice or volume names per disk set is 128, but many configurations need more than the default. Increase this number before you implement a configuration, to save administration time later. Calculate the total number of disk sets that you expect to need in the cluster, then add one more disk set for private disk management. The cluster can have a maximum of 32 disk sets, 31 disk sets for general use plus one disk set for private disk management.

The default number of disk sets is 4. Calculate the largest metadevice or volume name that you expect to need for any disk set in the cluster. Each disk set can have a maximum of 8192 metadevice or volume names. You supply this value for the nmd field in Step 3. Choose a range of numbers to use exclusively for device-ID names and a range for each node to use exclusively for its local metadevice or volume names.

For example, device-ID names might use the range from d1 to d100, local metadevices or volumes on node 1 might use names in the range from d100 to d199, and local metadevices or volumes on node 2 might use d200 to d299. The quantity of metadevice or volume names to set is based on the highest metadevice or volume name value rather than on the actual quantity. For example, if your metadevice or volume names range from d950 to d1000, Solstice DiskSuite or Solaris Volume Manager software requires that you set the value at 1000 names, not 50. Failure to follow this guideline can result in serious Solstice DiskSuite or Solaris Volume Manager errors and possible loss of data.

Set the nmd field to the value that you determined in Step 2. Create local state database replicas. To provide protection of state data, which is necessary to run Solstice DiskSuite or Solaris Volume Manager software, create at least three replicas for each node. Also, you can place replicas on more than one device to provide protection if one of the devices fails.

For example, you might create three Solstice DiskSuite state database replicas, each on a different device. For Solaris Volume Manager, the replica size would be larger. To mirror file systems on the root disk, go to Mirroring the Root Disk. Mirroring the root disk prevents the cluster node itself from shutting down because of a system disk failure. Four types of file systems can reside on the root disk.

Each file-system type is mirrored by using a different method. If you specify this path for anything other than cluster file systems, the system cannot boot. Specify the physical disk name of the root-disk slice (cNtXdYsZ). This command flushes all transactions out of the log and writes the transactions to the master file system on all mounted UFS file systems.

Use the metattach(1M) command to attach the second submirror to the mirror. If the disk that is used to mirror the root disk is physically connected to more than one node (multihosted), enable the localonly property. Perform the following steps to enable the localonly property of the raw-disk device group for the disk that is used to mirror the root disk. You must enable the localonly property to prevent unintentional fencing of a node from its boot device if the boot device is connected to multiple nodes.

If necessary, use the scdidadm(1M) -L command to display the full device-ID path name of the raw-disk device group. If the node list contains more than one node name, remove all nodes from the node list except the node whose root disk you mirrored. Only the node whose root disk you mirrored should remain in the node list for the raw-disk device group.

Use the scconf(1M) command to enable the localonly property. When the localonly property is enabled, the raw-disk device group is used exclusively by the node in its node list. This usage prevents unintentional fencing of the node from its boot device if the boot device is connected to multiple nodes. Record the alternate boot path for possible future use. If the primary boot device fails, you can then boot from this alternate boot device.

Repeat Step 1 through Step 11 on each remaining node of the cluster. For example, mirror d0 on the node phys-schost-1 might consist of submirror d10 on partition c0t0d0s0 and submirror d20 on partition c2t2d0s0. Device c2t2d0 is a multihost disk, so the localonly property is enabled.

RAID 0 (stripe and concatenation): you can create a concatenation from slice 2 of three disks, as sketched below. In the case of mirroring the root partition, you need to follow a few more steps, described above. To grow a metadevice, attach a slice to the end and then grow the underlying file system, also sketched below.
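As a sketch with placeholder device and metadevice names: a concatenation d30 built from slice 2 of three disks, a stripe d35 for comparison, and an existing metadevice d40 (mounted at /data in this example) grown by attaching a slice and then growing the file system:

    metainit d30 3 1 c1t0d0s2 1 c1t1d0s2 1 c1t2d0s2       # concatenation of three single-slice stripes
    metainit d35 1 3 c3t0d0s2 c3t1d0s2 c3t2d0s2 -i 32k    # a three-way stripe with a 32-Kbyte interlace
    metattach d40 c1t3d0s2                                 # attach a slice to the end of metadevice d40
    growfs -M /data /dev/md/rdsk/d40                       # grow the mounted UFS file system online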

When a hot spare disk is needed, the smallest capable disk in the hot spare pool is used to replace the failed one; a sketch of setting up a hot spare pool is shown below. Below that are some of the troubleshooting commands.
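A hedged sketch of creating a hot spare pool and associating it with a submirror (the pool name hsp001, submirror d10, and the slices are placeholders):

    metainit hsp001 c2t3d0s2      # create a hot spare pool containing one slice
    metahs -a hsp001 c3t3d0s2     # add another slice to the pool
    metaparam -h hsp001 d10       # associate the pool with submirror d10
    metahs -i hsp001              # display the status of the hot spare pool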

Use the -s [metaset] option when using the commands on the metasets.
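A few commonly used status and troubleshooting commands, as a sketch (the diskset name mydiskset is a placeholder):

    metastat                   # status of all local metadevices and hot spare pools
    metastat -s mydiskset      # status of the metadevices in a diskset
    metadb -i                  # state database replica status, with an explanation of the flags
    metaset                    # list disksets, their hosts, and their drives
    medstat -s mydiskset       # dual-string mediator status for a diskset
    scdidadm -L                # device-ID to physical-path mappings across the cluster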


