- Configuring multipathing for the FC cards on a server makes data access safer: if one of the two FC links from the server's FC cards is disconnected, I/O continues over the remaining path.
- Enable multipathing for the FC cards with the following command:
# stmsboot -e
WARNING: stmsboot operates on each supported multipath-capable controller detected in a host. In your system, these controllers are
/pci@7c0/pci@0/pci@1/pci@0,2/SUNW,emlxs@1/fp@0,0
/pci@7c0/pci@0/pci@1/pci@0,2/SUNW,emlxs@2/fp@0,0
/pci@780/pci@0/pci@9/scsi@0
If you do NOT wish to operate on these controllers, please quit stmsboot and re-invoke with -D { fp | mpt | mpt_sas} to specify which controllers you wish to modify your multipathing configuration for.
Do you wish to continue? [y/n] (default: y)
- A server reboot is required for the multipath feature to take effect.
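After the reboot you can also list how the old device names map to the new multipathed names, as an extra sanity check (the -L option lists the mappings; output is omitted here):
# stmsboot -L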
- Verify the multipath configuration:
# luxadm probe
No Network Array enclosures found in /dev/es
Found Fibre Channel device(s):
Node WWN:200400a0b84700ce Device Type:Disk device
Logical Path:/dev/rdsk/c3t201400A0B84700CEd31s2
Node WWN:200400a0b84700ce Device Type:Disk device
Logical Path:/dev/rdsk/c4t600A0B8000330D0A00001A964ED581DDd0s2
#
#
# luxadm display /dev/rdsk/c4t600A0B8000330D0A00001A964ED581DDd0s2
DEVICE PROPERTIES for disk: /dev/rdsk/c4t600A0B8000330D0A00001A964ED581DDd0s2
Vendor: STK
Product ID: FLEXLINE 380
Revision: 0660
Serial Num: SP74542576
Unformatted capacity: 512000.000 MBytes
Write Cache: Enabled
Read Cache: Enabled
Minimum prefetch: 0x3
Maximum prefetch: 0x3
Device Type: Disk device
Path(s):
/dev/rdsk/c4t600A0B8000330D0A00001A964ED581DDd0s2
/devices/scsi_vhci/ssd@g600a0b8000330d0a00001a964ed581dd:c,raw
Controller /devices/pci@7c0/pci@0/pci@1/pci@0,2/SUNW,emlxs@2/fp@0,0
Device Address 201400a0b84700ce,0
Host controller port WWN 10000000c9c25d7d
Class secondary
State STANDBY
Controller /devices/pci@7c0/pci@0/pci@1/pci@0,2/SUNW,emlxs@1/fp@0,0
Device Address 201500a0b84700ce,0
Host controller port WWN 10000000c9c25de9
Class primary
State ONLINE
#
- From the output above, it is easy to see that the volume is reached over two FC paths, with the primary path ONLINE and the secondary path in STANDBY.
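If the MPxIO stack is in use, the same information can be cross-checked with mpathadm; this is only an optional extra check, reusing the multipathed device name from the luxadm output above:
# mpathadm list lu
# mpathadm show lu /dev/rdsk/c4t600A0B8000330D0A00001A964ED581DDd0s2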
Tuesday, July 31, 2012
Running an SSH Server on Multiple Ports
It's pretty easy to do on your Linux box. These instructions are
tested on OpenSuse 10.1 but they should work equally well on any Linux.
On the machine that's running sshd, the ssh server, edit
/etc/ssh/sshd_config. In it you'll see one directive on each line.
Here's a snippet:
GatewayPorts yes
X11Forwarding yes
#X11DisplayOffset 10
#X11UseLocalhost yes
In these lines, the ones that start with a # don't do anything - they're comments for your reference. Often sshd_config has default values for many of the most common options included with a # in front of them. So you might have a line like
#AllowTcpForwarding yes
With the # it doesn't do anything. Since 22 is the default value for Port, sshd will behave the same if you have no Port directive at all or if you have this comment.
#Port 22
The lines that have no # in front of them are directives. They tell sshd what you want it to do for any given option. So a line like
Port 22
tells sshd to listen for connections on port 22. The ssh server accepts multiple Port directives and will listen on multiple ports if you want it to. If you want to have sshd listen on ports 22, 80 and 32022 you need lines like this
Port 22
Port 80
Port 32022
Note that port 80 is normally used by web servers - it's said to be a Well Known Port Number. Using port 80 for ssh will let you use ssh to connect through most firewalls and proxies. If you decide to do this then make sure that you don't also have a web server trying to use port 80 for incoming connections. Port 32022 isn't reserved for anything (as far as I know), but a random hacker wouldn't connect to it as their first try for an ssh connection. Port numbers go up to sixty-something thousand.
After you edit sshd_config and save it, you have to restart the ssh server in order for your changes to take effect. If you're making the changes while logged in over an ssh shell (i.e. somewhere other than in front of the computer running sshd), be aware that you may lose your connection when you restart (you should also read to the end of this post before restarting). I restart sshd like this:
ruby:/etc/ssh # /etc/init.d/sshd restart
Shutting down SSH daemon done
Starting SSH daemon done
Once you've made the change and restarted, test your new configuration either from the console or another machine on your LAN. Supposing you used port 32022 you could test it locally like this:
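A minimal local test, assuming a user account exists on the box; the -p flag tells the ssh client which port to connect to:
ssh -p 32022 localhost
If sshd is listening on the new port you get the usual login prompt; a "Connection refused" error means the new Port directive is not in effect.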
Managing ZFS Storage Pool Properties
You can use the zpool get command to display pool property information. For example:
# zpool get all u03
NAME  PROPERTY       VALUE               SOURCE
u03   size           68G                 -
u03   capacity       0%                  -
u03   altroot        -                   default
u03   health         ONLINE              -
u03   guid           601891032394735745  default
u03   version        22                  default
u03   bootfs         -                   default
u03   delegation     on                  default
u03   autoreplace    off                 default
u03   cachefile      -                   default
u03   failmode       wait                default
u03   listsnapshots  on                  default
u03   autoexpand     off                 default
u03   free           68.0G               -
u03   allocated      76.5K               -
Storage pool properties can be set with the zpool set command. For example:
# zpool set autoreplace=on mpool
# zpool get autoreplace mpool
NAME   PROPERTY     VALUE  SOURCE
mpool  autoreplace  on     default
Table 4–1 ZFS Pool Property Descriptions
Property Name | Type | Default Value | Description |
---|---|---|---|
allocated | String | N/A | Read-only value that identifies the amount of storage space within the pool that has been physically allocated. |
altroot | String | off | Identifies an alternate root directory. If set, this directory is prepended to any mount points within the pool. This property can be used when you are examining an unknown pool, if the mount points cannot be trusted, or in an alternate boot environment, where the typical paths are not valid. |
autoreplace | Boolean | off | Controls automatic device replacement. If set to off, device replacement must be initiated by using the zpool replace command. If set to on, any new device found in the same physical location as a device that previously belonged to the pool is automatically formatted and replaced. The property abbreviation is replace. |
bootfs | Boolean | N/A | Identifies the default bootable dataset for the root pool. This property is typically set by the installation and upgrade programs. |
cachefile | String | N/A | Controls where pool configuration information is cached. All pools in the cache are automatically imported when the system boots. However, installation and clustering environments might require this information to be cached in a different location so that pools are not automatically imported. You can set this property to cache pool configuration information in a different location. This information can be imported later by using the zpool import -c command. For most ZFS configurations, this property is not used. |
capacity | Number | N/A | Read-only value that identifies the percentage of pool space used. The property abbreviation is cap. |
delegation | Boolean | on | Controls whether a nonprivileged user can be granted access permissions that are defined for a dataset. For more information, see Chapter 9, Oracle Solaris ZFS Delegated Administration. |
failmode | String | wait | Controls the system behavior if a catastrophic pool failure occurs. This condition is typically a result of a loss of connectivity to the underlying storage device or devices, or a failure of all devices within the pool. The behavior of such an event is determined by one of the following values: wait, continue, or panic. |
free | String | N/A | Read-only value that identifies the number of blocks within the pool that are not allocated. |
guid | String | N/A | Read-only property that identifies the unique identifier for the pool. |
health | String | N/A | Read-only property that identifies the current health of the pool, as either ONLINE, DEGRADED, FAULTED, OFFLINE, REMOVED, or UNAVAIL. |
listsnapshots | String | on | Controls whether snapshot information that is associated with this pool is displayed with the zfs list command. If this property is disabled, snapshot information can be displayed with the zfs list -t snapshot command. |
size | Number | N/A | Read-only property that identifies the total size of the storage pool. |
version | Number | N/A | Identifies the current on-disk version of the pool. The preferred method of updating pools is with the zpool upgrade command, although this property can be used when a specific version is needed for backwards compatibility. This property can be set to any number between 1 and the current version reported by the zpool upgrade -v command. |
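As a small illustration of the table above, a single property can also be queried by name instead of asking for all of them (the pool name tank and the default value shown here are just examples):
# zpool get failmode tank
NAME  PROPERTY  VALUE  SOURCE
tank  failmode  wait   default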
Displaying Information About ZFS Storage Pools
You can use the zpool list command to display basic
information about pools.
Listing Information About All Storage Pools or a Specific Pool
With no arguments, the zpool list command displays the following information for all pools on the system:
# zpool list
NAME    SIZE  ALLOC   FREE  CAP  HEALTH  ALTROOT
tank   80.0G  22.3G  47.7G  28%  ONLINE  -
dozer   1.2T   384G   816G  32%  ONLINE  -
This command output displays the following information:
- NAME
-
The name of the pool.
- SIZE
-
The total size of the pool, equal to the sum of the sizes
of all top-level virtual devices.
- ALLOC
-
The amount of physical space allocated to all datasets and
internal metadata. Note that this amount differs from the amount of disk space
as reported at the file system level.
For more information about determining available file system space, see ZFS Disk Space Accounting.
- FREE
-
The amount of unallocated space in the pool.
- CAP (CAPACITY)
-
The amount of disk space used, expressed as a percentage of
the total disk space.
- HEALTH
-
The current health status of the pool.
For more information about pool health, see Determining the Health Status of ZFS Storage Pools.
- ALTROOT
-
The alternate root of the pool, if one exists.
For more information about alternate root pools, see Using ZFS Alternate Root Pools.
# zpool list tank
NAME    SIZE  ALLOC   FREE  CAP  HEALTH  ALTROOT
tank   80.0G  22.3G  47.7G  28%  ONLINE  -
Listing Specific Storage Pool Statistics
Specific statistics can be requested by using the -o option. This option provides custom reports or a quick way to list pertinent information. For example, to list only the name and size of each pool, you use the following syntax:
# zpool list -o name,size
NAME    SIZE
tank   80.0G
dozer   1.2T
The column names correspond to the properties that are listed in Listing Information About All Storage Pools or a Specific Pool.
Scripting ZFS Storage Pool Output
The default output for the zpool list command is designed for readability and is not easy to use as part of a shell script. To aid programmatic uses of the command, the -H option can be used to suppress the column headings and separate fields by tabs, rather than by spaces. For example, to request a list of all pool names on the system, you would use the following syntax:
# zpool list -Ho name
tank
dozer
Here is another example:
# zpool list -H -o name,size
tank   80.0G
dozer   1.2T
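As a small illustration of why the tab-separated form is script-friendly, the pool names can be fed straight into a loop; this sketch simply re-checks the health of every pool and is not part of the original guide:
# for pool in `zpool list -H -o name`; do zpool status -x $pool; done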
Displaying ZFS Storage Pool Command History
ZFS automatically logs successful zfs and zpool commands that modify pool state information. This information can be displayed by using the zpool history command. For example, the following syntax displays the command output for the root pool:
# zpool history History for 'rpool': 2010-05-11.10:18:54 zpool create -f -o failmode=continue -R /a -m legacy -o cachefile=/tmp/root/etc/zfs/zpool.cache rpool mirror c1t0d0s0 c1t1d0s0 2010-05-11.10:18:55 zfs set canmount=noauto rpool 2010-05-11.10:18:55 zfs set mountpoint=/rpool rpool 2010-05-11.10:18:56 zfs create -o mountpoint=legacy rpool/ROOT 2010-05-11.10:18:57 zfs create -b 8192 -V 2048m rpool/swap 2010-05-11.10:18:58 zfs create -b 131072 -V 1536m rpool/dump 2010-05-11.10:19:01 zfs create -o canmount=noauto rpool/ROOT/zfsBE 2010-05-11.10:19:02 zpool set bootfs=rpool/ROOT/zfsBE rpool 2010-05-11.10:19:02 zfs set mountpoint=/ rpool/ROOT/zfsBE 2010-05-11.10:19:03 zfs set canmount=on rpool 2010-05-11.10:19:04 zfs create -o mountpoint=/export rpool/export 2010-05-11.10:19:05 zfs create rpool/export/home 2010-05-11.11:11:10 zpool set bootfs=rpool rpool 2010-05-11.11:11:10 zpool set bootfs=rpool/ROOT/zfsBE rpool |
You can use similar output on your system to identify the actual ZFS commands that were executed to troubleshoot an error condition.
The features of the history log are as follows:
-
The log cannot be disabled.
-
The log is saved persistently on disk, which means that the
log is saved across system reboots.
-
The log is implemented as a ring buffer. The minimum size
is 128 KB. The maximum size is 32 MB.
-
For smaller pools, the maximum size is capped at 1 percent
of the pool size, where the size is determined
at pool creation time.
-
The log requires no administration, which means that tuning
the size of the log or changing the location of the log is unnecessary.
# zpool history tank History for 'tank': 2010-05-13.14:13:15 zpool create tank mirror c1t2d0 c1t3d0 2010-05-13.14:21:19 zfs create tank/snaps 2010-05-14.08:10:29 zfs create tank/ws01 2010-05-14.08:10:54 zfs snapshot tank/ws01@now 2010-05-14.08:11:05 zfs clone tank/ws01@now tank/ws01bugfix |
Use the -l option to display a long format that includes the user name, the host name, and the zone in which the operation was performed. For example:
# zpool history -l tank
History for 'tank':
2010-05-13.14:13:15 zpool create tank mirror c1t2d0 c1t3d0 [user root on neo]
2010-05-13.14:21:19 zfs create tank/snaps [user root on neo]
2010-05-14.08:10:29 zfs create tank/ws01 [user root on neo]
2010-05-14.08:10:54 zfs snapshot tank/ws01@now [user root on neo]
2010-05-14.08:11:05 zfs clone tank/ws01@now tank/ws01bugfix [user root on neo]
Use the -i option to display internal event information that can be used for diagnostic purposes. For example:
# zpool history -i tank
2010-05-13.14:13:15 zpool create -f tank mirror c1t2d0 c1t23d0
2010-05-13.14:13:45 [internal pool create txg:6] pool spa 19; zfs spa 19; zpl 4;...
2010-05-13.14:21:19 zfs create tank/snaps
2010-05-13.14:22:02 [internal replay_inc_sync txg:20451] dataset = 41
2010-05-13.14:25:25 [internal snapshot txg:20480] dataset = 52
2010-05-13.14:25:25 [internal destroy_begin_sync txg:20481] dataset = 41
2010-05-13.14:25:26 [internal destroy txg:20488] dataset = 41
2010-05-13.14:25:26 [internal reservation set txg:20488] 0 dataset = 0
2010-05-14.08:10:29 zfs create tank/ws01
2010-05-14.08:10:54 [internal snapshot txg:53992] dataset = 42
2010-05-14.08:10:54 zfs snapshot tank/ws01@now
2010-05-14.08:11:04 [internal create txg:53994] dataset = 58
2010-05-14.08:11:05 zfs clone tank/ws01@now tank/ws01bugfix
Viewing I/O Statistics for ZFS Storage Pools
To request I/O statistics for a pool or specific virtual devices, use the zpool iostat command. Similar to the iostat command, this command can display a static snapshot of all I/O activity, as well as updated statistics for every specified interval. The following statistics are reported:
- alloc capacity
-
The amount of data currently stored in the pool or device.
This amount differs from the amount of disk space available to actual file
systems by a small margin due to internal implementation details.
For more information about the differences between pool space and dataset space, see ZFS Disk Space Accounting.
- free capacity
-
The amount of disk space available in the pool or device.
As with the used statistic, this amount differs from the
amount of disk space available to datasets by a small margin.
- read operations
-
The number of read I/O operations sent to the pool or device,
including metadata requests.
- write operations
-
The number of write I/O operations sent to the pool or device.
- read bandwidth
-
The bandwidth of all read operations (including metadata),
expressed as units per second.
- write bandwidth
-
The bandwidth of all write operations, expressed as units
per second.
Listing Pool-Wide I/O Statistics
With no options, the zpool iostat command displays the accumulated statistics since boot for all pools on the system. For example:
# zpool iostat
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
rpool       6.05G  61.9G      0      0    786    107
tank        31.3G  36.7G      4      1   296K  86.1K
----------  -----  -----  -----  -----  -----  -----
Because these statistics are cumulative since boot, bandwidth might appear low if the pool is relatively idle. You can request a more accurate view of current bandwidth usage by specifying an interval. For example:
# zpool iostat tank 2
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
tank        18.5G  49.5G      0    187      0  23.3M
tank        18.5G  49.5G      0    464      0  57.7M
tank        18.5G  49.5G      0    457      0  56.6M
tank        18.8G  49.2G      0    435      0  51.3M
In this example, the command displays usage statistics for the pool tank every two seconds until you type Control-C. Alternately, you can specify an additional count argument, which causes the command to terminate after the specified number of iterations. For example, zpool iostat 2 3 would print a summary every two seconds for three iterations, for a total of six seconds. If there is only a single pool, then the statistics are displayed on consecutive lines. If more than one pool exists, then an additional dashed line delineates each iteration to provide visual separation.
Listing Virtual Device I/O Statistics
In addition to pool-wide I/O statistics, the zpool iostat command can display I/O statistics for virtual devices. This command can be used to identify abnormally slow devices or to observe the distribution of I/O generated by ZFS. To request the complete virtual device layout as well as all I/O statistics, use the zpool iostat -v command. For example:
# zpool iostat -v
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
rpool       6.05G  61.9G      0      0    785    107
  mirror    6.05G  61.9G      0      0    785    107
  c1t0d0s0      -      -      0      0    578    109
  c1t1d0s0      -      -      0      0    595    109
----------  -----  -----  -----  -----  -----  -----
tank        36.5G  31.5G      4      1   295K   146K
  mirror    36.5G  31.5G    126     45  8.13M  4.01M
    c1t2d0      -      -      0      3   100K   386K
    c1t3d0      -      -      0      3   104K   386K
----------  -----  -----  -----  -----  -----  -----
Note two important points when viewing I/O statistics for virtual devices:
-
First, disk space usage statistics are only available for
top-level virtual devices. The way in which disk space is allocated among
mirror and RAID-Z virtual devices is particular to the implementation and
not easily expressed as a single number.
-
Second, the numbers might not add up exactly as you would
expect them to. In particular, operations across RAID-Z and mirrored devices
will not be exactly equal. This difference is particularly noticeable immediately
after a pool is created, as a significant amount of I/O is done directly to
the disks as part of pool creation, which is not accounted for at the mirror
level. Over time, these numbers gradually equalize. However, broken, unresponsive,
or offline devices can affect this symmetry as well.
Determining the Health Status of ZFS Storage Pools
ZFS provides an integrated method of examining pool and device health. The health of a pool is determined from the state of all its devices. This state information is displayed by using the zpool status command. In addition, potential pool and device failures are reported by fmd, displayed on the system console, and logged in the /var/adm/messages file.
This section describes how to determine pool and device health. This chapter does not document how to repair or recover from unhealthy pools. For more information about troubleshooting and data recovery, see Chapter 11, Oracle Solaris ZFS Troubleshooting and Pool Recovery.
Each device can fall into one of the following states:
- ONLINE
-
The device or virtual device is in normal working order. Although
some transient errors might still occur, the device is otherwise in working
order.
- DEGRADED
-
The virtual device has experienced a failure but can still
function. This state is most common when a mirror or RAID-Z device has lost
one or more constituent devices. The fault tolerance of the pool might be
compromised, as a subsequent fault in another device might be unrecoverable.
- FAULTED
-
The device or virtual device is completely inaccessible. This
status typically indicates total failure of the device, such that ZFS is incapable
of sending data to it or receiving data from it. If a top-level virtual device
is in this state, then the pool is completely inaccessible.
- OFFLINE
-
The device has been explicitly taken offline by the administrator.
- UNAVAIL
-
The device or virtual device cannot be opened. In some cases,
pools with UNAVAIL devices appear in DEGRADED mode.
If a top-level virtual device is UNAVAIL, then nothing
in the pool can be accessed.
- REMOVED
-
The device was physically removed while the system was running.
Device removal detection is hardware-dependent and might not be supported
on all platforms.
Basic Storage Pool Health Status
You can quickly review pool health status by using the zpool status command as follows:
# zpool status -x
all pools are healthy
Specific pools can be examined by specifying a pool name in the command syntax. Any pool that is not in the ONLINE state should be investigated for potential problems, as described in the next section.
Detailed Health Status
You can request a more detailed health summary status by using the -v option. For example:
# zpool status -v tank pool: tank state: DEGRADED status: One or more devices could not be opened. Sufficient replicas exist for the pool to continue functioning in a degraded state. action: Attach the missing device and online it using 'zpool online'. see: http://www.sun.com/msg/ZFS-8000-2Q scrub: scrub completed after 0h0m with 0 errors on Wed Jan 20 15:13:59 2010 config: NAME STATE READ WRITE CKSUM tank DEGRADED 0 0 0 mirror-0 DEGRADED 0 0 0 c1t0d0 ONLINE 0 0 0 c1t1d0 UNAVAIL 0 0 0 cannot open errors: No known data errors |
This output displays a complete description of why the pool is in its current state, including a readable description of the problem and a link to a knowledge article for more information. Each knowledge article provides up-to-date information about the best way to recover from your current problem. Using the detailed configuration information, you can determine which device is damaged and how to repair the pool.
In the preceding example, the faulted device should be replaced. After the device is replaced, use the zpool online command to bring the device online. For example:
# zpool online tank c1t0d0
Bringing device c1t0d0 online
# zpool status -x
all pools are healthy
If the autoreplace property is on, you might not have to online the replaced device.
If a pool has an offline device, the command output identifies the problem pool. For example:
# zpool status -x pool: tank state: DEGRADED status: One or more devices has been taken offline by the administrator. Sufficient replicas exist for the pool to continue functioning in a degraded state. action: Online the device using 'zpool online' or replace the device with 'zpool replace'. scrub: resilver completed after 0h0m with 0 errors on Wed Jan 20 15:15:09 2010 config: NAME STATE READ WRITE CKSUM tank DEGRADED 0 0 0 mirror-0 DEGRADED 0 0 0 c1t0d0 ONLINE 0 0 0 c1t1d0 OFFLINE 0 0 0 48K resilvered errors: No known data errors |
The READ and WRITE columns provide a count of I/O errors that occurred on the device, while the CKSUM column provides a count of uncorrectable checksum errors that occurred on the device. Both error counts indicate a potential device failure, and some corrective action is needed. If non-zero errors are reported for a top-level virtual device, portions of your data might have become inaccessible.
The errors: field identifies any known data errors.
In the preceding example output, the offline device is not causing data errors.
For more information about diagnosing and repairing faulted pools and data, see Chapter 11, Oracle Solaris ZFS Troubleshooting and Pool Recovery.
Preparing for ZFS Storage Pool Migration
Storage pools should be explicitly exported to indicate that they are
ready to be migrated. This operation flushes any unwritten data to disk, writes
data to the disk indicating that the export was done, and removes all information
about the pool from the system.
If you do not explicitly export the pool, but instead remove the disks manually, you can still import the resulting pool on another system. However, you might lose the last few seconds of data transactions, and the pool will appear faulted on the original system because the devices are no longer present. By default, the destination system cannot import a pool that has not been explicitly exported. This condition is necessary to prevent you from accidentally importing an active pool that consists of network-attached storage that is still in use on another system.
Exporting a ZFS Storage Pool
To export a pool, use the zpool export command. For example:
# zpool export tank
The command attempts to unmount any mounted file systems within the pool before continuing. If any of the file systems fail to unmount, you can forcefully unmount them by using the -f option. For example:
# zpool export tank
cannot unmount '/export/home/eschrock': Device busy
# zpool export -f tank
After this command is executed, the pool tank is no longer visible on the system.
If devices are unavailable at the time of export, the devices cannot be identified as cleanly exported. If one of these devices is later attached to a system without any of the working devices, it appears as “potentially active.”
If ZFS volumes are in use in the pool, the pool cannot be exported, even with the -f option. To export a pool with a ZFS volume, first ensure that all consumers of the volume are no longer active.
For more information about ZFS volumes, see ZFS Volumes.
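A quick way to see whether a pool still contains volumes before exporting it is to list them; shown here as an optional check, reusing the tank pool name from the earlier examples:
# zfs list -t volume -r tank
If nothing is listed, there are no volume consumers to worry about.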
Determining Available Storage Pools to Import
After the pool has been removed from the system (either through an explicit export or by forcefully removing the devices), you can attach the devices to the target system. ZFS can handle some situations in which only some of the devices are available, but a successful pool migration depends on the overall health of the devices. In addition, the devices do not necessarily have to be attached under the same device name. ZFS detects any moved or renamed devices, and adjusts the configuration appropriately. To discover available pools, run the zpool import command with no options. For example:# zpool import pool: tank id: 11809215114195894163 state: ONLINE action: The pool can be imported using its name or numeric identifier. config: tank ONLINE mirror-0 ONLINE c1t0d0 ONLINE c1t1d0 ONLINE |
In this example, the pool tank is available to be imported on the target system. Each pool is identified by a name as well as a unique numeric identifier. If multiple pools with the same name are available to import, you can use the numeric identifier to distinguish between them.
Similar to the zpool status command output, the zpool import output includes a link to a knowledge article with the most up-to-date information regarding repair procedures for the problem that is preventing a pool from being imported. In this case, the user can force the pool to be imported. However, importing a pool that is currently in use by another system over a storage network can result in data corruption and panics as both systems attempt to write to the same storage. If some devices in the pool are not available but sufficient redundant data exists to provide a usable pool, the pool appears in the DEGRADED state. For example:
# zpool import pool: tank id: 11809215114195894163 state: DEGRADED status: One or more devices are missing from the system. action: The pool can be imported despite missing or damaged devices. The fault tolerance of the pool may be compromised if imported. see: http://www.sun.com/msg/ZFS-8000-2Q config: NAME STATE READ WRITE CKSUM tank DEGRADED 0 0 0 mirror-0 DEGRADED 0 0 0 c1t0d0 UNAVAIL 0 0 0 cannot open c1t3d0 ONLINE 0 0 0 |
In this example, the first disk is damaged or missing, though you can still import the pool because the mirrored data is still accessible. If too many faulted or missing devices are present, the pool cannot be imported. For example:
# zpool import
  pool: dozer
    id: 9784486589352144634
 state: FAULTED
action: The pool cannot be imported. Attach the missing devices and try again.
   see: http://www.sun.com/msg/ZFS-8000-6X
config:
        raidz1-0  FAULTED
          c1t0d0  ONLINE
          c1t1d0  FAULTED
          c1t2d0  ONLINE
          c1t3d0  FAULTED
In this example, two disks are missing from a RAID-Z virtual device, which means that sufficient redundant data is not available to reconstruct the pool. In some cases, not enough devices are present to determine the complete configuration. In this case, ZFS cannot determine what other devices were part of the pool, though ZFS does report as much information as possible about the situation. For example:
# zpool import
  pool: dozer
    id: 9784486589352144634
 state: FAULTED
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing devices and try again.
   see: http://www.sun.com/msg/ZFS-8000-6X
config:
        dozer        FAULTED  missing device
          raidz1-0   ONLINE
            c1t0d0   ONLINE
            c1t1d0   ONLINE
            c1t2d0   ONLINE
            c1t3d0   ONLINE
Additional devices are known to be part of this pool, though their exact configuration cannot be determined.
Importing ZFS Storage Pools From Alternate Directories
By default, the zpool import command only searches devices within the /dev/dsk directory. If devices exist in another directory, or you are using pools backed by files, you must use the -d option to search alternate directories. For example:# zpool create dozer mirror /file/a /file/b # zpool export dozer # zpool import -d /file pool: dozer id: 7318163511366751416 state: ONLINE action: The pool can be imported using its name or numeric identifier. config: dozer ONLINE mirror-0 ONLINE /file/a ONLINE /file/b ONLINE # zpool import -d /file dozer |
If devices exist in multiple directories, you can specify multiple -d options.
Importing ZFS Storage Pools
After a pool has been identified for import, you can import it by specifying the name of the pool or its numeric identifier as an argument to the zpool import command. For example:
# zpool import tank
If multiple available pools have the same name, you must specify which pool to import by using the numeric identifier. For example:
# zpool import pool: dozer id: 2704475622193776801 state: ONLINE action: The pool can be imported using its name or numeric identifier. config: dozer ONLINE c1t9d0 ONLINE pool: dozer id: 6223921996155991199 state: ONLINE action: The pool can be imported using its name or numeric identifier. config: dozer ONLINE c1t8d0 ONLINE # zpool import dozer cannot import 'dozer': more than one matching pool import by numeric ID instead # zpool import 6223921996155991199 |
If the pool name conflicts with an existing pool name, you can import the pool under a different name. For example:
# zpool import dozer zeepool
This command imports the exported pool dozer using the new name zeepool.
If the pool was not cleanly exported, ZFS requires the -f flag to prevent users from accidentally importing a pool that is still in use on another system. For example:
# zpool import dozer cannot import 'dozer': pool may be in use on another system use '-f' to import anyway # zpool import -f dozer |
Note – Do not attempt to import a pool that is active on one system to another system. ZFS is not a native cluster, distributed, or parallel file system and cannot provide concurrent access from multiple, different hosts.
Pools can also be imported under an alternate root by using the -R option. For more information on alternate root pools, see Using ZFS Alternate Root Pools.
Recovering Destroyed ZFS Storage Pools
You can use the zpool import -D command to recover a storage pool that has been destroyed. For example:# zpool destroy tank # zpool import -D pool: tank id: 5154272182900538157 state: ONLINE (DESTROYED) action: The pool can be imported using its name or numeric identifier. config: tank ONLINE mirror-0 ONLINE c1t0d0 ONLINE c1t1d0 ONLINE |
In this zpool import output, you can identify the tank pool as the destroyed pool because of the following state information:
state: ONLINE (DESTROYED)
To recover the destroyed pool, run the zpool import -D command again with the pool to be recovered. For example:
# zpool import -D tank
# zpool status tank
  pool: tank
 state: ONLINE
 scrub: none requested
config:
        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE
          mirror-0  ONLINE
            c1t0d0  ONLINE
            c1t1d0  ONLINE
errors: No known data errors
If one of the devices in the destroyed pool is faulted or unavailable, you might be able to recover the destroyed pool anyway by including the -f option. In this scenario, you would import the degraded pool and then attempt to fix the device failure. For example:
# zpool destroy dozer # zpool import -D pool: dozer id: 13643595538644303788 state: DEGRADED (DESTROYED) status: One or more devices could not be opened. Sufficient replicas exist for the pool to continue functioning in a degraded state. action: Attach the missing device and online it using 'zpool online'. see: http://www.sun.com/msg/ZFS-8000-2Q config: NAME STATE READ WRITE CKSUM dozer DEGRADED 0 0 0 raidz2-0 DEGRADED 0 0 0 c2t8d0 ONLINE 0 0 0 c2t9d0 ONLINE 0 0 0 c2t10d0 ONLINE 0 0 0 c2t11d0 UNAVAIL 0 35 1 cannot open c2t12d0 ONLINE 0 0 0 errors: No known data errors # zpool import -Df dozer # zpool status -x pool: dozer state: DEGRADED status: One or more devices could not be opened. Sufficient replicas exist for the pool to continue functioning in a degraded state. action: Attach the missing device and online it using 'zpool online'. see: http://www.sun.com/msg/ZFS-8000-2Q scrub: scrub completed after 0h0m with 0 errors on Thu Jan 21 15:38:48 2010 config: NAME STATE READ WRITE CKSUM dozer DEGRADED 0 0 0 raidz2-0 DEGRADED 0 0 0 c2t8d0 ONLINE 0 0 0 c2t9d0 ONLINE 0 0 0 c2t10d0 ONLINE 0 0 0 c2t11d0 UNAVAIL 0 37 0 cannot open c2t12d0 ONLINE 0 0 0 errors: No known data errors # zpool online dozer c2t11d0 Bringing device c2t11d0 online # zpool status -x all pools are healthy |
Wednesday, July 25, 2012
Rman backup oracle
crosscheck backupset;
delete expired backupset;
crosscheck backup;
report obsolete;
delete obsolete;
list backup summary;
delete backupset 4151;
backup database;
If backups were previously written to SBT_TAPE and you then switch to backing up to DISK:
To force removal of the old RMAN records that still reference the SBT_TAPE backups, use the following commands:
allocate channel for maintenance device type sbt parms 'SBT_LIBRARY=oracle.disksbt, ENV=(BACKUP_DIR=/tmp)';
crosscheck backup;
delete NOPROMPT FORCE obsolete;
crosscheck backup;
delete expired backup;
crosscheck archivelog all;
delete expired archivelog all;
(answer yes when prompted to confirm the deletion)
DELETE ARCHIVELOG UNTIL TIME 'SYSDATE-7';
DELETE ARCHIVELOG UNTIL TIME 'SYSDATE-1';
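For routine housekeeping, the same cleanup can also be driven by a retention policy instead of ad-hoc deletes. A minimal sketch, assuming a 7-day recovery window is acceptable (run inside an RMAN session connected to the target database):
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;
REPORT OBSOLETE;
DELETE NOPROMPT OBSOLETE;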
Tuesday, July 24, 2012
Mirror disk Solaris
Configuration document
Contents:
- Mirror the boot disk on a v240 server running Solaris
Steps:
This lab was performed on a v240 server running Solaris 10, with two HDDs of equal capacity.
#1: Back up /etc/vfstab and /etc/system
#2: Check the HDD information
#3: Create a small slice of about 32 MB on the boot disk to hold the volume state databases
#4: Copy the VTOC from the boot disk to the mirror disk
#5: Create the metadevice state databases on the 32 MB slice of the boot disk
#6: Mirror each slice in turn
#7: Edit the /etc/vfstab file
#8: Configure dumpadm
#9: Reboot the server
#10: Attach the submirrors
#11: Configure the boot devices
#12: Pull the boot disk & restart
#13: Verify that the system works normally
#1: Back up /etc/vfstab and /etc/system
# cp -p /etc/system /etc/system.orig."date"
# cp -p /etc/vfstab /etc/vfstab.orig."date"
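One way to fill in the "date" suffix automatically (just an illustration; any naming convention will do):
# cp -p /etc/system /etc/system.orig.`date +%Y%m%d`
# cp -p /etc/vfstab /etc/vfstab.orig.`date +%Y%m%d`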
Remove swap in case the boot disk has no free slice left for creating the metadb:
3.1) To list your swap, use:
swap -l
(It's good if you have more than one slice configured as swap.)
3.2) Execute:
swap -d swap-name (/dev/dsk/c?t?d?s?)
NOINUSE_CHECK=1
export NOINUSE_CHECK
Change your partition table to incorporate a new slice by reducing the size or cylinder length of the swap partition.
After the resize is done, the swap area can be re-created:
3.3) Execute:
swap -a swap-name (/dev/dsk/c?t?d?s?)
#2: Check the HDD information before starting
# format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c1t0d0
/pci@1c,600000/scsi@2/sd@0,0
1. c1t2d0
/pci@1c,600000/scsi@2/sd@2,0
Boot disk information (c1t2d0)
partition> p
Current partition table (original):
Total disk cylinders available: 14087 + 2 (reserved cylinders)
Part      Tag    Flag     Cylinders        Size            Blocks
  0       root    wm       0 -  4121      20.00GB    (4122/0/0)   41945472
  1       swap    wu    4122 -  6182      10.00GB    (2061/0/0)   20972736
  2     backup    wm       0 - 14086      68.35GB    (14087/0/0) 143349312
  3        var    wm    6183 - 11335      25.00GB    (5153/0/0)   52436928
  4 unassigned    wm   11336 - 14079      13.31GB    (2744/0/0)   27922944
  5 unassigned    wm   14080 - 14086      34.78MB    (7/0/0)         71232
  6 unassigned    wm       0               0         (0/0/0)             0
  7 unassigned    wm       0               0         (0/0/0)             0
Mirror disk information (c1t0d0)
partition> p
Current partition table (original):
Total disk sectors available: 143358320 + 16384 (reserved sectors)
Part      Tag    Flag     First Sector        Size        Last Sector
  0        usr    wm             256        68.36GB         143358320
  1 unassigned    wm               0            0                   0
  2 unassigned    wm               0            0                   0
  3 unassigned    wm               0            0                   0
  4 unassigned    wm               0            0                   0
  5 unassigned    wm               0            0                   0
  6 unassigned    wm               0            0                   0
  8   reserved    wm       143358321         8.00MB         143374704
#3: On the boot disk, create a slice of about 32 MB to hold the volume state databases (slice 5)
  5 unassigned    wm   14080 - 14086      34.78MB    (7/0/0)         71232
If there is no spare slice on the disk to hold the metadb, you can shrink the swap slice and take about 500 MB of it for the metadb.
Example:
==> Resize slice 1 so that slice 5 (at least about 500 MB) can be used to store the metadb
partition> 1
Part Tag Flag Cylinders Size Blocks
1 swap wu 0 - 3297 16.00GB (3298/0/0) 33560448
Enter partition id tag[swap]:
Enter partition permission flags[wu]:
Enter new starting cyl[0]:
Enter partition size[33560448b, 3298c, 3297e, 16386.94mb, 16.00gb]: 15.00gb
partition> print
Current partition table (original):
Total disk cylinders available: 14087 + 2 (reserved cylinders)
Part Tag Flag Cylinders Size Blocks
0 root wm 3298 - 14086 52.35GB (10789/0/0) 109788864
1 swap wu 0 - 3091 15.00GB (3092/0/0) 31464192
2 backup wm 0 - 14086 68.35GB (14087/0/0) 143349312
3 unassigned wm 0 0 (0/0/0) 0
4 unassigned wm 0 0 (0/0/0) 0
5 unassigned wm 3092 - 3297 1023.56MB (206/0/0) 2096256
6 unassigned wm 0 0 (0/0/0) 0
7 unassigned wm 0 0 (0/0/0) 0
Note: create slice 5 starting at cylinder 3091 + 1 = 3092 (the first cylinder after the resized swap slice).
When finished, you must label the disk, otherwise the new table is not saved.
partition> label
Ready to label disk, continue? y
partition> q
format> q
partition> 5
Part Tag Flag Cylinders Size Blocks
5 unassigned wm 0 0 (0/0/0) 0
Enter partition id tag[unassigned]: metadb
`metadb' not expected.
Enter partition id tag[unassigned]:
Enter partition permission flags[wm]:
Enter new starting cyl[0]: 3092
Enter partition size[0b, 0c, 3092e, 0.00mb, 0.00gb]: 3297e
partition> print
Current partition table (unnamed):
Total disk cylinders available: 14087 + 2 (reserved cylinders)
Part Tag Flag Cylinders Size Blocks
0 root wm 3298 - 14086 52.35GB (10789/0/0) 109788864
1 swap wu 0 - 3091 15.00GB (3092/0/0) 31464192
2 backup wm 0 - 14086 68.35GB (14087/0/0) 143349312
3 unassigned wm 0 0 (0/0/0) 0
4 unassigned wm 0 0 (0/0/0) 0
5 unassigned wm 3092 - 3297 1023.56MB (206/0/0) 2096256
6 unassigned wm 0 0 (0/0/0) 0
7 unassigned wm 0 0 (0/0/0) 0
partition> label
Ready to label disk, continue? y
partition> q
#4: Copy the VTOC (volume table of contents) from the boot disk (c1t2d0) to the mirror disk (c1t0d0)
# prtvtoc /dev/rdsk/c1t2d0s2 | fmthard -s - /dev/rdsk/c1t0d0s2
Verify the VTOC on the mirror disk:
partition> p
Current partition table (original):
Total disk cylinders available: 14087 + 2 (reserved cylinders)
Part      Tag    Flag     Cylinders        Size            Blocks
  0       root    wm       0 -  4121      20.00GB    (4122/0/0)   41945472
  1       swap    wu    4122 -  6182      10.00GB    (2061/0/0)   20972736
  2     backup    wm       0 - 14086      68.35GB    (14087/0/0) 143349312
  3        var    wm    6183 - 11335      25.00GB    (5153/0/0)   52436928
  4 unassigned    wm   11336 - 14079      13.31GB    (2744/0/0)   27922944
  5 unassigned    wm   14080 - 14086      34.78MB    (7/0/0)         71232
  6 unassigned    wu       0               0         (0/0/0)             0
  7 unassigned    wu       0               0         (0/0/0)             0
#5: Create the metadevice state databases on slice 5 of the boot disk and of the mirror disk (the slice 5 created above)
# metadb -f -a -c3 c1t2d0s5
# metadb -a -c3 c1t0d0s5
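To confirm that the state database replicas were created on both disks, list them; this is an optional check and the status flags in the output will vary:
# metadb -i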
#6: Create a mirror for each slice on the disk, starting with slice 0
# metainit -f d10 1 1 c1t2d0s0
d10: Concat/Stripe is setup
# metainit d20 1 1 c1t0d0s0
d20: Concat/Stripe is setup
# metainit d0 -m d10
d0: Mirror is setup
# metaroot d0
# lockfs -fa
Mirror slice 1
# metainit -f d11 1 1 c1t2d0s1
d11: Concat/Stripe is setup
# metainit d21 1 1 c1t0d0s1
d21: Concat/Stripe is setup
# metainit d1 -m d11
d1: Mirror is setup
Note: if slices 3, 4 and 6 are empty, the steps below are not needed; and since slice 2 represents the whole disk, no mirror is created for slice 2.
Mirror slice 3
# metainit -f d13 1 1 c1t2d0s3
d13: Concat/Stripe is setup
# metainit d23 1 1 c1t0d0s3
d23: Concat/Stripe is setup
# metainit d3 -m d13
d3: Mirror is setup
Mirror slice 4
# metainit -f d14 1 1 c1t2d0s4
d14: Concat/Stripe is setup
# metainit d24 1 1 c1t0d0s4
d24: Concat/Stripe is setup
# metainit d4 -m d14
d4: Mirror is setup
Note: if a metadevice was created by mistake, it can be cleared:
bash-3.2# metaclear d0
d0: Mirror is cleared
bash-3.2# metaclear d10
d10: Concat/Stripe is cleared
bash-3.2# metaclear d20
d20: Concat/Stripe is cleared
#7: Edit the /etc/vfstab file
Note: comment out the following lines:
#/dev/dsk/c1t2d0s1
#/dev/dsk/c1t2d0s3
#/dev/dsk/c1t2d0s4
bash-3.00# cat /etc/vfstab
#device              device               mount     FS     fsck  mount    mount
#to mount            to fsck              point     type   pass  at boot  options
#
fd                   -                    /dev/fd   fd     -     no       -
/proc                -                    /proc     proc   -     no       -
#/dev/dsk/c1t2d0s1   -                    -         swap   -     no       -
/dev/md/dsk/d1       -                    -         swap   -     no       -
/dev/md/dsk/d0       /dev/md/rdsk/d0      /         ufs    1     no       -
#/dev/dsk/c1t2d0s3   /dev/rdsk/c1t2d0s3   /var      ufs    1     no       -
/dev/md/dsk/d3       /dev/md/rdsk/d3      /var      ufs    1     no       -
#/dev/dsk/c1t2d0s4   /dev/rdsk/c1t2d0s4   /opt      ufs    2     yes      -
/dev/md/dsk/d4       /dev/md/rdsk/d4      /opt      ufs    2     yes      -
/devices             -                    /devices  devfs  -     no       -
sharefs              -                    /etc/dfs/sharetab  sharefs  -  no  -
ctfs                 -                    /system/contract   ctfs     -  no  -
objfs                -                    /system/object     objfs    -  no  -
swap                 -                    /tmp      tmpfs  -     yes      -
#8: Configure dumpadm
bash-3.00# dumpadm
      Dump content: kernel pages
       Dump device: /dev/dsk/c1t2d0s1 (swap)
Savecore directory: /var/crash/net1
  Savecore enabled: yes
bash-3.00# dumpadm -d /dev/md/dsk/d1
      Dump content: kernel pages
       Dump device: /dev/md/dsk/d1 (dedicated)
Savecore directory: /var/crash/net1
  Savecore enabled: yes
bash-3.00# vi /etc/system
* Begin MDD root info (do not edit)
rootdev:/pseudo/md@0:0,0,blk
set md:mirrored_root_flag=1
* End MDD root info (do not edit)
#9: Reboot system
# init 6
#10: Attach submirror
# metattach d0 d20
d0: submirror d20 is attached
# metattach d1 d21
d1: submirror d21 is attached
# metattach d3 d23
d3: submirror d23 is attached
# metattach d4 d24
d4: submirror d24 is attached
Check the resync progress
# metastat | grep progress
Resync in progress: 3 % done
Resync in progress: 3 % done
Resync in progress: 12 % done
Resync in progress: 11 % done
Note: wait until the resync has finished before pulling a disk out to test.
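A compact way to watch the overall state of all metadevices while waiting (assuming the -c option is available on your Solaris 10 release):
# metastat -c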
#11: Configure the boot devices
Look up the device paths of the boot devices
# ls -l /dev/dsk/c1t2d0s0 /dev/dsk/c1t0d0s0
lrwxrwxrwx   1 root     root          43 Apr 27 17:01 /dev/dsk/c1t0d0s0 -> ../../devices/pci@1c,600000/scsi@2/sd@0,0:a
lrwxrwxrwx   1 root     root          43 Apr 27 17:01 /dev/dsk/c1t2d0s0 -> ../../devices/pci@1c,600000/scsi@2/sd@2,0:a
Create aliases for the boot devices - the system will then be able to fall back to booting from the mirror
# eeprom "nvramrc=devalias rootdisk /pci@1c,600000/scsi@2/disk@2,0 devalias rootmirror /pci@1c,600000/scsi@2/disk@0,0"
# eeprom "use-nvramrc?=true"
# eeprom boot-device="rootdisk rootmirror net"
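To double-check what was stored in the EEPROM, the variables can simply be read back (an optional verification step):
# eeprom use-nvramrc?
# eeprom boot-device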
#12: Pull the boot disk & restart
bash-3.00# format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c1t0d0
/pci@1c,600000/scsi@2/sd@0,0
Specify disk (enter its number):
#13: Verify that the system works normally
----------------------------------------------------------------
Ref Materials:
- Solaris Volume Manager Administration Guide
- Intermediate System Administration for the Solaris™ 10
====================================
after inserting the new disk
====================================
root@solaris:~ # metastat d50
d50: Mirror
    Submirror 0: d51
      State: Needs maintenance
    Submirror 1: d52
      State: Okay
    Pass: 1
    Read option: roundrobin (default)
    Write option: parallel (default)
    Size: 32776896 blocks (15 GB)
d51: Submirror of d50
    State: Needs maintenance
    Invoke: metareplace d50 c0t0d0s5
    Size: 32776896 blocks (15 GB)
    Stripe 0:
        Device     Start Block  Dbase        State Reloc Hot Spare
        c0t0d0s5          0     No     Maintenance   Yes
d52: Submirror of d50
    State: Okay
    Size: 32776896 blocks (15 GB)
    Stripe 0:
        Device     Start Block  Dbase        State Reloc Hot Spare
        c0t1d0s5          0     No            Okay   Yes
Device Relocation Information:
Device   Reloc  Device ID
c0t0d0   Yes    id1,sd@n500000e013633ce0
c0t1d0   Yes    id1,sd@n500000e013633810
=========================================================
resync the disk (repeat for all the mirrored slices: d0, d1, ..., d5)
=========================================================
root@solaris:~ # metareplace -e d50 /dev/dsk/c0t0d0s5
d50: device c0t0d0s5 is enabled
root@solaris:~ # metastat d50
d50: Mirror
    Submirror 0: d51
      State: Resyncing
    Submirror 1: d52
      State: Okay
    Resync in progress: 0 % done
    Pass: 1
    Read option: roundrobin (default)
    Write option: parallel (default)
    Size: 32776896 blocks (15 GB)
d51: Submirror of d50
    State: Resyncing
    Size: 32776896 blocks (15 GB)
    Stripe 0:
        Device     Start Block  Dbase        State Reloc Hot Spare
        c0t0d0s5          0     No       Resyncing   Yes
d52: Submirror of d50
    State: Okay
    Size: 32776896 blocks (15 GB)
    Stripe 0:
        Device     Start Block  Dbase        State Reloc Hot Spare
        c0t1d0s5          0     No            Okay   Yes
Device Relocation Information:
Device   Reloc  Device ID
c0t0d0   Yes    id1,sd@n500000e013633ce0
c0t1d0   Yes    id1,sd@n500000e013633810
-------------------------
If an error occurs after creating the mirror, go to the ok prompt, boot cdrom into single-user mode, and edit /etc/vfstab again if it was edited incorrectly.
--------------
If the mirror cannot be repaired at all:
Go to the ok prompt and boot cdrom into single-user mode, then, after mounting the mirror disk's root slice on /a, work relative to /a:
# mkdir /a
# mount /dev/dsk/c1t0d0s0 /a
# cp etc/vfstab etc/vfstab.orig
# cp etc/system etc/system.orig
# dumpadm -d /dev/dsk/c1t0d0s1
Reboot the server.
After the reboot, run:
# metadb -d /dev/dsk/c1t0d0s5
# metadb -d /dev/dsk/c1t1d0s5
Reboot the server again.