zpool(1M) - configures ZFS storage pools



  • System Administration Commands                                       zpool(1M)
    
    
    
    NAME
           zpool - configures ZFS storage pools
    
    SYNOPSIS
           zpool [-?]
    
    
           zpool help command | help | property property-name
    
    
           zpool help -l properties
    
    
           zpool add [-f] [-n [-l]] pool vdev ...
    
    
           zpool attach [-f] pool device new_device
    
    
           zpool clear [-nF [-f]] pool [device] ...
    
    
           zpool create [-f] [-n [-l]] [-B] [-N] [-o property=value] ...
                [-O file-system-property=value] ... [-m mountpoint]
                [-R root] [-t tmppool] pool vdev ...
    
    
           zpool destroy [-f] pool
    
    
           zpool detach pool device
    
    
           zpool export [-f] pool ...
    
    
           zpool get all | property[,...] pool ...
    
    
           zpool history [-il] [pool] ...
    
    
           zpool import [-d path ...] [-D] [-l]
    
    
           zpool import [-d path ... | -c cachefile] [-D] [-F [-n]] pool | id
    
    
           zpool import [-o mntopts] [-o property=value] ... [-d path ... |
                -c cachefile] [-D] [-f] [-m] [-N] [-R root] [-F [-n]] -a
                pool | id [newpool]
    
    
           zpool import [-o mntopts] [-o property=value] ... [-d path ... |
                -c cachefile] [-D] [-f] [-m] [-N] [-R root] [-F [-n]]
                [-t tmppool] pool | id [newpool]
    
    
           zpool iostat [-T d|u] [-v [-l]] [pool] ... [interval [count]]
    
    
           zpool list [-H] [-o property[,...]] [-T d|u] [pool] ... [interval [count]]
    
    
           zpool offline [-t] pool device ...
    
    
           zpool online [-e] pool device ...
    
    
           zpool remove pool device ...
    
    
           zpool replace [-f] pool device [new_device]
    
    
           zpool scrub [-s] pool ...
    
    
           zpool set property=value pool
    
    
           zpool split [-n [-l]] [-R altroot] [-o mntopts] [-o property=value] pool
                newpool [device ...]
    
    
           zpool status [-l] [-v] [-x] [-T d|u] [pool] ... [interval [count]]
    
    
           zpool upgrade
    
    
           zpool upgrade -v
    
    
           zpool upgrade [-V version] -a | pool ...
    
    
           zpool monitor -t provider [-T d|u] [[-p] -o field[,...]] [pool] ...
                [interval [count]]
    
    
    DESCRIPTION
           The  zpool  command  configures  ZFS storage pools. A storage pool is a
           collection of devices that provides physical storage and data  replica-
           tion for ZFS datasets.
    
    
           All  datasets  within  a storage pool share the same space. See zfs(1M)
           for information on managing datasets.
    
       Virtual Devices (vdevs)
           A virtual device describes a single device or a collection  of  devices
           organized  according  to certain performance and fault characteristics.
           The following virtual devices are supported:
    
           disk
    
               A block device, typically located under /dev/dsk. ZFS can use indi-
               vidual  slices or partitions, though the recommended mode of opera-
               tion is to use whole disks. A disk can be specified by a full path,
               or  it  can  be  a shorthand name (the relative portion of the path
               under /dev/dsk). A whole disk can  be  specified  by  omitting  the
               slice  or  partition designation. Alternatively, whole disks can be
               specified using the /dev/chassis/.../disk path that  describes  the
               disk's current location. When given a whole disk, ZFS automatically
               labels the disk, if necessary.
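
               For example, either of the following equivalent commands
               (device name illustrative) creates a pool on the same whole
               disk, first by shorthand name and then by full path:

                 # zpool create tank c0t0d0
                 # zpool create tank /dev/dsk/c0t0d0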
    
    
           file
    
               A regular file. The use of files as a  backing  store  is  strongly
               discouraged. It is designed primarily for experimental purposes, as
               the fault tolerance of a file is only as good as the file system of
               which it is a part. A file must be specified by a full path.
    
    
           mirror
    
               A mirror of two or more devices. Data is replicated in an identical
               fashion across all components of a mirror. A mirror with N disks of
               size  X  can  hold  X bytes and can withstand (N-1) devices failing
               before data integrity is compromised.
    
    
           raidz
           raidz1
           raidz2
           raidz3
    
               A variation on RAID-5 that allows for better distribution of parity
               and  eliminates  the  "RAID-5 write hole" (in which data and parity
               become inconsistent after a power loss). Data and parity are
               striped across all disks within a raidz group.
    
               A raidz group can have single, double, or triple parity, meaning
               that the raidz group can  sustain  one,  two,  or  three  failures,
               respectively,  without losing any data. The raidz1 vdev type speci-
               fies a single-parity raidz group; the raidz2 vdev type specifies  a
               double-parity  raidz  group;  and  the raidz3 vdev type specifies a
               triple-parity raidz group. The raidz vdev  type  is  an  alias  for
               raidz1.
    
               A  raidz  group with N disks of size X with P parity disks can hold
               approximately (N-P)*X bytes and can withstand P  device(s)  failing
               before data integrity is compromised. The minimum number of devices
               in a raidz group is one more than the number of parity  disks.  The
               recommended number is between 3 and 9 to help increase performance.
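
               For example, the following command (device names illustrative)
               creates a double-parity raidz group of five disks, providing
               approximately 3*X usable bytes for disks of size X:

                 # zpool create tank raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0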
    
    
           spare
    
               A special pseudo-vdev which keeps track of available hot spares for
               a pool. For more information, see the "Hot Spares" section.
    
    
           log
    
               A separate-intent log device. If more than one log device is speci-
               fied,  then  writes  are load-balanced between devices. Log devices
               can be mirrored. However, raidz vdev types are  not  supported  for
               the intent log. For more information, see the "Intent Log" section.
    
    
           cache
    
               A  device used to cache storage pool data. A cache device cannot be
               configured as a mirror or raidz group. For  more  information,  see
               the "Cache Devices" section.
    
    
    
           Virtual  devices  cannot be nested, so a mirror or raidz virtual device
           can only contain files or disks. Mirrors of mirrors (or other  combina-
           tions) are not allowed.
    
    
           A pool can have any number of virtual devices at the top of the config-
           uration (known as root vdevs). Data is dynamically  distributed  across
           all  top-level  devices  to  balance data among devices. As new virtual
           devices are added, ZFS automatically places data on the newly available
           devices.
    
    
           Virtual  devices are specified one at a time on the command line, sepa-
           rated by whitespace. The keywords mirror and raidz are used to  distin-
           guish where a group ends and another begins. For example, the following
           creates two root vdevs, each a mirror of two disks:
    
             # zpool create mypool mirror c0t0d0 c0t1d0 mirror c1t0d0 c1t1d0
    
    
    
    
           Alternatively, the following command, which specifies disks by
           /dev/chassis location paths, could be used:
    
             # zpool create tank \
             mirror \
                 /dev/chassis/RACK29.U01-04/DISK_00/disk \
                 /dev/chassis/RACK29.U05-08/DISK_00/disk \
             mirror \
                 /dev/chassis/RACK29.U01-04/DISK_01/disk \
                 /dev/chassis/RACK29.U05-08/DISK_01/disk
    
    
    
       Pool or Device Failure and Recovery
           ZFS supports a rich set of mechanisms for handling device  failure  and
           data corruption. All metadata and data is checksummed, and ZFS automat-
           ically repairs bad data from a good copy when corruption is detected.
    
    
           In order to take advantage of these features, a pool must make  use  of
           some  form  of redundancy, using either mirrored or raidz groups. While
           ZFS supports running in a non-redundant configuration, where each  root
           vdev  is  simply a disk or file, this is strongly discouraged. A single
           case of bit corruption can render some or all of your data unavailable.
    
    
           A pool's health status is described by one of four states:
    
           DEGRADED
    
               A pool with one or more failed  devices,  but  the  data  is  still
               available due to a redundant configuration.
    
    
           ONLINE
    
               A pool that has all devices operating normally.
    
    
           SUSPENDED
    
               A  pool  that  is waiting for device connectivity to be restored. A
               suspended pool remains in the wait state until the device issue  is
               resolved.
    
    
           UNAVAIL
    
               A  pool with corrupted metadata, or one or more unavailable devices
               and insufficient replicas to continue functioning.
    
    
    
           The health of the top-level vdev, such as mirror or  raidz  device,  is
           potentially impacted by the state of its associated vdevs, or component
           devices. A top-level vdev or component device is in one of the  follow-
           ing states:
    
           DEGRADED
    
               One or more top-level vdevs is in the degraded state because one or
               more component devices are offline. Sufficient  replicas  exist  to
               continue functioning.
    
               One  or more component devices is in the degraded or faulted state,
               but sufficient replicas exist to continue functioning. The underly-
               ing conditions are as follows:
    
                   o      The  number of checksum errors exceeds acceptable levels
                          and the device is degraded as an indication  that  some-
                          thing  may  be wrong. ZFS continues to use the device as
                          necessary.
    
                   o      The number of I/O errors exceeds acceptable levels.  The
                          device  could not be marked as faulted because there are
                          insufficient replicas to continue functioning.
    
    
           OFFLINE
    
               The device was explicitly taken offline by the zpool  offline  com-
               mand.
    
    
           ONLINE
    
               The device is online and functioning.
    
    
           REMOVED
    
               The  device  was  physically  removed while the system was running.
               Device removal detection is hardware-dependent and may not be  sup-
               ported on all platforms.
    
    
           UNAVAIL
    
               The device could not be opened. If a pool was imported while a
               device was unavailable, the device is identified by a unique
               identifier instead of its path, since the path was never
               correct in the first place.
    
    
    
           If a device is removed and later reattached to the system, ZFS attempts
           to  put  the  device  online  automatically. Device attach detection is
           hardware-dependent and might not be supported on all platforms.
    
       Hot Spares
           ZFS allows devices to be associated with pools  as  hot  spares.  These
           devices  are  not  actively used in the pool, but when an active device
           fails, it is automatically replaced by a hot spare. To  create  a  pool
           with  hot  spares, specify a spare vdev with any number of devices. For
           example,
    
             # zpool create pool mirror c0d0 c1d0 spare c2d0 c3d0
    
    
    
    
           Spares can be added with the zpool add command  and  removed  with  the
           zpool  remove  command.  Once  a  spare replacement is initiated, a new
           spare vdev is created within the configuration that will  remain  there
           until  the  original  device  is replaced. At this point, the hot spare
           becomes available again if another device fails.
    
    
           An in-progress spare replacement can be cancelled by detaching the  hot
           spare.  If  the original faulted device is detached, then the hot spare
           assumes its place in the configuration, and is removed from  the  spare
           list of all active pools.
    
    
           If the original failed device is physically replaced, brought back
           online, or its errors are cleared (either through an FMA event or
           by using the zpool online or zpool clear commands), and the device
           becomes healthy again, the INUSE spare device returns to the AVAIL
           state.
    
    
           Spares cannot replace log devices.
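
           For example, the following commands (device name illustrative) add
           a hot spare to an existing pool and later remove it:

             # zpool add pool spare c4d0
             # zpool remove pool c4d0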
    
       Intent Log
           The  ZFS  Intent Log (ZIL) satisfies POSIX requirements for synchronous
           transactions. For instance, databases often require their  transactions
           to  be on stable storage devices when returning from a system call. NFS
           and other applications can also use fsync() to ensure  data  stability.
           By  default,  the  intent  log is allocated from blocks within the main
           pool. However, it might be possible to  get  better  performance  using
           separate  intent  log  devices  such  as NVRAM or a dedicated disk. For
           example:
    
             # zpool create pool c0d0 c1d0 log c2d0
    
    
    
    
           Multiple log devices can also be specified, and they can  be  mirrored.
           See  the  EXAMPLES  section  for  an  example of mirroring multiple log
           devices.
    
    
           Log devices can be added, replaced, attached, detached, imported,
           and exported as part of the larger pool. Mirrored log devices can
           be removed by specifying the top-level mirror for the log.
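
           For example, the following command (device names illustrative)
           adds a mirrored log device to an existing pool:

             # zpool add pool log mirror c2d0 c3d0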
    
       Cache Devices
           Devices can be added to a storage pool as cache devices. These  devices
           provide  an  additional  layer of caching between main memory and disk.
           For read-heavy workloads, where the working set size is much
           larger than what can be cached in main memory, using cache devices
           allows much more of this working set to be served from low-latency
           media. Cache devices provide the greatest performance improvement
           for random read workloads of mostly static content.
    
    
           To create a pool with cache devices, specify a cache vdev with any num-
           ber of devices. For example:
    
             # zpool create pool c0d0 c1d0 cache c2d0 c3d0
    
    
    
    
           Cache devices cannot be mirrored or part of a raidz configuration. If a
           read error is encountered on a cache device, that read I/O is  reissued
           to  the original storage pool device, which might be part of a mirrored
           or raidz configuration.
    
    
           The content of the cache devices is considered volatile, as is the case
           with other system caches.
    
       Processes
           Each imported pool has an associated process, named zpool-poolname. The
           threads in this process are the pool's I/O  processing  threads,  which
           handle the compression, checksumming, and other tasks for all I/O
           associated with the pool. This process exists to provide
           visibility into the CPU utilization of the system's storage pools.
           The existence of this process is an unstable interface.
    
       Properties
           Each pool has several properties associated with  it.  Some  properties
           are  read-only  statistics while others are configurable and change the
           behavior of the pool. The following are read-only properties:
    
           allocated
    
               Amount of storage space within the pool that  has  been  physically
               allocated.  This  property can also be referred to by its shortened
               column name, alloc.
    
    
           capacity
    
               Percentage of pool space used. This property can also  be  referred
               to by its shortened column name, cap.
    
    
           dedupratio
    
               The deduplication ratio specified for a pool, expressed as a multi-
               plier. This value is expressed as  a  single  decimal  number.  For
               example,  a  dedupratio  value of 1.76 indicates that 1.76 units of
               data were stored but only 1 unit of disk space  was  actually  con-
               sumed.  This property can also be referred to by its shortened col-
               umn name, dedup.
    
               Deduplication can be enabled as follows:
    
                 # zfs set dedup=on pool/dataset
    
    
               The default value is off.
    
               See zfs(1M) for a description of the deduplication feature.
    
    
           free
    
               Number of blocks within the pool that are not allocated.
    
    
           guid
    
               A unique identifier for the pool.
    
    
           health
    
               The current health of the pool. Health  can  be  ONLINE,  DEGRADED,
               UNAVAIL, or SUSPENDED.
    
    
           size
    
               Total size of the storage pool.
    
    
    
           These  space usage properties report actual physical space available to
           the storage pool. The physical space can be different  from  the  total
           amount  of  space  that  any  contained  datasets can actually use. The
           amount of space used in a raidz configuration depends on the character-
           istics  of the data being written. In addition, ZFS reserves some space
           for internal accounting that the zfs(1M) command  takes  into  account,
           but  the  zpool  command  does  not. For non-full pools of a reasonable
           size, these effects should be invisible. For small pools, or pools that
           are close to being completely full, these discrepancies may become more
           noticeable.
    
    
           The following property can be set at creation time and import time:
    
           altroot
    
               Alternate root directory. If set, this directory  is  prepended  to
               any  mount  points within the pool. This can be used when examining
               an unknown pool where the mount points cannot be trusted, or in  an
               alternate  boot environment, where the typical paths are not valid.
               altroot is not a persistent property. It is valid  only  while  the
               system  is  up.  Setting  altroot defaults to using cachefile=none,
               though this may be overridden using an explicit setting.
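
               For example, an unknown pool can be examined under an
               alternate root so that its mount points cannot affect paths on
               the running system:

                 # zpool import -R /mnt pool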
    
    
    
           The following property can be set at import time:
    
           readonly=on | off
    
               Controls whether the pool can be modified. When enabled,  any  syn-
               chronous  data that exists only in the intent log is not accessible
               until the pool is imported in read-write mode.
    
               Importing a pool in read-only mode has the following limitations:
    
                   o      Attempts to set additional pool  properties  during  the
                          import are ignored.
    
                   o      All file system mounts are converted to include
                          the read-only (ro) mount option.

               A pool that has been imported in read-only mode can be
               restored to read-write mode by exporting and importing the
               pool.
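
               For example, a pool can be imported read-only and then
               restored to read-write mode:

                 # zpool import -o readonly=on pool
                 # zpool export pool
                 # zpool import pool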
    
    
    
           The  following  properties can be set at creation time and import time,
           and later changed with the zpool set command:
    
           autoexpand=on | off
    
               Controls automatic pool expansion when the underlying LUN is grown.
               If set to on, the pool will be resized according to the size of the
               expanded device. If the device is part of a mirror  or  raidz  then
               all  devices within that mirror/raidz group must be expanded before
               the new space is made available to the pool. The  default  behavior
               is off. This property can also be referred to by its shortened col-
               umn name, expand.
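
               For example, the property can be enabled and then verified on
               an existing pool:

                 # zpool set autoexpand=on pool
                 # zpool get autoexpand pool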
    
    
           autoreplace=on | off
    
               Controls automatic  device  replacement.  If  set  to  off,  device
               replacement  must  be  initiated  by the administrator by using the
               zpool replace command. If set to on, any new device, found  in  the
               same  physical location as a device that previously belonged to the
               pool, is automatically formatted and replaced. The default behavior
               is off. This property can also be referred to by its shortened col-
               umn name, replace.
    
    
           bootfs=pool/dataset
    
               Identifies the default bootable dataset for  the  root  pool.  This
               property  is  expected  to  be  set  mainly by the installation and
               upgrade programs.
    
    
           cachefile=path | none
    
               Controls the location of where the pool  configuration  is  cached.
               Discovering  all  pools on system startup requires a cached copy of
               the configuration data that is stored on the root file system.  All
               pools  in  this  cache  are  automatically imported when the system
               boots. Some environments, such as install and clustering,  need  to
               cache  this  information  in a different location so that pools are
               not automatically imported. Setting this property caches  the  pool
               configuration  in  a  different location that can later be imported
               with zpool import -c. Setting it to the special value none  creates
               a  temporary  pool  that  is never cached, and the special value ''
               (empty string) uses the default location.
    
               Multiple pools can share the same cache file.  Because  the  kernel
               destroys  and recreates this file when pools are added and removed,
               care should be taken when attempting to access this file. When  the
               last  pool  using a cachefile is exported or destroyed, the file is
               removed.
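
               For example, a pool can be created with an alternate cache
               file (path illustrative) so that it is not automatically
               imported at boot, and then imported explicitly:

                 # zpool create -o cachefile=/etc/cluster/zpool.cache pool c0d0
                 # zpool import -c /etc/cluster/zpool.cache pool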
    
    
           dedupditto=number
    
               Sets a threshold for number of copies. If the reference count for a
               deduplicated block goes above this threshold, another ditto copy of
               the block is stored automatically. The default value is 0.
    
    
           delegation=on | off
    
               Controls whether a non-privileged user is granted access  based  on
               the  dataset  permissions defined on the dataset. The default value
               is on. See zfs(1M) for more information on ZFS  delegated  adminis-
               tration.
    
    
           failmode=wait | continue | panic
    
               Controls  the  system  behavior  in  the event of catastrophic pool
               failure. This condition is typically a result of a loss of  connec-
               tivity  to  the  underlying  storage  device(s) or a failure of all
               devices within the pool. The behavior of such an  event  is  deter-
               mined as follows:
    
               wait
    
                   Blocks all I/O access to the pool until the device connectivity
                   is recovered and the errors are cleared. A pool remains in  the
                   wait  state  until  the  device  issue is resolved. This is the
                   default behavior.
    
    
               continue
    
                   Returns EIO to any new write I/O requests but allows  reads  to
                   any  of  the remaining healthy devices. Any write requests that
                   have yet to be committed to disk would be blocked.
    
    
               panic
    
                   Prints out a message to the  console  and  generates  a  system
                   crash dump.
    
    
    
           listshares=on | off
    
               Controls  whether  share information in this pool is displayed with
               the zfs list command. The default value is off.
    
    
           listsnapshots=on | off
    
               Controls whether information about snapshots associated  with  this
               pool  is  output  when  zfs  list is run without the -t option. The
               default value is off. This property can also be referred to by  its
               shortened column name, listsnaps.
    
    
           version=version
    
               The current on-disk version of the pool. This can be increased, but
               never decreased. The preferred method of updating pools is with the
               zpool upgrade command, though this property can be used when a spe-
               cific version is needed for backwards compatibility. This  property
               can  be  any  number  between 1 and the current version reported by
               zpool upgrade -v.
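
               For example, a pool can be created at a specific on-disk
               version (version number illustrative) for backwards
               compatibility:

                 # zpool create -o version=28 pool c0d0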
    
    
       Subcommands
           All subcommands that modify state are logged persistently to  the  pool
           in their original form.
    
    
           The  zpool  command  provides subcommands to create and destroy storage
           pools, add capacity to storage pools, and provide information about the
           storage pools. The following subcommands are supported:
    
           zpool -?
    
               Displays a help message.
    
    
           zpool help command | help | property property-name
    
               Displays  zpool  command usage. You can display help for a specific
               command or property. If you display help for a specific command  or
               property,  the command syntax or available property values are dis-
               played. Using zpool help without any arguments displays a  complete
               list of zpool commands.
    
    
           zpool help -l properties
    
               Displays zpool property information, including whether each
               property is editable and what its possible values are. If you
               display help for a specific subcommand or property, the
               command syntax or property values are displayed. Using zpool
               help without any arguments displays a complete list of zpool
               subcommands.
    
    
           zpool add [-f] [-n [-l]] pool vdev ...
    
               Adds  the  specified  virtual  devices  to the given pool. The vdev
               specification is described in the "Virtual  Devices"  section.  The
               behavior  of  the  -f  option,  and the device checks performed are
               described in the zpool create subcommand.
    
               -f
    
                   Forces use of vdevs, even if they appear in use  or  specify  a
                   conflicting  replication level. Not all devices can be overrid-
                   den in this manner.
    
    
               -n
    
                   Displays the configuration that would be used without
                   actually adding the vdevs. The actual addition can still
                   fail due to insufficient privileges or device sharing.
    
    
               -l
    
                   If possible, have  -n  display  the  configuration  in  current
                   /dev/chassis location form.
    
               Do  not  add a disk that is currently configured as a quorum device
               to a ZFS storage pool. After a disk is in the pool, that  disk  can
               then be configured as a quorum device.
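
               For example, the following commands (device names
               illustrative) first preview and then perform the addition of a
               mirrored vdev to an existing pool:

                 # zpool add -n tank mirror c2t0d0 c2t1d0
                 # zpool add tank mirror c2t0d0 c2t1d0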
    
    
           zpool attach [-f] pool device new_device
    
               Attaches new_device to an existing zpool device. The existing
               device cannot be part of a raidz configuration. If device is
               not currently part of a mirrored configuration, it is
               automatically transformed into a two-way mirror of device and
               new_device. If device is part of a two-way mirror, attaching
               new_device creates a three-way mirror, and so on. In either
               case, new_device begins to resilver immediately.
    
               -f
    
                   Forces use of new_device, even if it appears to be in use.
                   Not all devices can be overridden in this manner.
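
               For example, the following command (device names illustrative)
               converts a single-disk pool into a two-way mirror:

                 # zpool attach tank c0t0d0 c0t1d0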
    
    
    
           zpool clear [-nF [-f]] pool [device] ...
    
               Clears device errors in a pool. If no arguments are specified,  all
               device  errors  within the pool are cleared. If one or more devices
               is specified, only  those  errors  associated  with  the  specified
               device or devices are cleared.
    
               -F
    
                   Initiates  recovery  mode  for  an unopenable pool. Attempts to
                   discard the last few transactions in the pool to return  it  to
                   an  openable  state.  Not all damaged pools can be recovered by
                   using this option. If successful, the data from  the  discarded
                   transactions is irretrievably lost.
    
    
               -n
    
                   Used in combination with the -F flag. Checks whether
                   discarding transactions would make the pool openable, but
                   does not actually discard any transactions.
    
    
               -f
    
                   This  is a special pool recovery option that can be used if the
                   fmadm acquit or fmadm repair commands fail to  clear  a  pool's
                   faults.  If  the system reboots, FMA replays the pool faults so
                   you will need to resolve the  FMA  faults  after  the  pool  is
                   recovered.
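
               For example, the following commands first check whether an
               unopenable pool could be recovered by discarding transactions,
               and then perform the recovery:

                 # zpool clear -nF tank
                 # zpool clear -F tank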
    
    
    
           zpool create [-f] [-n [-l]] [-B] [-N] [-o property=value] ... [-O file-
           system-property=value] ... [-m mountpoint] [-R root] [-t tmppool] pool
           vdev ...
    
               Creates a new storage pool containing the virtual devices specified
               on the command line. The pool name must begin with  a  letter,  and
               can  contain  alphanumeric  characters,  as well as underscore (_),
               dash (-), colon (:), space ( ),  and period  (.).  The  pool  names
               mirror,  raidz, spare, and log are reserved, as are names beginning
               with the pattern c[0-9]. The vdev specification is described in the
               "Virtual Devices" section.
    
               The command verifies that each device specified is accessible
               and not currently in use by another subsystem. Some uses, such
               as being currently mounted or specified as the dedicated dump
               device, prevent a device from ever being used by ZFS. Other
               uses, such as having a preexisting UFS file system, can be
               overridden with the -f option.
    
               The command also checks that the replication strategy for the  pool
               is  consistent.  An  attempt to combine redundant and non-redundant
               storage in a single pool, or to mix disks and files, results in  an
               error  unless -f is specified. The use of differently sized devices
               within a single raidz or mirror group is also flagged as  an  error
               unless -f is specified.
    
               Unless  the  -R  option  is  specified,  the default mount point is
               /pool. The mount point must not exist or must be empty, or else the
               root  dataset cannot be mounted. This can be overridden with the -m
               option.
    
               -B
    
                   When operating on a whole disk device, creates the boot  parti-
                   tion,  if  one is required to boot from EFI (GPT) labeled disks
                   on the platform. The -B option has no effect  on  devices  that
                   are not whole disks.
    
    
               -f
    
                   Forces  use  of  vdevs, even if they appear in use or specify a
                   conflicting replication level. Not all devices can be  overrid-
                   den in this manner.
    
    
               -l
    
                   If  possible,  have  -n  display  the  configuration in current
                   /dev/chassis location form.
    
    
               -n
    
                   Displays the configuration that would be used without  actually
                   creating  the pool. The actual pool creation can still fail due
                   to insufficient privileges or if a device is currently in use.
    
    
               -N
    
                   Creates the pool without mounting or sharing the newly  created
                   root file system of the pool.
    
    
               -o property=value [-o property=value] ...
    
                   Sets  the  given  pool properties. See the "Properties" section
                   for a list of valid properties that can be set.
    
    
               -O file-system-property=value
               [-O file-system-property=value] ...
    
                   Sets the given properties for the pool's top-level file system.
                   See  the  "Properties"  section  of zfs(1M) for a list of valid
                   properties that can be set.
    
    
               -R root
    
                   Equivalent to -o cachefile=none,altroot=root.
    
    
               -m mountpoint
    
                   Sets the mount point for the pool's top-level file system.  The
                   default  mount  point  is  /pool  or altroot/pool if altroot is
                   specified. The mount point must be an absolute path, legacy, or
                   none.  For  more  information  on  dataset  mount  points,  see
                   zfs(1M).
    
    
               -t tmppool
    
                   Use the specified temporary pool name for the  initial  import.
                   Implies -o cachefile=none.
    
    
    
           zpool destroy [-f] pool
    
               Destroys the given pool, freeing up any devices for other use. This
               command tries to unmount any active datasets before destroying  the
               pool.
    
               -f
    
                   Forces  any  active  datasets  contained  within the pool to be
                   unmounted.
    
    
    
           zpool detach pool device
    
                Detaches a device or a spare from a mirrored storage pool. A
                spare can also be detached from a RAID-Z storage pool if an
                existing device was physically replaced, and an existing
                device in a RAID-Z storage pool can be detached if it was
                replaced by a spare. The operation is refused if there are
                no other valid replicas of the data.
    
    
           zpool export [-f] pool ...
    
               Exports  the given pools from the system. All devices are marked as
               exported, but are still considered in use by other subsystems.  The
               devices can be moved between systems (even those of different endi-
               anness) and imported as long as a sufficient number of devices  are
               present.
    
               Before  exporting  the  pool,  all  datasets  within  the  pool are
               unmounted.
    
               For pools to be portable, you must give  the  zpool  command  whole
               disks, not just slices, so that ZFS can label the disks with porta-
               ble EFI labels. Otherwise, disk drivers on platforms  of  different
               endianness will not recognize the disks.
    
               -f
    
                    Forcefully unmounts all datasets by using the unmount -f
                    command, and forcefully exports the pool.
    
    
    
           zpool get all | property[,...] pool ...
    
               Retrieves the given list of properties (or all properties if all is
               used) for the specified storage pool(s). These properties are  dis-
               played with the following fields:
    
                        name          Name of storage pool
                        property      Property name
                        value         Property value
                        source        Property source, either 'default' or
                                      'local'
    
    
               See  the "Properties" section for more information on the available
               pool properties.
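                For example, to retrieve a few specific properties for a
                pool (pool name illustrative):

```shell
# Display selected properties, one line per property.
# zpool get size,capacity,health tank
```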
    
    
           zpool history [-il] [pool] ...
    
               Displays the command history of the specified pools or all pools if
               no pool is specified.
    
               -i
    
                   Displays  internally logged ZFS events in addition to user ini-
                   tiated events.
    
    
               -l
    
                    Displays log records in long format, which, in addition
                    to the standard format, includes the user name, the
                    hostname, and the zone in which the operation was
                    performed.
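                For example (pool name illustrative):

```shell
# Show both user-initiated and internal events for tank, in long format.
# zpool history -il tank
```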
    
    
    
           zpool import [-d path ... ] [-D]
           zpool import [-d path ... | -c cachefile] [-F [-n]] pool | id
    
               Lists pools available to import. If the -d option is not specified,
               this command searches for devices in /dev/dsk. The -d option can be
               specified multiple times, and all directories and device paths  are
               searched.  If  the  device  appears to be part of an exported pool,
               this command displays a summary of the pool with the  name  of  the
               pool,  a numeric identifier, as well as the vdev layout and current
                health of the device for each device or file. Pools that
                were previously destroyed with the zpool destroy command are
                not listed unless the -D option is specified.
    
               The numeric identifier is unique, and can be used  instead  of  the
               pool  name when multiple exported pools of the same name are avail-
               able.
    
               -c cachefile
    
                   Reads configuration from the given cachefile that  was  created
                   with  the  "cachefile"  pool  property.  This cachefile is used
                   instead of searching for devices.
    
    
               -d path
    
                    Searches for devices or files in path, where path can be
                    a directory or a device path. The -d option can be
                    specified multiple times.
    
    
               -D
    
                   Lists destroyed pools only.
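                For example, to list importable pools, including any that
                were destroyed (the extra search directory is illustrative):

```shell
# Search additional directories of devices or files for pool members.
# zpool import -d /dev/dsk -d /zfsfiles

# List destroyed pools only.
# zpool import -D
```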
    
    
    
           zpool import [-o mntopts] [ -o property=value] ... [-d path ... | -c
           cachefile] [-D] [-f] [-m] [-N] [-R root] [-F [-n [-l]]] -a
    
               Imports  all pools found in the search directories or device paths.
               Identical to the previous command, except that  all  pools  with  a
               sufficient  number  of  devices  available are imported. Pools that
                were previously destroyed with the zpool destroy command are
                not imported unless the -D option is specified.
    
               -o mntopts
    
                   Comma-separated  list  of  mount  options  to use when mounting
                   datasets within the pool. See  zfs(1M)  for  a  description  of
                   dataset properties and mount options.
    
    
               -o property=value
    
                   Sets  the  specified  property  on  the  imported pool. See the
                   "Properties" section for more information on the available pool
                   properties.
    
    
               -c cachefile
    
                   Reads  configuration  from the given cachefile that was created
                   with the "cachefile" pool  property.  This  cachefile  is  used
                   instead of searching for devices.
    
    
               -d path
    
                   Searches  for  devices  or  files in path. The -d option can be
                   specified multiple times. This option is incompatible with  the
                   -c option.
    
    
               -D
    
                   Imports destroyed pools only. The -f option is also required.
    
    
               -f
    
                   Forces  import,  even  if  the  pool  appears to be potentially
                   active.
    
    
               -F
    
                   Recovery mode for a non-importable pool. Attempt to return  the
                   pool to an importable state by discarding the last few transac-
                   tions. Not all damaged pools can be  recovered  by  using  this
                   option. If successful, the data from the discarded transactions
                   is irretrievably lost. This option is ignored if  the  pool  is
                   importable or already imported.
    
    
               -a
    
                   Searches for and imports all pools found.
    
    
               -m
    
                   Allows a pool to import when a log device is missing.
    
    
               -R root
    
                   Sets the cachefile property to none and the altroot property to
                   root.
    
    
               -N
    
                   Imports the pool without mounting or sharing any file systems.
    
    
               -n
    
                   Used with the -F recovery option.  Determines  whether  a  non-
                   importable  pool  can  be  made  importable again, but does not
                   actually perform the pool recovery. For more details about pool
                   recovery mode, see the -F option, above.
    
    
               -l
    
                   If  possible, have -n display information in current /dev/chas-
                   sis location form.
    
    
    
           zpool import [-o mntopts] [ -o property=value] ... [-d path ... | -c
           cachefile] [-D] [-f] [-m] [-N] [-R root] [-F [-n [-l]]] [-t tmppool]
           pool | id [newpool]
    
               Imports a specific pool. A pool can be identified by  its  name  or
               the  numeric  identifier.  If  newpool  is  specified,  the pool is
               imported using  the  persistent  name  newpool.  Otherwise,  it  is
               imported  with  the same name as its exported name. Do not import a
               root pool with a new name. Otherwise, the system might not boot.
    
               If a device is removed from a system without running  zpool  export
               first,  the  device  appears  as  potentially  active. It cannot be
               determined if this was a failed export, or whether  the  device  is
               really  in  use  from another host. To import a pool in this state,
               the -f option is required.
    
               -o mntopts
    
                   Comma-separated list of mount  options  to  use  when  mounting
                   datasets  within  the  pool.  See  zfs(1M) for a description of
                   dataset properties and mount options.
    
    
               -o property=value
    
                   Sets the specified property  on  the  imported  pool.  See  the
                   "Properties" section for more information on the available pool
                   properties.
    
    
               -c cachefile
    
                   Reads configuration from the given cachefile that  was  created
                   with  the  cachefile  pool  property.  This  cachefile  is used
                   instead of searching for devices.
    
    
               -d path
    
                   Searches for devices or files in path. The  -d  option  can  be
                   specified  multiple times. This option is incompatible with the
                   -c option.
    
    
               -D
    
                    Imports a destroyed pool. The -f option is also required.
    
    
               -f
    
                   Forces import, even if  the  pool  appears  to  be  potentially
                   active.
    
    
               -F
    
                   Recovery  mode for a non-importable pool. Attempt to return the
                   pool to an importable state by discarding the last few transac-
                   tions.  Not  all  damaged  pools can be recovered by using this
                   option. If successful, the data from the discarded transactions
                   is  irretrievably  lost.  This option is ignored if the pool is
                   importable or already imported.
    
    
               -R root
    
                   Sets the cachefile property to none and the altroot property to
                   root.
    
    
               -N
    
                   Imports the pool without mounting any file systems.
    
    
               -n
    
                   Used  with  the  -F  recovery option. Determines whether a non-
                   importable pool can be made  importable  again,  but  does  not
                   actually perform the pool recovery. For more details about pool
                   recovery mode, see the -F option, above.
    
    
               -l
    
                   If possible, have -n display information in current  /dev/chas-
                   sis location form.
    
    
               -m
    
                   Allows a pool to import when a log device is missing.
    
    
               -t tmppool
    
                   Use  the  specified temporary pool name for the initial import.
                   Implies -o cachefile=none.
    
    
    
           zpool iostat [-T d|u] [-v [-l]] [pool] ... [interval[count]]
    
               Displays I/O statistics for the given pools. When given  an  inter-
               val, the statistics are printed every interval seconds until Ctrl-C
                is pressed. If no pools are specified, statistics for every
                pool in the system are shown. If count is specified, the
                command exits after count reports are printed.
    
               -T d|u
    
                   Display a time stamp.
    
                   Specify d for standard date format. See date(1). Specify u  for
                   a  printed  representation  of  the  internal representation of
                   time. See time(2).
    
    
               -v
    
                   Verbose statistics. Reports  usage  statistics  for  individual
                   vdevs within the pool, in addition to the pool-wide statistics.
    
    
               -l
    
                   If  possible,  have  -v  display  vdev  statistics  in  current
                   /dev/chassis location form.
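                For example (pool name illustrative):

```shell
# Per-vdev statistics for tank, every 5 seconds, 3 reports, with a
# standard date time stamp.
# zpool iostat -T d -v tank 5 3
```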
    
    
    
           zpool list [-H] [-o props[,...]] [-T d|u] [pool] ... [interval[count]]
    
               Lists the given pools along with a health status and  space  usage.
               When given no arguments, all pools in the system are listed.
    
               When  given  an  interval, the status and space usage are displayed
               every interval seconds until Ctrl-C is entered. If count is  speci-
               fied, the command exits after count reports are displayed.
    
               -H
    
                   Scripted mode. Do not display headers, and separate fields by a
                   single tab instead of arbitrary space.
    
    
               -o props
    
                   Comma-separated list of properties to display. See the "Proper-
                   ties"  section for a list of valid properties. The default list
                   is name, size, allocated, free, capacity, health, altroot.
    
    
               -T d|u
    
                   Display a time stamp.
    
                   Specify d for standard date format. See date(1). Specify u  for
                   a  printed  representation  of  the  internal representation of
                   time. See time(2).
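                For example, scripted mode output is convenient for
                parsing:

```shell
# Scripted mode: no headers, tab-separated fields, selected properties.
# zpool list -H -o name,size,health
```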
    
    
    
           zpool offline [-t] pool device ...
    
               Takes the specified physical device offline. While  the  device  is
               offline, no attempt is made to read or write to the device.
    
               This command is not applicable to spares or cache devices.
    
               -t
    
                   Temporary.  Upon  reboot, the specified physical device reverts
                   to its previous state.
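                For example (pool and device names are illustrative):

```shell
# Take the device offline only until the next reboot.
# zpool offline -t tank c0t0d0
```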
    
    
    
           zpool online [-e] pool device...
    
               Brings the specified physical device online.
    
               This command is not applicable to spares or cache devices.
    
               -e
    
                    Expand the device to use all available space. If the
                    device is part of a mirror or raidz, then all devices
                    must be expanded before the new space becomes available
                    to the pool.
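                For example (pool and device names are illustrative):

```shell
# Bring the device back online and expand it to use all available space.
# zpool online -e tank c0t0d0
```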
    
    
    
           zpool remove pool device ...
    
               Removes the specified device from the pool. This command  currently
               only  supports  removing hot spares, cache, and log devices. A mir-
               rored log device can be removed by specifying the top-level  mirror
               for the log. Non-log devices that are part of a mirrored configura-
               tion can be removed using the zpool detach  command.  Non-redundant
               and raidz devices cannot be removed from a pool.
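                For example, removing a hot spare (device name
                illustrative):

```shell
# Remove a hot spare from the pool.
# zpool remove tank c1t3d0
```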
    
    
           zpool replace [-f] pool old_device [new_device]
    
               Replaces  old_device with new_device. This is equivalent to attach-
               ing new_device, waiting for it  to  resilver,  and  then  detaching
               old_device.
    
               The size of new_device must be greater than or equal to the minimum
               size of all the devices in a mirror or raidz configuration.
    
               new_device is required if the pool is not redundant. If  new_device
               is  not specified, it defaults to old_device. This form of replace-
               ment is useful after an existing disk has failed and has been phys-
               ically  replaced.  In  this  case,  the  new disk may have the same
               /dev/dsk path as the old device, even though it is actually a  dif-
               ferent disk. ZFS recognizes this.
    
               In  zpool  status  output,  the  old_device is shown under the word
               replacing with the string /old appended to it.  Once  the  resilver
               completes,  both the replacing and the old_device are automatically
               removed. If the new device fails before the resilver completes  and
               a  third device is installed in its place, then both failed devices
               will show up with /old  appended,  and  the  resilver  starts  over
               again.  After the resilver completes, both /old devices are removed
               along with the word replacing.
    
               -f
    
                    Forces use of new_device, even if it appears to be in
                    use. Not all devices can be overridden in this manner.
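                For example (pool and device names are illustrative):

```shell
# Replace one disk with another.
# zpool replace tank c0t0d0 c1t0d0

# After physically replacing a failed disk in the same slot, no
# new_device argument is needed.
# zpool replace tank c0t0d0
```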
    
    
    
           zpool scrub [-s] pool ...
    
               Begins  a scrub. The scrub examines all data in the specified pools
               to verify that it checksums correctly. For  replicated  (mirror  or
               raidz)  devices,  ZFS  automatically  repairs any damage discovered
               during the scrub. The zpool status command reports the progress  of
               the scrub and summarizes the results of the scrub upon completion.
    
               Scrubbing  and resilvering are very similar operations. The differ-
               ence is that resilvering only examines data that ZFS  knows  to  be
               out  of  date (for example, when attaching a new device to a mirror
               or replacing an existing device), whereas  scrubbing  examines  all
               data to discover silent errors due to hardware faults or disk fail-
               ure.
    
               Because scrubbing and resilvering are I/O-intensive operations, ZFS
               allows  only  one  at  a time. If a scrub is already in progress, a
               subsequent zpool scrub returns an error, with  the  advice  to  use
               zpool  scrub  -s  to  cancel the current scrub. If a resilver is in
               progress, ZFS does not allow a scrub to be started until the resil-
               ver completes.
    
               -s
    
                   Stop scrubbing.
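                For example (pool name illustrative):

```shell
# Start a scrub and watch its progress; stop it if it must be deferred.
# zpool scrub tank
# zpool status tank
# zpool scrub -s tank
```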
    
    
    
           zpool set property=value pool
    
               Sets the given property on the specified pool. See the "Properties"
               section for more information on what  properties  can  be  set  and
               acceptable values.
    
    
           zpool split [-n [-l]] [-R altroot] [-o mntopts] [-o property=value]
           pool newpool [device ...]
    
               Splits off one disk from each mirrored top-level vdev in a pool and
               creates a new pool from the split-off disks. The original pool must
               be made up of one or more mirrors and must not be in the process of
               resilvering.  The  split subcommand chooses the last device in each
               mirror vdev unless overridden by a device specification on the com-
               mand line.
    
               When   using  a  device  argument,  split  includes  the  specified
               device(s) in a new pool and, should any devices remain unspecified,
               assigns  the  last  device  in each mirror vdev to that pool, as it
               does normally. If you are uncertain about the outcome  of  a  split
               command,  use the -n ("dry-run") option to ensure your command will
               have the effect you intend.
    
               -n
    
                   Displays the configuration that would be created without  actu-
                   ally splitting the pool. The actual pool split could still fail
                   due to insufficient privileges or device status.
    
    
               -l
    
                   If possible, have  -n  display  the  configuration  in  current
                   /dev/chassis location form.
    
    
               -R altroot
    
                   Automatically  import  the  newly created pool after splitting,
                   using the specified altroot parameter for the new pool's alter-
                   nate root. See the altroot description in the "Properties" sec-
                   tion, above.
    
    
               -o mntopts
    
                   Comma-separated list of mount  options  to  use  when  mounting
                   datasets  within  the  pool.  See  zfs(1M) for a description of
                   dataset properties and mount options. Valid only in conjunction
                   with the -R option.
    
    
               -o property=value
    
                   Sets  the  specified property on the new pool. See the "Proper-
                   ties" section, above, for more  information  on  the  available
                   pool properties.
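                For example (pool names and the alternate root are
                illustrative):

```shell
# Dry run: show which devices would be split off into the new pool.
# zpool split -n tank tank2

# Split the pool and import the new pool under an alternate root.
# zpool split -R /mnt tank tank2
```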
    
    
    
           zpool status [-l] [-v] [-x] [-T d|u] [pool] ...  [interval[count]]
    
               Displays the detailed health status for the given pools. If no pool
               is specified, then the status of each pool in the  system  is  dis-
               played.  For  more  information  on pool and device health, see the
               "Device Failure and Recovery" section.
    
               When given an interval, the status and space  usage  are  displayed
               every  interval seconds until Ctrl-C is entered. If count is speci-
               fied, the command exits after count reports are displayed.
    
               If a scrub or resilver is in progress,  this  command  reports  the
               percentage done and the estimated time to completion. Both of these
               are only approximate, because the amount of data in  the  pool  and
               the other workloads on the system can change.
    
               -l
    
                   If  possible, display vdev status in current /dev/chassis loca-
                   tion form.
    
    
               -x
    
                   Display status only for pools that are exhibiting errors or are
                   otherwise unavailable.
    
    
               -v
    
                   Displays  verbose  data  error information, printing out a com-
                   plete list of all data errors  since  the  last  complete  pool
                   scrub.
    
    
               -T d|u
    
                   Display a time stamp.
    
                   Specify  d for standard date format. See date(1). Specify u for
                   a printed representation  of  the  internal  representation  of
                   time. See time(2).
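                For example:

```shell
# Show status only for pools exhibiting errors or otherwise unavailable,
# including a complete list of data errors since the last complete scrub.
# zpool status -xv
```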
    
    
    
           zpool upgrade
    
               Identifies  a  pool's  on-disk  version, which determines available
               pool features in the currently running software  release.  You  can
               continue to use older pool versions, but some features might not be
               available. A pool can be upgraded by using  the  zpool  upgrade  -a
               command.  You  will not be able to access a pool of a later version
               on a system that runs an earlier software version.
    
    
           zpool upgrade -v
    
               Displays ZFS pool versions supported by the current  software.  The
               current  ZFS  pool versions and all previous supported versions are
               displayed, along with an explanation of the features provided  with
               each version.
    
    
           zpool upgrade [-V version] -a | pool ...
    
                Upgrades the specified pool to the latest on-disk version. A
                pool that the plain zpool upgrade command reports as
                out-of-date can be upgraded in this way, or all such pools
                can be upgraded at once with the zpool upgrade -a command. A
                pool that is upgraded will not be accessible on a system
                that runs an earlier software release.
    
               -a
    
                   Upgrades all pools.
    
    
               -V version
    
                   Upgrade to the specified version, which must be higher than the
                   current version. If the -V flag is not specified, the  pool  is
                   upgraded to the most recent version.
    
    
    
           zpool monitor -t provider [-T d|u] [[-p] -o field[,...]] [pool] ...
           [interval [count]]
    
               Displays status or progress information for the given pools. If  no
               pool is entered, information for all pools is displayed. When given
               an interval, the information  is  printed  every  interval  seconds
               until  Ctrl-C  is pressed. If count is specified, the command exits
               after count reports are printed.
    
                -o field[,...]      Display only the selected field(s).
    
    
               -t provider         Display data from the listed providers. Current
                                   providers are send, receive (or recv), destroy,
                                   scrub, and  resilver.  An  up-to-date  list  of
                                   providers  is  available from 'zpool help moni-
                                   tor'.
    
    
               -T d|u              Display a time stamp. Specify  d  for  standard
                                   date  format.  See  date(1).  Specify  u  for a
                                   printed representation of the  internal  repre-
                                   sentation of time. See time(2).
    
    
    
       Display Fields
            The fields differ from provider to provider. If a field is
            selected that is not supported by a provider, an error is
            returned.
    
           DONE        Amount of data completed or processed so far.
    
    
           OTHER       Provider dependent. Provides extra information such as  the
                       current  item  being  processed or the current state of the
                       task. For example, in a zfs send operation this value might
                       reflect  the individual dataset or snapshot currently being
                       sent. The specific values reported  as  OTHER  are  not  an
                       interface and may change without notice.
    
    
           PCTDONE     Percentage of data processed.
    
    
            POOL        Pool that the information was retrieved from.
    
    
           PROVIDER    Task  providing  the  information.  One  of  send, receive,
                       destroy, scrub, or resilver.
    
    
            SPEED       Units per second. Usually bytes, but dependent on
                        the unit the data provider uses.
    
    
           STRTTIME    Time the provider started on the displayed task.
    
    
           TAG         A  TAG  disambiguates whole operations. It is unique at any
                       one time, but values can repeat in  subsequent  operations.
                       For  instance,  two simultaneous sends would have different
                       TAGs even if sending the same dataset.
    
    
            TIMELEFT    An estimate of the time remaining until the task
                        completes, calculated from the rate at which the
                        data is being processed.
    
    
           TIMESTMP    Time the monitored data snapshot was taken.
    
    
           TOTAL       Estimate of total amount of data to be processed.
    
    
       Parseable Output Format
           The  "zpool  monitor" command provides a -p option that displays output
           in a machine-parsable format. The output format is one or more lines of
           colon (:) delimited fields. Output includes only those fields requested
            by means of the -o option, in the order requested. Note that the
            -o all option, which displays all the fields, cannot be used
            with the parseable output option.
    
    
           When you request multiple fields, any literal colon characters are
           escaped by a backslash (\:) before being output. Similarly, literal
           backslash characters are also escaped (\\). This escape format can
           be parsed by using the shell read(1) function with the environment
           variable set as IFS=:. Note that escaping is not done when you
           request only a single field.
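           The read-based parsing described above can be sketched as follows.
           The sample lines are hard-coded stand-ins for live zpool monitor
           output, since actual values depend on the system; the field names
           match the example later in this page.

```shell
# Parse colon-delimited 'zpool monitor -p' style output with the shell's
# read builtin and IFS=:. read is used WITHOUT the -r flag so that a
# backslash-escaped colon (\:) inside a field is not treated as a field
# separator. On a live system you would pipe the output of
# 'zpool monitor -p -o pool,pctdone,tag -t send' instead of printf.
printf '%s\n' \
  'poolA:20.4:poolA/fs2/team2@fs2_all' \
  'poolC:50.0:poolC/fs2/team1@fs2_all' |
while IFS=: read pool pctdone tag; do
    # Each requested field lands in its own shell variable.
    echo "pool=$pool done=${pctdone}% tag=$tag"
done
```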
    
    EXAMPLES
           Example 1 Creating a RAID-Z Storage Pool
    
    
           The following command creates a pool with a single raidz root vdev that
           consists of six disks.
    
    
             # zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0
    
    
    
           Example 2 Creating a Mirrored Storage Pool
    
    
           The following command creates a pool with two mirrors, where each  mir-
           ror contains two disks.
    
    
             # zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0
    
    
    
    
           Alternatively,  whole  disks  can be specified using /dev/chassis paths
           describing the disk's current location.
    
    
             # zpool create tank \
                 mirror \
                     /dev/chassis/RACK29.U01-04/DISK_00/disk \
                     /dev/chassis/RACK29.U05-08/DISK_00/disk \
                 mirror \
                     /dev/chassis/RACK29.U01-04/DISK_01/disk \
                     /dev/chassis/RACK29.U05-08/DISK_01/disk
    
    
    
           Example 3 Adding a Mirror to a ZFS Storage Pool
    
    
           The following command adds two mirrored disks to the pool tank,  assum-
           ing  the  pool  is  already  made up of two-way mirrors. The additional
           space is immediately available to any datasets within the pool.
    
    
             # zpool add tank mirror c1t0d0 c1t1d0
    
    
    
           Example 4 Listing Available ZFS Storage Pools
    
    
           The following command lists all available pools on the system.
    
    
             # zpool list
             NAME   SIZE  ALLOC  FREE  CAP  DEDUP  HEALTH  ALTROOT
             pool   278G  4.19G  274G   1%  1.00x  ONLINE  -
             rpool  278G  78.2G  200G  28%  1.00x  ONLINE  -
    
    
    
           Example 5 Listing All Properties for a Pool
    
    
           The following command lists all the properties for a pool.
    
    
             % zpool get all pool
             NAME  PROPERTY       VALUE                SOURCE
             pool  allocated      4.19G                -
             pool  altroot        -                    default
             pool  autoexpand     off                  default
             pool  autoreplace    off                  default
             pool  bootfs         -                    default
             pool  cachefile      -                    default
             pool  capacity       1%                   -
             pool  dedupditto     0                    default
             pool  dedupratio     1.00x                -
             pool  delegation     on                   default
             pool  failmode       wait                 default
             pool  free           274G                 -
             pool  guid           1907687796174423256  -
             pool  health         ONLINE               -
             pool  listshares     off                  local
             pool  listsnapshots  off                  default
             pool  readonly       off                  -
             pool  size           278G                 -
             pool  version        34                   default
    
    
    
           Example 6 Destroying a ZFS Storage Pool
    
    
           The following command destroys the pool "tank" and  any  datasets  con-
           tained within.
    
    
             # zpool destroy -f tank
    
    
    
           Example 7 Exporting a ZFS Storage Pool
    
    
           The following command exports the devices in pool tank so that they can
           be relocated or later imported.
    
    
             # zpool export tank
    
    
    
           Example 8 Importing a ZFS Storage Pool
    
    
           The following command displays available pools, and  then  imports  the
           pool "tank" for use on the system.
    
    
    
           The results from this command are similar to the following:
    
    
             # zpool import
               pool: tank
                 id: 7678868315469843843
              state: ONLINE
             action: The pool can be imported using its name or numeric identifier.
             config:
    
                           tank  ONLINE
                       mirror-0  ONLINE
                         c1t2d0  ONLINE
                         c1t3d0  ONLINE
    
             # zpool import tank
    
    
    
           Example 9 Upgrading All ZFS Storage Pools to the Current Version
    
    
           The following command upgrades all ZFS storage pools to the current
           version of the software.
    
    
             # zpool upgrade -a
             This system is currently running ZFS pool version 22.
    
             All pools are formatted using this version.
    
    
    
           Example 10 Managing Hot Spares
    
    
           The following command creates a new pool with an available hot spare:
    
    
             # zpool create tank mirror c0t0d0 c0t1d0 spare c0t2d0
    
    
    
    
           If one of the disks were to fail, the pool  would  be  reduced  to  the
           degraded  state.  The failed device can be replaced using the following
           command:
    
    
             # zpool replace tank c0t0d0 c0t3d0
    
    
    
    
           After the device  has  been  resilvered,  the  spare  is  automatically
           detached  and  is  made  available  should another device fail. The hot
           spare can be permanently removed from the pool using the following com-
           mand:
    
    
             # zpool remove tank c0t2d0
    
    
    
           Example 11 Creating a ZFS Pool with Separate Mirrored Log Devices
    
    
           The  following  command  creates  a ZFS storage pool consisting of two,
           two-way mirrors and mirrored log devices:
    
    
             # zpool create pool mirror c0d0 c1d0 mirror c2d0 c3d0 log mirror \
                c4d0 c5d0
    
    
    
           Example 12 Adding Cache Devices to a ZFS Pool
    
    
           The following command adds two disks for use as cache devices to a  ZFS
           storage pool:
    
    
             # zpool add pool cache c2d0 c3d0
    
    
    
    
           Once added, the cache devices gradually fill with content from main
           memory. Depending on the size of your cache devices, it could take
           over an hour for them to fill. Capacity and reads can be monitored
           using the zpool iostat command as follows:
    
    
             # zpool iostat -v pool 5
    
    
    
           Example 13 Removing a Mirrored Log Device
    
    
           Given the configuration shown immediately below, the following  command
           removes the mirrored log device mirror-2 in the pool tank.
    
    
                pool: tank
               state: ONLINE
               scrub: none requested
             config:
    
                      NAME        STATE     READ WRITE CKSUM
                      tank        ONLINE       0     0     0
                        mirror-0  ONLINE       0     0     0
                          c6t0d0  ONLINE       0     0     0
                          c6t1d0  ONLINE       0     0     0
                        mirror-1  ONLINE       0     0     0
                          c6t2d0  ONLINE       0     0     0
                          c6t3d0  ONLINE       0     0     0
                      logs
                        mirror-2  ONLINE       0     0     0
                          c4t0d0  ONLINE       0     0     0
                          c4t1d0  ONLINE       0     0     0
    
    
    
             # zpool remove tank mirror-2
    
    
    
           Example 14 Recovering a Faulted ZFS Pool
    
    
           If  a  pool is faulted but recoverable, a message indicating this state
           is provided by zpool status if  the  pool  was  cached  (see  cachefile
           above),  or  as  part of the error output from a failed zpool import of
           the pool.
    
    
    
           Recover a cached pool with the zpool clear command:
    
    
             # zpool clear -F data
             Pool data returned to its state as of Thu Jun 07 10:50:35 2012.
             Discarded approximately 29 seconds of transactions.
    
    
    
    
           If the pool configuration was not cached, use  zpool  import  with  the
           recovery mode flag:
    
    
             # zpool import -F data
             Pool data returned to its state as of Thu Jun 07 10:50:35 2012.
             Discarded approximately 29 seconds of transactions.
    
    
    
           Example 15 Importing a ZFS Pool with a Missing Log Device
    
    
           The  following  examples  illustrate  attempts  to import a pool with a
           missing log device. The -m option is used to complete the import opera-
           tion.
    
    
    
           Additional  devices  are  known  to  be part of this pool, though their
           exact configuration cannot be determined.
    
    
             # zpool import tank
             The devices below are missing, use '-m' to import the pool anyway:
                          c5t0d0 [log]
    
             cannot import 'tank': one or more devices is currently unavailable
    
             # zpool import -m tank
             # zpool status tank
                pool: tank
               state: DEGRADED
             status: One or more devices could not be opened.  Sufficient replicas
                     exist for the pool to continue functioning in a degraded state.
             action: Attach the missing device and online it using 'zpool online'.
                 see: http://www.support.oracle.com/msg/ZFS-8000-2Q
                scan: none requested
             config:
    
                      NAME                   STATE     READ WRITE CKSUM
                      tank                   DEGRADED     0     0     0
                        c7t0d0               ONLINE       0     0     0
                      logs
                        1693927398582730352  UNAVAIL      0     0     0  was /dev/dsk/c5t0d0
    
             errors: No known data errors
    
    
    
    
           The following example shows how to import a pool with  a  missing  mir-
           rored log device.
    
    
             # zpool import tank
             The devices below are missing, use '-m' to import the pool anyway:
             mirror-1 [log]
             c5t0d0
             c5t1d0
    
             # zpool import -m tank
    
             # zpool status tank
                pool: tank
               state: DEGRADED
             status: One or more devices could not be opened.  Sufficient replicas
             exist for the pool to continue functioning in a degraded state.
             action: Attach the missing device and online it using 'zpool online'.
                 see: http://www.support.oracle.com/msg/ZFS-8000-2Q
                 scan: none requested
             config:
    
                      NAME                      STATE     READ WRITE CKSUM
                      tank                      DEGRADED     0     0     0
                        c7t0d0                  ONLINE       0     0     0
                      logs
                        mirror-1                UNAVAIL      0     0     0  insufficient replicas
                          46385995713041169     UNAVAIL      0     0     0  was /dev/dsk/c5t0d0
                          13821442324672734438  UNAVAIL      0     0     0  was /dev/dsk/c5t1d0
    
             errors: No known data errors
    
    
    
           Example 16 Importing a Pool By a Specific Path
    
    
           The  following  command imports the pool tank by identifying the pool's
           specific device paths, /dev/dsk/c9t9d9  and  /dev/dsk/c9t9d8,  in  this
           example.
    
    
             # zpool import -d /dev/dsk/c9t9d9s0 /dev/dsk/c9t9d8s0 tank
    
    
    
    
           An existing limitation is that even though this pool is composed of
           whole disks, the command must include the specific device's slice
           identifier.
    
    
           Example 17 Obtaining Parseable Output
    
    
           The  following command is used to obtain parseable output and will pro-
           vide one interval.
    
    
             # zpool monitor -p -o pool,pctdone,other -t send poolA poolC
             poolA:20.4:poolA/fs2/team2@fs2_all
             poolA:0.0:poolA/fs2/team2@all
             poolA:28.6:poolA/fs1/team3@fs1_all
             poolC:33.3:poolC/fs1/team2@fs1_all
             poolC:50.0:poolC/fs2/team1@fs2_all
    
    
    
    EXIT STATUS
           The following exit values are returned:
    
           0
    
               Successful completion.
    
    
           1
    
               An error occurred.
    
    
           2
    
               Invalid command line options were specified.
    
    
    ATTRIBUTES
           See attributes(5) for descriptions of the following attributes:
    
    
    
    
           +-----------------------------+-----------------------------+
           |      ATTRIBUTE TYPE         |      ATTRIBUTE VALUE        |
           +-----------------------------+-----------------------------+
           |Availability                 |system/file-system/zfs       |
           +-----------------------------+-----------------------------+
           |Interface Stability          |Committed                    |
           +-----------------------------+-----------------------------+
    
    SEE ALSO
           ps(1), zfs(1M), attributes(5), SDC(7)
    
    NOTES
           Each ZFS storage pool has an associated process, zpool-poolname,  visi-
           ble  in  such tools as ps(1). A user has no interaction with these pro-
           cesses. See SDC(7).
    
    
    
    SunOS 5.11                        25 Mar 2015                        zpool(1M)
    
