1. Purpose
Applications running on ZFS are served from the RAM-based cache (the ARC) --> applications run faster
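A quick way to see how much RAM the ARC is actually using (standard Solaris kstat counters; values are in bytes):
# kstat -p zfs:0:arcstats:size       # current ARC footprint
# kstat -p zfs:0:arcstats:c_max      # configured ARC ceiling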
2. Check
-- Information for a single pool / mount point
root@app-2 # zpool status u03
  pool: u03
 state: ONLINE
 scrub: none requested
config:
        NAME                                       STATE   READ WRITE CKSUM
        u03                                        ONLINE     0     0     0
          c4t60060E80056F530000006F5300000560d0    ONLINE     0     0     0
errors: No known data errors
-- Check which pools/file systems are mounted: zpool list, zfs list
root@sol10 # zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
sms    412G   509G   411G  /sms
root@sol10 # zpool list
NAME   SIZE  ALLOC   FREE  CAP  HEALTH  ALTROOT
sms    936G   412G   524G  44%  ONLINE  -
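For a one-line health summary across all pools, and to confirm the per-dataset mount state (standard ZFS commands; the pool name sms is taken from the output above):
# zpool status -x                     # prints "all pools are healthy" when nothing is wrong
# zfs get -r mounted,mountpoint sms   # mount state of every dataset in the sms pool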
3. Procedure: mount a new partition, check SAN capacity
-- Scan for the new LUN:
# luxadm -e port
# luxadm -e forcelip /devices/pci@2,600000/SUNW,emlxs@0/fp@0,0:devctl    (luxadm -e forcelip name_of_hba)
# devfsadm -Cv
# echo | format
-- Label/format the new LUN (run on one node only):
# format -e
-- Create the new pool/file system:
# zpool create pool_name c1t0d0 c1t1d0                                   # striped (RAID 0)
# zpool create pool_name mirror c1d0 c2d0 mirror c3d0 c4d0               # mirrored stripe (RAID 1+0)
# zpool create tank raidz c1t0d0 c2t0d0 c3t0d0 c4t0d0 /dev/dsk/c5t0d0    # RAID-Z
From <https://docs.oracle.com/cd/E19253-01/819-5461/gaynr/index.html>

Creating a ZFS pool

We can create a ZFS pool using different kinds of devices:
a. whole disks
b. disk slices
c. files

a. Using whole disks
# echo | format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
       0. c1t0d0 /pci@0,0/pci15ad,1976@10/sd@0,0
       1. c1t1d0 <VMware,-VMware Virtual S-1.0-1.00GB> /pci@0,0/pci15ad,1976@10/sd@1,0
       2. c1t2d0 /pci@0,0/pci15ad,1976@10/sd@2,0
       3. c1t3d0 /pci@0,0/pci15ad,1976@10/sd@3,0
       4. c1t4d0 /pci@0,0/pci15ad,1976@10/sd@4,0
Specify disk (enter its number):

I will not be using the OS disk (disk 0).
# zpool create geekpool c1t1d0
# zpool list
NAME       SIZE  ALLOC   FREE  CAP  HEALTH  ALTROOT
geekpool  1008M  78.5K  1008M   0%  ONLINE  -

To destroy the pool:
# zpool destroy geekpool
# zpool list
no pools available

b. Using disk slices
Now we will create a disk slice on disk c1t1d0 as c1t1d0s0, of size 512 MB.
# zpool create geekpool c1t1d0s0
# zpool list
NAME      SIZE  ALLOC  FREE  CAP  HEALTH  ALTROOT
geekpool  504M  78.5K  504M   0%  ONLINE  -

c. Using files
We can also create a zpool with files. Make sure you give an absolute path while creating the zpool.
# mkfile 100m file1
# zpool create geekpool /file1
# zpool list
NAME       SIZE  ALLOC   FREE  CAP  HEALTH  ALTROOT
geekpool  95.5M   115K  95.4M   0%  ONLINE  -

Creating pools with different RAID levels

1. Dynamic stripe
A very basic pool, created from a single disk or a concatenation of disks. We have already seen zpool creation using a single disk; let's see how to create a concatenated zfs pool.
# zpool create geekpool c1t1d0 c1t2d0
# zpool list
NAME       SIZE  ALLOC   FREE  CAP  HEALTH  ALTROOT
geekpool  1.97G    80K  1.97G   0%  ONLINE  -
# zpool status
  pool: geekpool
 state: ONLINE
  scan: none requested
config:
        NAME        STATE   READ WRITE CKSUM
        geekpool    ONLINE     0     0     0
          c1t1d0    ONLINE     0     0     0
          c1t2d0    ONLINE     0     0     0
errors: No known data errors
This configuration provides no redundancy, so any disk failure results in data loss. Also note that a disk added to a pool in this fashion cannot be removed from the pool again; the only way to free the disk is to destroy the entire pool. This is due to the dynamic striping nature of the pool, which uses both disks to store data.

2. Mirrored pool
a. 2-way mirror
A mirrored pool provides redundancy by storing multiple copies of the data on different disks. Here you can also detach a disk from the pool, as the data remains available on the other disks.
# zpool create geekpool mirror c1t1d0 c1t2d0
# zpool list
NAME       SIZE  ALLOC   FREE  CAP  HEALTH  ALTROOT
geekpool  1008M  78.5K  1008M   0%  ONLINE  -
# zpool status
  pool: geekpool
 state: ONLINE
  scan: none requested
config:
        NAME        STATE   READ WRITE CKSUM
        geekpool    ONLINE     0     0     0
          mirror-0  ONLINE     0     0     0
            c1t1d0  ONLINE     0     0     0
            c1t2d0  ONLINE     0     0     0
errors: No known data errors

b. 3-way mirror
# zpool destroy geekpool
# zpool create geekpool mirror c1t1d0 c1t2d0 c1t3d0
# zpool list
NAME       SIZE  ALLOC   FREE  CAP  HEALTH  ALTROOT
geekpool  1008M  78.5K  1008M   0%  ONLINE  -
# zpool status
  pool: geekpool
 state: ONLINE
  scan: none requested
config:
        NAME        STATE   READ WRITE CKSUM
        geekpool    ONLINE     0     0     0
          mirror-0  ONLINE     0     0     0
            c1t1d0  ONLINE     0     0     0
            c1t2d0  ONLINE     0     0     0
            c1t3d0  ONLINE     0     0     0
errors: No known data errors

3. RAID-Z pools
We can also have a pool similar to a RAID-5 configuration, called RAID-Z. RAID-Z comes in 3 types: raidz1 (single parity), raidz2 (double parity) and raidz3 (triple parity). Minimum disks required for each type:
1. raidz1 – 2 disks
2. raidz2 – 3 disks
3. raidz3 – 4 disks

a. raidz1
# zpool create geekpool raidz c1t1d0 c1t2d0 c1t3d0
# zpool list
NAME       SIZE  ALLOC   FREE  CAP  HEALTH  ALTROOT
geekpool  2.95G   166K  2.95G   0%  ONLINE  -
# zpool status
  pool: geekpool
 state: ONLINE
  scan: none requested
config:
        NAME        STATE   READ WRITE CKSUM
        geekpool    ONLINE     0     0     0
          raidz1-0  ONLINE     0     0     0
            c1t1d0  ONLINE     0     0     0
            c1t2d0  ONLINE     0     0     0
            c1t3d0  ONLINE     0     0     0
errors: No known data errors

b. raidz2
# zpool create geekpool raidz2 c1t1d0 c1t2d0 c1t3d0
# zpool list
NAME       SIZE  ALLOC   FREE  CAP  HEALTH  ALTROOT
geekpool  2.95G   186K  2.95G   0%  ONLINE  -
# zpool status
  pool: geekpool
 state: ONLINE
  scan: none requested
config:
        NAME        STATE   READ WRITE CKSUM
        geekpool    ONLINE     0     0     0
          raidz2-0  ONLINE     0     0     0
            c1t1d0  ONLINE     0     0     0
            c1t2d0  ONLINE     0     0     0
            c1t3d0  ONLINE     0     0     0
errors: No known data errors

c. raidz3
# zpool create geekpool raidz3 c1t1d0 c1t2d0 c1t3d0 c1t4d0
# zfs list
NAME      USED  AVAIL  REFER  MOUNTPOINT
geekpool   61K   976M    31K  /geekpool
# zpool status
  pool: geekpool
 state: ONLINE
  scan: none requested
config:
        NAME        STATE   READ WRITE CKSUM
        geekpool    ONLINE     0     0     0
          raidz3-0  ONLINE     0     0     0
            c1t1d0  ONLINE     0     0     0
            c1t2d0  ONLINE     0     0     0
            c1t3d0  ONLINE     0     0     0
            c1t4d0  ONLINE     0     0     0
errors: No known data errors

Adding a spare device to a zpool
By adding a spare device to a zfs pool, a failed disk is automatically replaced by the spare, and the administrator can replace the failed disk at a later point in time. We can also share a spare device among multiple zfs pools.
# zpool add geekpool spare c1t3d0
# zpool status
  pool: geekpool
 state: ONLINE
  scan: none requested
config:
        NAME        STATE   READ WRITE CKSUM
        geekpool    ONLINE     0     0     0
          mirror-0  ONLINE     0     0     0
            c1t1d0  ONLINE     0     0     0
            c1t2d0  ONLINE     0     0     0
        spares
          c1t3d0    AVAIL
errors: No known data errors
Make sure you turn on the autoreplace property on the pool:
# zpool set autoreplace=on geekpool

Dry run on zpool creation
You can do a dry run and test the result of a pool creation before actually creating it:
# zpool create -n geekpool raidz2 c1t1d0 c1t2d0 c1t3d0
would create 'geekpool' with the following layout:
        geekpool
          raidz2
            c1t1d0
            c1t2d0
            c1t3d0

Importing and exporting pools
You may need to migrate zfs pools between systems. ZFS makes this possible by exporting a pool from one system and importing it on another.

a. Exporting a ZFS pool
To import a pool you must first explicitly export it from the source system. Exporting a pool writes all unwritten data to the pool and removes all information about the pool from the source system.
# zpool export geekpool
# zpool list
no pools available
If some file systems are still mounted, you can force the export:
# zpool export -f geekpool

b. Importing a ZFS pool
Now we can import the exported pool. To see which pools can be imported, run the import command without any options:
# zpool import
  pool: geekpool
    id: 940735588853575716
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:
        geekpool    ONLINE
          raidz3-0  ONLINE
            c1t1d0  ONLINE
            c1t2d0  ONLINE
            c1t3d0  ONLINE
            c1t4d0  ONLINE
As you can see, each pool has a unique ID, which comes in handy when you have multiple pools with the same name. In that case the pool can be imported using its ID:
# zpool import 940735588853575716

Importing pools backed by files
By default the import command searches /dev/dsk for pool devices, so to see pools that are importable with files as their devices:
# zpool import -d /
  pool: geekfilepool
    id: 8874031618221759977
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:
        geekfilepool  ONLINE
          //file1     ONLINE
          //file2     ONLINE
Now we can import the pool we want:
# zpool import geekpool
# zpool import -d / geekfilepool
Similar to export, we can force a pool import:
# zpool import -f geekpool

Creating a ZFS file system
The best part about zfs is that Oracle (or should I say Sun) has kept the commands pretty easy to understand and remember. To create a file system fs1 in the existing zfs pool geekpool:
# zfs create geekpool/fs1
# zfs list
NAME          USED  AVAIL  REFER  MOUNTPOINT
geekpool      131K   976M    31K  /geekpool
geekpool/fs1   31K   976M    31K  /geekpool/fs1

By default a file system created in a pool can take up all the space in the pool, so to limit its usage we define a reservation and a quota. An example: suppose we assign quota = 500 MB and reservation = 200 MB to file system fs1, and also create a file system fs2 with no quota or reservation. Then 200 MB of the 1 GB pool is reserved for fs1 and no other file system can take it; fs1 may grow up to 500 MB (its quota), but only if that space is free. fs2 can therefore take up to 800 MB (1000 MB – 200 MB) of pool space. So if you don't want the space of a file system to be taken by other file systems, define a reservation for it. Note that a reservation can't be greater than the quota if one is already defined. Also, in zfs list the available space shown for a file system equals its quota (if the space is not occupied by other file systems), not the reservation as you might expect.

To set the reservation and quota on fs1 as stated above:
# zfs set quota=500m geekpool/fs1
# zfs set reservation=200m geekpool/fs1
# zfs list
NAME          USED  AVAIL  REFER  MOUNTPOINT
geekpool      200M   776M    32K  /geekpool
geekpool/fs1   31K   500M    31K  /geekpool/fs1

Setting the mount point for a file system
By default a mount point (/poolname/fs_name) is created for the file system if you don't specify one; in our case it was /geekpool/fs1. You also do not need an entry for the mount point in /etc/vfstab, as it is stored internally in the metadata of the zfs pool and mounted automatically when the system boots. To change the mount point:
# zfs set mountpoint=/test geekpool/fs1
# df -h | grep /test
geekpool/fs1  500M  31K  500M  1%  /test

Other important attributes
You may also change other important attributes such as compression, sharenfs, etc. Attributes can also be specified while creating the file system itself:
# zfs create -o mountpoint=/test geekpool/fs1
From <https://www.thegeekdiary.com/zfs-tutorials-creating-zfs-pools-and-file-systems/>
-- Mount the partition on the server (mkdir + mount is only needed for legacy mounts with a vfstab entry; native ZFS datasets mount automatically, see the sketch below):
# mkdir /u02
# mount /u02
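For a native ZFS mountpoint instead of a legacy vfstab mount, a minimal sketch (the pool name u02pool is hypothetical):
# zfs set mountpoint=/u02 u02pool     # dataset is mounted at /u02 automatically from now on
# zfs mount u02pool                   # only needed if it is not already mounted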
-- Check the new mount: df -h /u02, zfs list
-- How to add swap space in an Oracle Solaris ZFS root environment:
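A minimal sketch of that procedure, assuming the root pool is named rpool; the 2G size is an example and the volume name swap2 is hypothetical:
# zfs create -V 2G rpool/swap2        # create a 2 GB zvol to back the new swap area
# swap -a /dev/zvol/dsk/rpool/swap2   # activate it as swap
# swap -l                             # verify the new swap device is listed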
4. Tuning ZFS (requires a server reboot)
root@app-2 # tail -f /etc/system
* To set a variable named 'debug' in the module named 'test_module'
*
* set test_module:debug = 0x13
set md:mirrored_root_flag = 1
* Begin MDD root info (do not edit)
rootdev:/pseudo/md@0:0,10,blk
* End MDD root info (do not edit)
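* Cap the ZFS ARC at 8 GB (8589934592 bytes = 8 * 1024^3) so the cache
* does not compete with the application for RAM; the value is site-specific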
set zfs:zfs_arc_max=8589934592
-- Reboot the system (init 6)
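After the reboot, the new ceiling can be verified; c_max should report the value set in /etc/system (8589934592 here):
# kstat -p zfs:0:arcstats:c_max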
5. Adding a disk
zpool attach gprs c6t60060E8004A53B000000A53B00000160d0 c6t6005076307FFD2BD0000000000000120d0
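zpool attach mirrors the new LUN onto the existing device and triggers a resilver; progress can be watched with (pool name gprs from the command above):
# zpool status gprs                   # shows resilver progress until the copy completes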
6. Temporarily offline a disk from rpool
zpool offline rpool c0t1d0s0
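The counterpart when maintenance is done (standard command; resilvering starts automatically if data changed while the disk was offline):
# zpool online rpool c0t1d0s0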