Sunday, December 16, 2012

Allow only the ping command and deny all other commands

[root]# which bash
/bin/bash
Create symbolic link
ln -s /bin/bash /opt/hipbash
useradd hip -s /opt/hipbash
passwd hip

mkdir /home/hip/bin


Example: allow only the ping command for user hip and deny all other commands

ln -s /bin/ping /home/hip/bin/ping

chown root. /home/hip/.bash_profile
chmod 755 /home/hip/.bash_profile

vi /home/hip/.bash_profile
search and replace PATH
PATH=$HOME/bin
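
For reference, the resulting /home/hip/.bash_profile might look like the minimal sketch below (an illustration only; the PATH line is the part that matters, and because the file is owned by root with mode 755 the user cannot change it):

# /home/hip/.bash_profile
# Restrict command lookup to the user's private bin directory,
# which contains only the ping symlink.
PATH=$HOME/bin
export PATH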

Tuesday, October 23, 2012

PHP 5.3 on CentOS/RHEL 5.8 via Yum

I have compiled the latest PHP version, 5.3.18, and put it in the Webtatic repository for easy installation. It is compiled for CentOS 5 i386 and x86_64, and the source RPMS are provided in the repo, if anyone wants to compile it for another OS or architecture.
Update 2012-03-04 – Webtatic has now released PHP 5.4.0 for CentOS/RHEL 6
Update 2010-03-03 – I’ve added both apc 3.1.3p1 beta (php-pecl-apc in yum) and eAccelerator 0.9.6 (php-eaccelerator in yum) RPMs to the repository, they are compiled for (and work on) php 5.3.x
Update 2009-09-01 – Added a note about deprecated errors, and how to silence them. Also I have included a tip that might help those of you struggling to install.
Update 2009-07-03 – I updated the version to PHP 5.3, which was released a few days before. This includes many new features such as closures, namespaces, and packaged scripts in phar files, which I’ll blog about soon. Check out PHP changelog for more details.

I have also included the same PHP extensions I mentioned in my other article: php-mcrypt, php-mhash (PHP 5.2 only), php-mssql and php-tidy.
To install, first you must install the yum repository information:

rpm -Uvh http://repo.webtatic.com/yum/centos/5/latest.rpm

Now you can install php by doing:
 
yum --enablerepo=webtatic install php

Or update an existing installation of php, which will also update all of the other php modules installed:
 
yum --enablerepo=webtatic update php

Packages

Package: Provides
php: mod_php
php-bcmath
php-cli: php-cgi, php-pcntl, php-readline
php-common: php-api, php-bz2, php-calendar, php-ctype, php-curl, php-date, php-exif, php-fileinfo, php-ftp, php-gettext, php-gmp, php-hash, php-iconv, php-json, php-libxml, php-openssl, php-pcre, php-pecl-Fileinfo, php-pecl-phar, php-pecl-zip, php-reflection, php-session, php-shmop, php-simplexml, php-sockets, php-spl, php-tokenizer, php-zend-abi, php-zip, php-zlib
php-dba
php-devel
php-eaccelerator
php-embedded: php-embedded-devel
php-fpm
php-gd
php-imap
php-intl
php-ldap
php-mbstring
php-mcrypt
php-pecl-apc
php-pecl-memcache
php-pecl-xdebug
php-mssql: php-pdo_dblib
php-mysql: php-mysqli, php-pdo_mysql, php_database
php-odbc: php-pdo_odbc, php_database
php-pdo
php-pgsql: php-pdo_pgsql, php_database
php-process: php-posix, php-sysvmsg, php-sysvsem, php-sysvshm
php-pspell
php-recode
php-snmp
php-soap
php-suhosin
php-tidy
php-xml: php-dom, php-domxml, php-wddx, php-xsl
php-xmlrpc
php-zts

“Depsolving” problems

If you get depsolving problems when updating, you may currently have installed some extensions that have since been removed, e.g. php-mhash, php-ncurses.
You will need to remove them before upgrading.

yum remove php-mhash php-ncurses

Timezone Errors

If you have not set a default timezone for dates, you will get PHP warnings and in some cases fatal errors (e.g. when using the DateTime object). When it is not a fatal error, PHP will fall back to the system's timezone, but you should still set the date.timezone setting, either in your application or in php.ini. Ideally it is set in the application, which should be aware of its own timezone.
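
For example, you could add a line like the following to /etc/php.ini (the timezone value here is just a placeholder; use whichever zone applies to your server):

date.timezone = "Asia/Ho_Chi_Minh"

An application can also set its timezone at runtime with date_default_timezone_set().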

Deprecated Errors

Once you are running the new version, you may get "deprecated" errors in your error logs. This isn't a problem; it just tells you that some of the functions you are using are no longer preferred and may be removed in a future major release. An example is the ereg functions: the preg functions are preferred over these, as they are much faster and more powerful, and in all cases can do at least the same thing.
If updating these function calls is not an option and you would like to hide the deprecated errors from your error log, for example on a production server, just edit your /etc/php.ini file and find the line:
 
error_reporting  =  E_ALL

and replace to:
 
error_reporting  =  E_ALL & ~E_DEPRECATED

PHP 5.2.17

I have previously been maintaining PHP 5.2.* releases, but since that branch is now end-of-life, it no longer receives fixes for known critical security issues. I wouldn't recommend using these packages anymore because of this; however, they are still in the repository for existing users who rely on them.

Thursday, October 4, 2012

Remote shutdown and reboot of a Windows computer

1. First, log in and connect to the remote computer:
\\17.18.120.230
2. Open a command prompt and run:

shutdown /m \\17.18.120.230 /r /f
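
The /r flag reboots the machine. A possible variant, not in the original note, that powers the machine off instead, forces applications to close, waits 60 seconds, and shows a message to logged-in users, would be:

shutdown /m \\17.18.120.230 /s /f /t 60 /c "Shutting down for maintenance"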



Log off a Windows session remotely

C:\Users\Desktop\PSTools>qwinsta /server:10.151.120.226
 SESSIONNAME       USERNAME                 ID  STATE   TYPE        DEVICE
 console                                     0  Conn    wdcon
 rdp-tcp                                 65536  Listen  rdpwd
 rdp-tcp#3         tt4_trafic                1  Active  rdpwd
 rdp-tcp#4         Administrator             2  Active  rdpwd


C:\Users\Desktop\PSTools>logoff 2 /server:10.151.120.226

C:\Users\Desktop\PSTools>qwinsta /server:10.151.120.226
 SESSIONNAME       USERNAME                 ID  STATE   TYPE        DEVICE
 console                                     0  Conn    wdcon
 rdp-tcp                                 65536  Listen  rdpwd
 rdp-tcp#3         tt4_trafic                1  Active  rdpwd

======================
How to use PsExec to run commands on a remote machine

C:\Users\Desktop\PSTools>PsExec.exe \\10.151.120.226 -u domain\username -p password1 cmd


PsExec v2.11 - Execute processes remotely
Copyright (C) 2001-2014 Mark Russinovich
Sysinternals - www.sysinternals.com


Microsoft Windows [Version 5.2.3790]
(C) Copyright 1985-2003 Microsoft Corp.

C:\WINDOWS\system32>exit
cmd exited on 10.151.120.226 with error code 0.

C:\Users\Desktop\PSTools>PsExec.exe \\10.151.120.226 -u domain\username -p password1 ipconfig /all


Ethernet adapter Network Bridge:

   Connection-specific DNS Suffix  . :
   Description . . . . . . . . . . . : MAC Bridge Miniport
   Physical Address. . . . . . . . . : 02-18-8B-E5-DA-5A
   DHCP Enabled. . . . . . . . . . . : No
   IP Address. . . . . . . . . . . . : 10.151.120.226
   Subnet Mask . . . . . . . . . . . : 255.255.255.192
   Default Gateway . . . . . . . . . : 10.151.120.193
   DNS Servers . . . . . . . . . . . : 10.151.120.206
                                       10.151.120.207
ipconfig exited on 10.151.120.226 with error code 0.


Adding swap on Solaris 10 using ZFS

When pca wanted to install 144500-19, patchadd aborted with:
Running patchadd
Validating patches...
Loading patches installed on the system...
Done!
Loading patches requested to install.
[...]
Checking patches that you specified for installation.

Done!
Unable to install patch. Not enough space in /var/run to copy overlay objects.
 401MB needed, 220MB available.

Failed (exit code 1)
Well, this Sun Enterprise 250 only has 768 MB of memory, which is not much these days. Let's add some virtual memory then: 
 
# mkfile 1g /var/tmp/swap.tmp
# swap -a /var/tmp/swap.tmp
/var/tmp/swap.tmp: Invalid operation for this filesystem type
Oh, right - we're on ZFS already. Let's try again:
# rm /var/tmp/swap.tmp
# zfs create -V 1gb rpool/tmpswap
# swap -a /dev/zvol/dsk/rpool/tmpswap 
# df -h /var/run 
Filesystem             size   used  avail capacity  Mounted on
swap                   1.4G   107M   1.3G     8%    /var/run
Now we should be good to go :-)

Oh, and regarding those "overlay objects in /var/run" mentioned above: once patchadd(1M) is running, take a look:
# df -h | grep -c /var/run
991
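
Once patching is finished, the temporary swap device can be removed again. A quick sketch, assuming the same volume name as above:

# swap -d /dev/zvol/dsk/rpool/tmpswap
# zfs destroy rpool/tmpswap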


Sunday, September 16, 2012

Auto-click script for Linux

#!/bin/bash

# optional ##########################################
#MOZWIN=$(xdotool search --title "Mozilla Firefox")
#MOZDESKTOP=$(xdotool get_desktop_for_window $MOZWIN)
#xdotool set_desktop $MOZDESKTOP
#xdotool windowactivate $MOZWIN
#####################################################

FARMROWS=12
CLICK_INTERVAL=0.1

# start position, starting in the most left square
X=350
Y=510

# square jump distance
XDIS=25
YDIS=12

for ((i = 0; i < FARMROWS; i++)); do

    for ((j = 0; j < FARMROWS; j++)); do

        # position of the current square, offset from the row start
        x=$((X + j * XDIS))
        y=$((Y - j * YDIS))

        xdotool mousemove $x $y && xdotool click 1
        echo "$x $y  i=$i  j=$j"

        sleep $CLICK_INTERVAL
    done

    # move the row start to the next row
    X=$((X + XDIS))
    Y=$((Y + YDIS))

done
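
To try it out, save the script (the file name autoclick.sh below is just an example), make it executable, and run it while the target window is in the foreground. xdotool must be installed first; the package name shown assumes a Debian/Ubuntu-style system:

sudo apt-get install xdotool
chmod +x autoclick.sh
./autoclick.sh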

Thursday, August 16, 2012

Install Nginx, PHP, MySQL and phpMyAdmin on Mac OS

Mac OS comes with PHP installed already, but instead of trying to get that version to work with everything, I just used MacPorts to install a clean copy of PHP that I can configure however I like, completely separate from the Mac's default setup.

Install all packages


sudo port install php5 +fastcgi fcgi
 

Nginx Install

Enough foreplay, let's get to it; install nginx first.
  1. sudo port install nginx
Copy the default configuration to use as your own working configuration
  1. sudo cp /opt/local/etc/nginx/nginx.conf.default /opt/local/etc/nginx/nginx.conf

If you want to launch nginx on system startup, simply load the plist installed by the port:
  1. sudo launchctl load -w /Library/LaunchDaemons/org.macports.nginx.plist

TIP: you can start and stop nginx on demand from the command line, but don't do it yet! We have to configure it further first.

Start Nginx on demand:

  1. sudo nginx

Stop Nginx on demand:
  1. sudo nginx -s stop

Installing MySQL via MacPorts


I had a change of heart and decided to install mysql5 via MacPorts to keep it all under one package manager. If you've already installed MySQL using the package provided by dev.mysql.com then just skip ahead to Installing PHP with PHP-CGI. Otherwise keep reading.

Install MySQL via macports
  1. sudo port install mysql5-server
Once complete, you'll need to start the mysql daemon and set a root password:

  1. #start mysql daemon
  2. sudo mysqld_safe5 &
  1. sudo mysqladmin5 -u root password NEWPASSWORD

You will need to start the mysql daemon whenever you wish to use it. I'd suggest making a shell command or an alias in your ~/.profile file.

  1. #~/.profile
  2. alias mysql='/opt/local/bin/mysql5'
  3. alias start_mysql='sudo mysqld_safe5 &'
  4. alias stop_mysql='mysqladmin5 --user=root --password=NEWPASSWORD shutdown'

Now you're all set with MySQL from MacPorts.

Installing PHP with PHP-CGI (FastCGI)

Like I said before, Lion comes with its own PHP install, but I much prefer to work with the package manager whenever possible, so I'm installing a fresh copy of PHP in /opt/local. Let's install php5 with fastcgi (which we need for nginx to talk to PHP) along with some of the nicer libraries we want for phpMyAdmin and MySQL.

  1. sudo port install php5 +fastcgi fcgi php5-gd php5-mysql php5-mcrypt

Starting php-cgi:
  1. php-cgi -q -b 127.0.0.1:9000 &

Stopping PHP-CGI:
  1. sudo killall php-cgi

You'll need to start php-cgi whenever you want nginx to talk to PHP via the CGI daemon. I ended up writing a little bash script to start this for me with a simple command and saved it to ~/bin/start_php_cgi.sh
  1. #!/bin/bash
  2. php-cgi -q -b 127.0.0.1:9000 &

Configure Nginx

Open up /opt/local/etc/nginx/nginx.conf in your favorite editor and configure it:


#user  nobody;
worker_processes  1;

error_log   /opt/local/etc/nginx/logs/error.log;
#error_log  logs/error.log  notice;
#error_log  logs/error.log  info;

pid        /opt/local/etc/nginx/logs/nginx.pid;


events {
    worker_connections  1024;
}


http {
    include       mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /opt/local/etc/nginx/logs/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    #keepalive_timeout  0;
    keepalive_timeout  65;

    #gzip  on;

    server {
        listen       80;
        server_name  localhost;

        #charset koi8-r;

        access_log  /opt/local/etc/nginx/logs/host.access.log  main;

        location / {
            root   share/nginx/html;
            index  index.html index.htm;
        }

        #error_page  404              /404.html;

        # redirect server error pages to the static page /50x.html
        #
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   share/nginx/html;
        }

        # proxy the PHP scripts to Apache listening on 127.0.0.1:80
        #
        #location ~ \.php$ {
        #    proxy_pass   http://127.0.0.1;
        #}

        # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
        #
        location ~ \.php$ {
            root           /opt/local/share/nginx/html;
            fastcgi_pass   127.0.0.1:9000;
            fastcgi_index  index.php;
            fastcgi_param  SCRIPT_FILENAME  /opt/local/share/nginx/html$fastcgi_script_name;
            include        fastcgi_params;
        }

        # deny access to .htaccess files, if Apache's document root
        # concurs with nginx's one
        #
        #location ~ /\.ht {
        #    deny  all;
        #}
    }


    # another virtual host using mix of IP-, name-, and port-based configuration
    #
    #server {
    #    listen       8000;
    #    listen       somename:8080;
    #    server_name  somename  alias  another.alias;

    #    location / {
    #        root   share/nginx/html;
    #        index  index.html index.htm;
    #    }
    #}


    # HTTPS server
    #
    #server {
    #    listen       443;
    #    server_name  localhost;

    #    ssl                  on;
    #    ssl_certificate      cert.pem;
    #    ssl_certificate_key  cert.key;

    #    ssl_session_timeout  5m;

    #    ssl_protocols  SSLv2 SSLv3 TLSv1;
    #    ssl_ciphers  HIGH:!aNULL:!MD5;
    #    ssl_prefer_server_ciphers   on;

    #    location / {
    #        root   share/nginx/html;
    #        index  index.html index.htm;
    #    }
    #}

}

This is a basic nginx setup that runs nicely with PHP-CGI. You'll have to add a little more to get it to work with a CakePHP install, but we'll get to that soon enough. I like to root things in /var/www, so make sure this directory exists:


  1. sudo mkdir /var/www
Let's write a test file to see what we've got so far!
  1. echo "<?php phpinfo(); ?>" | sudo tee /var/www/index.php


Start nginx, php-cgi and then navigate to localhost/index.php to see your hard work in all its glory.
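
If you'd rather check from the command line, a quick sanity test could look like this (assuming nothing else is already listening on ports 80 and 9000):

  1. sudo nginx
  2. php-cgi -q -b 127.0.0.1:9000 &
  3. curl http://localhost/index.php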

Installing phpMyAdmin



Grab the latest version of phpMyAdmin from http://www.phpmyadmin.net/home_page/downloads.php (version 3.4.7 at the time of writing) and unpack it wherever you'd like (I put mine in /opt/local/phpMyAdmin).

Since I like to keep my document root in /var/www as my default for nginx, I then created a symlink pointing from /var/www/phpMyAdmin to where I installed phpMyAdmin:

  1. sudo ln -s /opt/local/phpMyAdmin /var/www/phpMyAdmin


Now navigate to http://localhost/phpMyAdmin and you should be greeted with a nice phpMyAdmin screen.


Stop and start nginx:
  1. sudo nginx -s stop
  2. sudo nginx


Add your new server_name to your /private/etc/hosts file
  1. 127.0.0.1 localhost dev.testapp.devlocal
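
The post doesn't show the matching nginx server block for this host name, so the sketch below is only an illustration: the root path assumes a CakePHP 2.0 app unpacked under /var/www/testapp, and the FastCGI settings mirror the ones used earlier.

    server {
        listen       80;
        server_name  dev.testapp.devlocal;
        root         /var/www/testapp/app/webroot;   # assumed CakePHP 2.0 layout
        index        index.php;

        location / {
            try_files $uri $uri/ /index.php?$args;
        }

        location ~ \.php$ {
            fastcgi_pass   127.0.0.1:9000;
            fastcgi_index  index.php;
            fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
            include        fastcgi_params;
        }
    }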


Now navigate to http://dev.testapp.devlocal on your machine and you should see your freshly baked CakePHP 2.0 app! Congratulations, happy baking!

I hope this tutorial helped, comments are appreciated.

Wednesday, August 15, 2012

Creating an SSL certificate to use HTTPS with nginx

# Generate private key 
openssl genrsa -out ca.key 1024 

# Generate CSR 
openssl req -new -key ca.key -out ca.csr

# Generate Self Signed Key
openssl x509 -req -days 365 -in ca.csr -signkey ca.key -out ca.pem

# Copy the files to the correct locations
cp ca.pem /etc/pki/tls/certs
cp ca.key /etc/pki/tls/private/ca.key
cp ca.csr /etc/pki/tls/private/ca.csr
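
# Optional sanity check before wiring the certificate into nginx:
# inspect the subject and validity dates of what was generated
openssl x509 -in /etc/pki/tls/certs/ca.pem -noout -subject -dates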
 
 
Configure nginx to use SSL/HTTPS
 
  server {
        listen       443;
        server_name  111.44.9.33;


        ssl                  on;
        ssl_certificate      /usr/share/nginx/html/cydia/ca.pem;
        ssl_certificate_key  /usr/share/nginx/html/cydia/ca.key;

        ssl_session_timeout  5m;

        ssl_protocols  SSLv2 SSLv3 TLSv1;
        ssl_ciphers  HIGH:!aNULL:!MD5;
        ssl_prefer_server_ciphers   on;

        location / {
            root   /usr/share/nginx/html;
            index  index.php index.html index.htm;
        }

        location ~ \.php$ {
            root           /usr/share/nginx/html;
            fastcgi_pass   127.0.0.1:9000;
            fastcgi_index  index.php;
            fastcgi_param  SCRIPT_FILENAME  /usr/share/nginx/html$fastcgi_script_name;
            include        fastcgi_params;
        }

    }
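
After saving the configuration, reload nginx and test the HTTPS endpoint; the -k flag tells curl to skip certificate verification, since this is a self-signed certificate (the IP matches the server_name above):

nginx -s reload
curl -k https://111.44.9.33/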
 

Sunday, August 12, 2012

9 bad habits to drop if you want a career in IT

1. Not reading the documentation before using something:
This is one of the worst and yet most common habits. It probably stems from the friendliness of graphical user interfaces (GUIs), which lets users get by through trial and error without ever reading the instructions. That is no problem for (very) ordinary users. However, if you intend to pursue IT seriously, drop this harmful habit right away, because it is the biggest barrier to your growth. Solid knowledge does not come from fumbling around, and documentation is not written for no reason.

2. Skimming:
This is an equally bad and equally widespread habit. Even on forums, where advice and instructions are written in Vietnamese that is concise, clear and easy to understand, far too many people only skim them and then come back with the same questions. This habit is extremely dangerous because it trains the brain to read superficially, which leads to knowledge that is shallow, makeshift and patchy. If you cannot be bothered to read carefully and reflect on concise, clear advice written in Vietnamese, then studying and synthesizing foreign-language books is all but impossible.

3. Imitating without thinking:
When you first get acquainted with things in IT, the easiest way to learn is to imitate, step by step. But if you blindly follow along without ever asking yourself why you are doing it, or what happens behind those "steps", then sooner or later you build a harmful habit: imitating without thinking, without reasoning, like a machine. Once following ready-made steps without thought becomes a habit, your ability to judge and to reason withers away. Worse, this habit blocks the absorption of knowledge that comes through asking questions. Asking yourself questions is precisely how you force your brain to work, and it is the first stepping stone toward developing your intellect.

4. Fear of difficulty:
Fear of difficulty seems ordinary in any field, but in IT it is the habit that kills you at the very first steps of learning and growing. No real profession that demands intellect is ever easy. Fear of difficulty shows up in things as simple as learning a foreign language (so you can read more documentation) all the way to facing, on your own, the hardships of building up knowledge and experience. Over time this habit sinks in deep and leads to not wanting, and not being able, to solve anything the moment an obstacle appears. Stay far away from the attitude: every beginning is hard, and as soon as it gets hard, give up.

5. Making excuses:
Accumulating knowledge always involves difficulties and obstacles. If you do not discipline yourself and hold yourself to a strict standard, no one else in this world will do it for you. From a lack of discipline and rigor, it takes only a very short time to reach breakdown, fear and discouragement, and to justify the breakdown people usually reach for excuses. An excuse only gives you something to hide behind; the failure is still there. Stay away from lines like "my family is poor", "my circumstances are difficult", "because I'm a newbie"; remember that countless other people are in the same situation, and many are even worse off. Keep in mind that the moment you reach for an excuse, you have already officially failed.

6. "Taking shortcuts to get ahead":
No genuine knowledge in this world is formed by "cutting corners" and "jumping ahead". Instant noodles have their own appeal, but instant noodles cannot make a full, proper meal. Real knowledge is like food: it needs moderation, the right dose, and time to… digest. A shortcut mindset and shortcut habits always leave terrifying holes in your knowledge. Those holes may not seem numerous or important while your knowledge is still modest and the demands of your work are still basic. But once you face difficult and complex problems at work and in life, those "shortcuts" are the root cause of breakdowns and failures. Remember: do not cut corners and do not try to jump ahead, because there is no shortcut on the road to knowledge.

7. "I heard that…"
The phrase "I heard that…" is dizzyingly common. No science, nor anything related to science, can be based on hearsay; it must always rest on scientific evidence, and that evidence must be precise and concrete. The "I heard" habit makes you throw away opportunities to investigate and verify, precious opportunities to build knowledge and experience. Whatever is unclear should be investigated: do not settle for hearsay; it must be seen, analyzed and verified. If you cannot drop this habit, the best thing to do is stay away from any scientific field altogether, because you will only bring yourself failure and waste.

8. Faith and hope:
In science, when it comes to results and to building things, or even the road that leads to those results, there is absolutely no room for vague "faith" and "hope". Restarting the machine or restarting the program in the "hope" that it will fix the problem has become an ingrained habit. If nothing else changes, you can restart a million times and hope a million times and the result will be exactly the same. Do not "believe" and do not "hope" for the result to change unless you yourself control and change something that would change the result. In every activity, from programming to system administration, network administration, security, reverse engineering… and even for ordinary users, when the result is not what you want, adjustment is what is needed, not repeating exactly the same action and just… hoping.

9. Not for knowledge but for… "status":
Many people throw themselves into this field not for the intellect, for the knowledge, or to contribute something useful to society, but for some vague notion of… "status". If you keep rushing ahead chasing such a vague goal, you will never reach the destination. "Status" is vague, useless and entirely personal, but once it becomes a habit and the goal you aim for, it brings nothing but failure from the very start, because there is no direction at all. Cultivating knowledge is completely different from soothing an inferiority complex ("status").

Collected from an article by Hoàng Ngọc Diêu (HVAONLINE)

Saturday, August 11, 2012

Repairing Linux filesystems: ext2 and ext3

Disk errors:
EXT3-fs error (device sdb1): ext3_lookup: unlinked inode 77358041 in dir #77332481
EXT3-fs error (device sdb1): ext3_lookup: unlinked inode 77358040 in dir #77332481

1 - A filesystem cannot be unmounted while it is in use (umount usually reports that it is busy), so we need to bring the system down to runlevel 1 and make sure every command is run as root.

If the faulty disk is not the one holding the OS, you can unmount it directly and fix it without dropping to init 1.
# init 1

2 - Unmount the faulty disk. For example, if the /home filesystem is mounted from /dev/sda3 (partition sda3 of disk sda), run:
# umount /home
hay
# umount /dev/sda3

3 - Now run fsck on this partition:
# fsck /dev/sda3
However, if you know for sure the filesystem type of this partition, it is better to use the -t option:
# fsck -t ext3 /dev/sda3
OR
# fsck.ext3 /dev/sda3
Note: if you do not remember the filesystem type, you can run the mount command to find out; it lists all mounted partitions and their filesystem types.

The fsck command checks the filesystem, reports any problems it finds, and fixes them. It asks you to confirm with y (yes) before repairing each error. If there are too many errors and you do not want to type y for every one, use the -y option (fix everything by default):
# fsck -y /dev/sda3
Files that cannot be repaired or recovered are stored in the lost+found directory of that filesystem (here /home/lost+found).

4 - When fsck has finished, remount the filesystem:
# mount /home

5 - Return to multiuser mode:
# init 3
See the fsck man page for the other options. Note that /dev/sda3 is only an example; you need to identify the correct faulty partition and disk.
Good luck!

[root@ns ~]# fsck -t ext3 /dev/sdb1
fsck 1.39 (29-May-2006)
e2fsck 1.39 (29-May-2006)
/dev/sdb1: recovering journal
/dev/sdb1 contains a file system with errors, check forced.
Pass 1: Checking inodes, blocks, and sizes
Inode 77358037, i_size is 16711680, should be 16826368.  Fix? yes

Inode 77358037, i_blocks is 32680, should be 32904.  Fix? yes 
Pass 2: Checking directory structure
Entry '363969661+1.1.3+ESPGALUDA II+AP.ipa' in /store/datanew (77332481) has deleted/unused inode 77358040.  Clear? yes

Entry '387176580+1.1.3+Dodonpachi Resurrection+AP.ipa' in /store/datanew (77332481) has deleted/unused inode 77358041.  Clear? yes

Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
Block bitmap differences:  -(338821634--338821639) -(338829533--338829535) -338832079 -(338835793--338835799) -(338843626--338843631) -(338852609--338852615) +(338867427--338867431) +(338867922--338867927) +(338869180--338869195) +338869212
Fix? yes

Free blocks count wrong for group #10341 (17487, counted=17459).
Fix? yes

Free blocks count wrong (232484352, counted=232484324).
Fix? yes


/dev/sdb1: ***** FILE SYSTEM WAS MODIFIED *****
/dev/sdb1: 27364/244203520 files (43.3% non-contiguous), 255893676/488378000 blocks
[root@ns ~]#


Wednesday, August 1, 2012

Creating a disk group (pool) with zpool

bash-3.2# echo |format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
       0. c0t203400A0B84700CEd31
          /pci@1c,600000/SUNW,emlxs@1/fp@0,0/ssd@w203400a0b84700ce,1f
       1. c1t203500A0B84700CEd31
          /pci@1d,700000/SUNW,emlxs@1/fp@0,0/ssd@w203500a0b84700ce,1f
       2. c3t0d0
          /pci@1f,700000/scsi@2/sd@0,0
       3. c3t1d0
          /pci@1f,700000/scsi@2/sd@1,0
       4. c3t2d0
          /pci@1f,700000/scsi@2/sd@2,0
       5. c3t3d0
          /pci@1f,700000/scsi@2/sd@3,0
       6. c5t600A0B8000330D0A0000054748BF696Bd0
          /scsi_vhci/ssd@g600a0b8000330d0a0000054748bf696b
       7. c5t600A0B80004700CE0000057348BF6846d0
          /scsi_vhci/ssd@g600a0b80004700ce0000057348bf6846
       8. c5t600A0B80004700CE0000057748BF6AD0d0
          /scsi_vhci/ssd@g600a0b80004700ce0000057748bf6ad0
Specify disk (enter its number): Specify disk (enter its number):  


1. Create the disk group (pool)

bash-3.2# zpool create -f u02 c5t600A0B8000330D0A0000054748BF696Bd0 c5t600A0B80004700CE0000057348BF6846d0


bash-3.2# zpool get all u02;
NAME  PROPERTY       VALUE       SOURCE
u02   size           1.84T       -
u02   capacity       0%          -
u02   altroot        -           default
u02   health         ONLINE      -
u02   guid           1890458593104236169  default
u02   version        29          default
u02   bootfs         -           default
u02   delegation     on          default
u02   autoreplace    off         default
u02   cachefile      -           default
u02   failmode       wait        default
u02   listsnapshots  on          default
u02   autoexpand     off         default
u02   free           1.84T       -
u02   allocated      252K        -
u02   readonly       off         -

2. Export (unmount) the zpool
bash-3.2# zpool export u02

When these volumes are mapped to a new server and you want to use the two volumes again:
bash-3.2# zpool import
  pool: u02
    id: 1890458593104236169
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

        u02                                      ONLINE
          c4t600A0B8000330D0A0000054748BF696Bd0  ONLINE
          c4t600A0B80004700CE0000057348BF6846d0  ONLINE

3. Import the previous pool u02 under the new name testzfs

bash-3.2# zpool import u02 testzfs

Note: if only one of the two volumes is mapped, the pool cannot be imported:

root@TT4-SMS-S # zpool import
  pool: testzfs
    id: 1890458593104236169
 state: UNAVAIL
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing
        devices and try again.
   see: http://www.sun.com/msg/ZFS-8000-6X
config:

        testzfs                                  UNAVAIL  missing device
          c4t600A0B8000330D0A0000054748BF696Bd0  ONLINE

        Additional devices are known to be part of this pool, though their
        exact configuration cannot be determined.
root@TT4-SMS-S # zpool import testzfs
cannot import 'testzfs': one or more devices is currently unavailable

Tuesday, July 31, 2012

Configuring multipath for FC cards

-    Configuring multipath for the FC cards on a server makes access to data on the server safer in case one of the two connections from the server's FC cards is disconnected.
-    Configure multipath for the FC cards with the following command:

# stmsboot -e

WARNING: stmsboot operates on each supported multipath-capable controller detected in a host. In your system, these controllers are
/pci@7c0/pci@0/pci@1/pci@0,2/SUNW,emlxs@1/fp@0,0
/pci@7c0/pci@0/pci@1/pci@0,2/SUNW,emlxs@2/fp@0,0
/pci@780/pci@0/pci@9/scsi@0
If you do NOT wish to operate on these controllers, please quit stmsboot and re-invoke with -D { fp | mpt | mpt_sas} to specify which controllers you wish to modify your multipathing configuration for.
Do you wish to continue? [y/n] (default: y)
-    The server must be rebooted for the multipath feature to take effect.

-    Check the multipath configuration:

# luxadm probe
No Network Array enclosures found in /dev/es
Found Fibre Channel device(s):
  Node WWN:200400a0b84700ce  Device Type:Disk device
    Logical Path:/dev/rdsk/c3t201400A0B84700CEd31s2
  Node WWN:200400a0b84700ce  Device Type:Disk device
    Logical Path:/dev/rdsk/c4t600A0B8000330D0A00001A964ED581DDd0s2
#
#
# luxadm display 

/dev/rdsk/c4t600A0B8000330D0A00001A964ED581DDd0s2
DEVICE PROPERTIES for disk: /dev/rdsk/c4t600A0B8000330D0A00001A964ED581DDd0s2
  Vendor:               STK   
  Product ID:           FLEXLINE 380  
  Revision:             0660
  Serial Num:           SP74542576    
  Unformatted capacity: 512000.000 MBytes
  Write Cache:          Enabled
  Read Cache:           Enabled
    Minimum prefetch:   0x3
    Maximum prefetch:   0x3
  Device Type:          Disk device
  Path(s):

  /dev/rdsk/c4t600A0B8000330D0A00001A964ED581DDd0s2
  /devices/scsi_vhci/ssd@g600a0b8000330d0a00001a964ed581dd:c,raw
   Controller           /devices/pci@7c0/pci@0/pci@1/pci@0,2/SUNW,emlxs@2/fp@0,0
    Device Address              201400a0b84700ce,0
    Host controller port WWN    10000000c9c25d7d
    Class                       secondary
    State                       STANDBY
   Controller           /devices/pci@7c0/pci@0/pci@1/pci@0,2/SUNW,emlxs@1/fp@0,0
    Device Address              201500a0b84700ce,0
    Host controller port WWN    10000000c9c25de9
    Class                       primary
    State                       ONLINE

#
-    From the output above, you can easily see that the volume is controlled over two FC paths, with the primary path in the ONLINE state and the secondary path in STANDBY.
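
Another quick check, not shown in the original note, is to list the device-name mappings created once multipathing is enabled:

# stmsboot -L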

Running an SSH Server on Multiple Ports

It's pretty easy to do on your Linux box. These instructions are tested on OpenSuse 10.1 but they should work equally well on any Linux. On the machine that's running sshd, the ssh server, edit /etc/ssh/sshd_config. In it you'll see one directive on each line. Here's a snippet:
#AllowTcpForwarding yes
GatewayPorts yes
X11Forwarding yes
#X11DisplayOffset 10
#X11UseLocalhost yes
In these lines, the ones that start with a # don't do anything - they're comments for your reference. Often sshd_config has default values for many of the most common options included with a # in front of them. So you might have a line like
#Port 22
With the # it doesn't do anything. Since 22 is the default value for Port, sshd will behave the same if you have no Port directive at all or if you have this comment.
The lines that have no # in front of them are directives. They tell sshd what you want it to do for any given option. So a line like
Port 22
Tells sshd to listen for connections on Port 22. The ssh server accepts multiple Port directives and will listen on multiple ports if you want it to. If you want to have sshd listen on ports 22, 80 and 8122 you need lines like this
Port 22
Port 80
Port 32022
Note that Port 80 is normally used by web servers - it's what is called a Well Known Port number. Using Port 80 for ssh will let you use ssh to connect through most firewalls and proxies. If you decide to do this then make sure that you don't also have a web server trying to use port 80 for incoming connections. Port 32022 isn't reserved for anything (as far as I know) but a random hacker wouldn't connect to it as their first try for an ssh connection. Port numbers go up to 65535.
After you edit sshd_config and save it, you have to restart the ssh server in order for your changes to take effect. If you're making the changes while logged in over an ssh session (i.e. somewhere other than in front of the computer running sshd), be aware that you may lose your connection when you restart (you should also read to the end of this post before restarting). I restart sshd like this:
ruby:/etc/ssh # /etc/init.d/sshd restart
Shutting down SSH daemon                                              done
Starting SSH daemon                                                   done
Once you've made the change and restarted, test your new configuration either from the console or another machine on your LAN. Supposing you used port 32022 you could test it locally like this:
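
ssh -p 32022 localhost

(A minimal local test, assuming you log in as your own user on that machine; substitute user@host as needed.)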

Managing ZFS Storage Pool Properties

You can use the zpool get command to display pool property information. For example:


# zpool get all u03
NAME  PROPERTY       VALUE       SOURCE
u03  size           68G         -
u03  capacity       0%          -
u03  altroot        -           default
u03  health         ONLINE      -
u03  guid           601891032394735745  default
u03  version        22          default
u03  bootfs         -           default
u03  delegation     on          default
u03  autoreplace    off         default
u03  cachefile      -           default
u03  failmode       wait        default
u03  listsnapshots  on          default
u03  autoexpand     off         default
u03  free           68.0G       -
u03  allocated      76.5K       -
Storage pool properties can be set with the zpool set command. For example:

# zpool set autoreplace=on mpool
# zpool get autoreplace mpool
NAME  PROPERTY     VALUE    SOURCE
mpool autoreplace  on       default
Table 4–1 ZFS Pool Property Descriptions
Property Name (Type, Default Value): Description

allocated (String, N/A): Read-only value that identifies the amount of storage space within the pool that has been physically allocated.
altroot (String, off): Identifies an alternate root directory. If set, this directory is prepended to any mount points within the pool. This property can be used when you are examining an unknown pool, if the mount points cannot be trusted, or in an alternate boot environment, where the typical paths are not valid.
autoreplace (Boolean, off): Controls automatic device replacement. If set to off, device replacement must be initiated by using the zpool replace command. If set to on, any new device found in the same physical location as a device that previously belonged to the pool is automatically formatted and replaced. The property abbreviation is replace.
bootfs (Boolean, N/A): Identifies the default bootable dataset for the root pool. This property is typically set by the installation and upgrade programs.
cachefile (String, N/A): Controls where pool configuration information is cached. All pools in the cache are automatically imported when the system boots. However, installation and clustering environments might require this information to be cached in a different location so that pools are not automatically imported. You can set this property to cache pool configuration information in a different location. This information can be imported later by using the zpool import -c command. For most ZFS configurations, this property is not used.
capacity (Number, N/A): Read-only value that identifies the percentage of pool space used. The property abbreviation is cap.
delegation (Boolean, on): Controls whether a nonprivileged user can be granted access permissions that are defined for a dataset. For more information, see Chapter 9, Oracle Solaris ZFS Delegated Administration.
failmode (String, wait): Controls the system behavior if a catastrophic pool failure occurs. This condition is typically a result of a loss of connectivity to the underlying storage device or devices or a failure of all devices within the pool. The behavior of such an event is determined by one of the following values:
  • wait – Blocks all I/O requests to the pool until device connectivity is restored, and the errors are cleared by using the zpool clear command. In this state, I/O operations to the pool are blocked, but read operations might succeed. A pool remains in the wait state until the device issue is resolved.
  • continue – Returns an EIO error to any new write I/O requests, but allows reads to any of the remaining healthy devices. Any write requests that have yet to be committed to disk are blocked. After the device is reconnected or replaced, the errors must be cleared with the zpool clear command.
  • panic – Prints a message to the console and generates a system crash dump.
free (String, N/A): Read-only value that identifies the number of blocks within the pool that are not allocated.
guid (String, N/A): Read-only property that identifies the unique identifier for the pool.
health (String, N/A): Read-only property that identifies the current health of the pool, as either ONLINE, DEGRADED, FAULTED, OFFLINE, REMOVED, or UNAVAIL.
listsnapshots (String, on): Controls whether snapshot information that is associated with this pool is displayed with the zfs list command. If this property is disabled, snapshot information can be displayed with the zfs list -t snapshot command.
size (Number, N/A): Read-only property that identifies the total size of the storage pool.
version (Number, N/A): Identifies the current on-disk version of the pool. The preferred method of updating pools is with the zpool upgrade command, although this property can be used when a specific version is needed for backwards compatibility. This property can be set to any number between 1 and the current version reported by the zpool upgrade -v command.

Displaying Information About ZFS Storage Pools

You can use the zpool list command to display basic information about pools.

Listing Information About All Storage Pools or a Specific Pool

With no arguments, the zpool list command displays the following information for all pools on the system:

# zpool list
NAME                    SIZE    ALLOC   FREE    CAP  HEALTH     ALTROOT
tank                   80.0G   22.3G   47.7G    28%  ONLINE     -
dozer                   1.2T    384G    816G    32%  ONLINE     -
This command output displays the following information:
NAME
The name of the pool.
SIZE
The total size of the pool, equal to the sum of the sizes of all top-level virtual devices.
ALLOC
The amount of physical space allocated to all datasets and internal metadata. Note that this amount differs from the amount of disk space as reported at the file system level.
For more information about determining available file system space, see ZFS Disk Space Accounting.
FREE
The amount of unallocated space in the pool.
CAP (CAPACITY)
The amount of disk space used, expressed as a percentage of the total disk space.
HEALTH
The current health status of the pool.
For more information about pool health, see Determining the Health Status of ZFS Storage Pools.
ALTROOT
The alternate root of the pool, if one exists.
For more information about alternate root pools, see Using ZFS Alternate Root Pools.
You can also gather statistics for a specific pool by specifying the pool name. For example:

# zpool list tank
NAME                    SIZE    ALLOC   FREE    CAP   HEALTH     ALTROOT
tank                   80.0G    22.3G   47.7G    28%  ONLINE     -

Listing Specific Storage Pool Statistics

Specific statistics can be requested by using the -o option. This option provides custom reports or a quick way to list pertinent information. For example, to list only the name and size of each pool, you use the following syntax:

# zpool list -o name,size
NAME                    SIZE
tank                   80.0G
dozer                   1.2T
The column names correspond to the properties that are listed in Listing Information About All Storage Pools or a Specific Pool.

Scripting ZFS Storage Pool Output

The default output for the zpool list command is designed for readability and is not easy to use as part of a shell script. To aid programmatic uses of the command, the -H option can be used to suppress the column headings and separate fields by tabs, rather than by spaces. For example, to request a list of all pool names on the system, you would use the following syntax:

# zpool list -Ho name
tank
dozer
Here is another example:

# zpool list -H -o name,size
tank   80.0G
dozer  1.2T

Displaying ZFS Storage Pool Command History

ZFS automatically logs successful zfs and zpool commands that modify pool state information. This information can be displayed by using the zpool history command.
For example, the following syntax displays the command output for the root pool:

# zpool history
History for 'rpool':
2010-05-11.10:18:54 zpool create -f -o failmode=continue -R /a -m legacy -o 
cachefile=/tmp/root/etc/zfs/zpool.cache rpool mirror c1t0d0s0 c1t1d0s0
2010-05-11.10:18:55 zfs set canmount=noauto rpool
2010-05-11.10:18:55 zfs set mountpoint=/rpool rpool
2010-05-11.10:18:56 zfs create -o mountpoint=legacy rpool/ROOT
2010-05-11.10:18:57 zfs create -b 8192 -V 2048m rpool/swap
2010-05-11.10:18:58 zfs create -b 131072 -V 1536m rpool/dump
2010-05-11.10:19:01 zfs create -o canmount=noauto rpool/ROOT/zfsBE
2010-05-11.10:19:02 zpool set bootfs=rpool/ROOT/zfsBE rpool
2010-05-11.10:19:02 zfs set mountpoint=/ rpool/ROOT/zfsBE
2010-05-11.10:19:03 zfs set canmount=on rpool
2010-05-11.10:19:04 zfs create -o mountpoint=/export rpool/export
2010-05-11.10:19:05 zfs create rpool/export/home
2010-05-11.11:11:10 zpool set bootfs=rpool rpool
2010-05-11.11:11:10 zpool set bootfs=rpool/ROOT/zfsBE rpool
You can use similar output on your system to identify the actual ZFS commands that were executed to troubleshoot an error condition.
The features of the history log are as follows:
  • The log cannot be disabled.
  • The log is saved persistently on disk, which means that the log is saved across system reboots.
  • The log is implemented as a ring buffer. The minimum size is 128 KB. The maximum size is 32 MB.
  • For smaller pools, the maximum size is capped at 1 percent of the pool size, where the size is determined at pool creation time.
  • The log requires no administration, which means that tuning the size of the log or changing the location of the log is unnecessary.
To identify the command history of a specific storage pool, use syntax similar to the following:

# zpool history tank
History for 'tank':
2010-05-13.14:13:15 zpool create tank mirror c1t2d0 c1t3d0
2010-05-13.14:21:19 zfs create tank/snaps
2010-05-14.08:10:29 zfs create tank/ws01
2010-05-14.08:10:54 zfs snapshot tank/ws01@now
2010-05-14.08:11:05 zfs clone tank/ws01@now tank/ws01bugfix
Use the -l option to display a long format that includes the user name, the host name, and the zone in which the operation was performed. For example:

# zpool history -l tank
History for 'tank':
2010-05-13.14:13:15 zpool create tank mirror c1t2d0 c1t3d0 [user root on neo]
2010-05-13.14:21:19 zfs create tank/snaps [user root on neo]
2010-05-14.08:10:29 zfs create tank/ws01 [user root on neo]
2010-05-14.08:10:54 zfs snapshot tank/ws01@now [user root on neo]
2010-05-14.08:11:05 zfs clone tank/ws01@now tank/ws01bugfix [user root on neo]
Use the -i option to display internal event information that can be used for diagnostic purposes. For example:

# zpool history -i tank
2010-05-13.14:13:15 zpool create -f tank mirror c1t2d0 c1t23d0
2010-05-13.14:13:45 [internal pool create txg:6] pool spa 19; zfs spa 19; zpl 4;...
2010-05-13.14:21:19 zfs create tank/snaps
2010-05-13.14:22:02 [internal replay_inc_sync txg:20451] dataset = 41
2010-05-13.14:25:25 [internal snapshot txg:20480] dataset = 52
2010-05-13.14:25:25 [internal destroy_begin_sync txg:20481] dataset = 41
2010-05-13.14:25:26 [internal destroy txg:20488] dataset = 41
2010-05-13.14:25:26 [internal reservation set txg:20488] 0 dataset = 0
2010-05-14.08:10:29 zfs create tank/ws01
2010-05-14.08:10:54 [internal snapshot txg:53992] dataset = 42
2010-05-14.08:10:54 zfs snapshot tank/ws01@now
2010-05-14.08:11:04 [internal create txg:53994] dataset = 58
2010-05-14.08:11:05 zfs clone tank/ws01@now tank/ws01bugfix

Viewing I/O Statistics for ZFS Storage Pools

To request I/O statistics for a pool or specific virtual devices, use the zpool iostat command. Similar to the iostat command, this command can display a static snapshot of all I/O activity, as well as updated statistics for every specified interval. The following statistics are reported:
alloc capacity
The amount of data currently stored in the pool or device. This amount differs from the amount of disk space available to actual file systems by a small margin due to internal implementation details.
For more information about the differences between pool space and dataset space, see ZFS Disk Space Accounting.
free capacity
The amount of disk space available in the pool or device. As with the used statistic, this amount differs from the amount of disk space available to datasets by a small margin.
read operations
The number of read I/O operations sent to the pool or device, including metadata requests.
write operations
The number of write I/O operations sent to the pool or device.
read bandwidth
The bandwidth of all read operations (including metadata), expressed as units per second.
write bandwidth
The bandwidth of all write operations, expressed as units per second.

Listing Pool-Wide I/O Statistics

With no options, the zpool iostat command displays the accumulated statistics since boot for all pools on the system. For example:

# zpool iostat
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
rpool       6.05G  61.9G      0      0    786    107
tank        31.3G  36.7G      4      1   296K  86.1K
----------  -----  -----  -----  -----  -----  -----
Because these statistics are cumulative since boot, bandwidth might appear low if the pool is relatively idle. You can request a more accurate view of current bandwidth usage by specifying an interval. For example:

# zpool iostat tank 2
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
tank        18.5G  49.5G      0    187      0  23.3M
tank        18.5G  49.5G      0    464      0  57.7M
tank        18.5G  49.5G      0    457      0  56.6M
tank        18.8G  49.2G      0    435      0  51.3M
In this example, the command displays usage statistics for the pool tank every two seconds until you type Control-C. Alternately, you can specify an additional count argument, which causes the command to terminate after the specified number of iterations. For example, zpool iostat 2 3 would print a summary every two seconds for three iterations, for a total of six seconds. If there is only a single pool, then the statistics are displayed on consecutive lines. If more than one pool exists, then an additional dashed line delineates each iteration to provide visual separation.

Listing Virtual Device I/O Statistics

In addition to pool-wide I/O statistics, the zpool iostat command can display I/O statistics for virtual devices. This command can be used to identify abnormally slow devices or to observe the distribution of I/O generated by ZFS. To request the complete virtual device layout as well as all I/O statistics, use the zpool iostat -v command. For example:

# zpool iostat -v
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
rpool       6.05G  61.9G      0      0    785    107
  mirror    6.05G  61.9G      0      0    785    107
    c1t0d0s0    -      -      0      0    578    109
    c1t1d0s0    -      -      0      0    595    109
----------  -----  -----  -----  -----  -----  -----
tank        36.5G  31.5G      4      1   295K   146K
  mirror    36.5G  31.5G    126     45  8.13M  4.01M
    c1t2d0      -      -      0      3   100K   386K
    c1t3d0      -      -      0      3   104K   386K
----------  -----  -----  -----  -----  -----  -----
Note two important points when viewing I/O statistics for virtual devices:
  • First, disk space usage statistics are only available for top-level virtual devices. The way in which disk space is allocated among mirror and RAID-Z virtual devices is particular to the implementation and not easily expressed as a single number.
  • Second, the numbers might not add up exactly as you would expect them to. In particular, operations across RAID-Z and mirrored devices will not be exactly equal. This difference is particularly noticeable immediately after a pool is created, as a significant amount of I/O is done directly to the disks as part of pool creation, which is not accounted for at the mirror level. Over time, these numbers gradually equalize. However, broken, unresponsive, or offline devices can affect this symmetry as well.
You can use the same set of options (interval and count) when examining virtual device statistics.

Determining the Health Status of ZFS Storage Pools

ZFS provides an integrated method of examining pool and device health. The health of a pool is determined from the state of all its devices. This state information is displayed by using the zpool status command. In addition, potential pool and device failures are reported by fmd, displayed on the system console, and logged in the /var/adm/messages file.
This section describes how to determine pool and device health. This chapter does not document how to repair or recover from unhealthy pools. For more information about troubleshooting and data recovery, see Chapter 11, Oracle Solaris ZFS Troubleshooting and Pool Recovery.
Each device can fall into one of the following states:
ONLINE
The device or virtual device is in normal working order. Although some transient errors might still occur, the device is otherwise in working order.
DEGRADED
The virtual device has experienced a failure but can still function. This state is most common when a mirror or RAID-Z device has lost one or more constituent devices. The fault tolerance of the pool might be compromised, as a subsequent fault in another device might be unrecoverable.
FAULTED
The device or virtual device is completely inaccessible. This status typically indicates total failure of the device, such that ZFS is incapable of sending data to it or receiving data from it. If a top-level virtual device is in this state, then the pool is completely inaccessible.
OFFLINE
The device has been explicitly taken offline by the administrator.
UNAVAIL
The device or virtual device cannot be opened. In some cases, pools with UNAVAIL devices appear in DEGRADED mode. If a top-level virtual device is UNAVAIL, then nothing in the pool can be accessed.
REMOVED
The device was physically removed while the system was running. Device removal detection is hardware-dependent and might not be supported on all platforms.
The health of a pool is determined from the health of all its top-level virtual devices. If all virtual devices are ONLINE, then the pool is also ONLINE. If any one of the virtual devices is DEGRADED or UNAVAIL, then the pool is also DEGRADED. If a top-level virtual device is FAULTED or OFFLINE, then the pool is also FAULTED. A pool in the FAULTED state is completely inaccessible. No data can be recovered until the necessary devices are attached or repaired. A pool in the DEGRADED state continues to run, but you might not achieve the same level of data redundancy or data throughput as you would if the pool were online.

Basic Storage Pool Health Status

You can quickly review pool health status by using the zpool status command as follows:

# zpool status -x
all pools are healthy
Specific pools can be examined by specifying a pool name in the command syntax. Any pool that is not in the ONLINE state should be investigated for potential problems, as described in the next section.

Detailed Health Status

You can request a more detailed health summary status by using the -v option. For example:

# zpool status -v tank
  pool: tank
 state: DEGRADED
status: One or more devices could not be opened.  Sufficient replicas exist for
        the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-2Q
 scrub: scrub completed after 0h0m with 0 errors on Wed Jan 20 15:13:59 2010
config:

        NAME        STATE     READ WRITE CKSUM
        tank        DEGRADED     0     0     0
          mirror-0  DEGRADED     0     0     0
            c1t0d0  ONLINE       0     0     0
            c1t1d0  UNAVAIL      0     0     0  cannot open

errors: No known data errors
This output displays a complete description of why the pool is in its current state, including a readable description of the problem and a link to a knowledge article for more information. Each knowledge article provides up-to-date information about the best way to recover from your current problem. Using the detailed configuration information, you can determine which device is damaged and how to repair the pool.
In the preceding example, the faulted device should be replaced. After the device is replaced, use the zpool online command to bring the device online. For example:

# zpool online tank c1t0d0
Bringing device c1t0d0 online
# zpool status -x
all pools are healthy
If the autoreplace property is on, you might not have to online the replaced device.
If a pool has an offline device, the command output identifies the problem pool. For example:

# zpool status -x
  pool: tank
 state: DEGRADED
status: One or more devices has been taken offline by the administrator.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Online the device using 'zpool online' or replace the device with
        'zpool replace'.
 scrub: resilver completed after 0h0m with 0 errors on Wed Jan 20 15:15:09 2010
config:

        NAME        STATE     READ WRITE CKSUM
        tank        DEGRADED     0     0     0
          mirror-0  DEGRADED     0     0     0
            c1t0d0  ONLINE       0     0     0
            c1t1d0  OFFLINE      0     0     0  48K resilvered

errors: No known data errors
The READ and WRITE columns provide a count of I/O errors that occurred on the device, while the CKSUM column provides a count of uncorrectable checksum errors that occurred on the device. Both error counts indicate a potential device failure, and some corrective action is needed. If non-zero errors are reported for a top-level virtual device, portions of your data might have become inaccessible.
The errors: field identifies any known data errors.
In the preceding example output, the offline device is not causing data errors.
For more information about diagnosing and repairing faulted pools and data, see Chapter 11, Oracle Solaris ZFS Troubleshooting and Pool Recovery.

Preparing for ZFS Storage Pool Migration

Storage pools should be explicitly exported to indicate that they are ready to be migrated. This operation flushes any unwritten data to disk, writes data to the disk indicating that the export was done, and removes all information about the pool from the system.
If you do not explicitly export the pool, but instead remove the disks manually, you can still import the resulting pool on another system. However, you might lose the last few seconds of data transactions, and the pool will appear faulted on the original system because the devices are no longer present. By default, the destination system cannot import a pool that has not been explicitly exported. This condition is necessary to prevent you from accidentally importing an active pool that consists of network-attached storage that is still in use on another system.

Exporting a ZFS Storage Pool

To export a pool, use the zpool export command. For example:

# zpool export tank
The command attempts to unmount any mounted file systems within the pool before continuing. If any of the file systems fail to unmount, you can forcefully unmount them by using the -f option. For example:

# zpool export tank
cannot unmount '/export/home/eschrock': Device busy
# zpool export -f tank
After this command is executed, the pool tank is no longer visible on the system.
If devices are unavailable at the time of export, the devices cannot be identified as cleanly exported. If one of these devices is later attached to a system without any of the working devices, it appears as “potentially active.”
If ZFS volumes are in use in the pool, the pool cannot be exported, even with the -f option. To export a pool with a ZFS volume, first ensure that all consumers of the volume are no longer active.
For more information about ZFS volumes, see ZFS Volumes.
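As a quick way to see which volumes a pool contains before exporting it, you can list them recursively (a minimal sketch, assuming the tank pool from the example above):

# zfs list -r -t volume tank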

Determining Available Storage Pools to Import

After the pool has been removed from the system (either through an explicit export or by forcefully removing the devices), you can attach the devices to the target system. ZFS can handle some situations in which only some of the devices are available, but a successful pool migration depends on the overall health of the devices. In addition, the devices do not necessarily have to be attached under the same device name. ZFS detects any moved or renamed devices, and adjusts the configuration appropriately. To discover available pools, run the zpool import command with no options. For example:

# zpool import
 pool: tank
    id: 11809215114195894163
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

        tank        ONLINE
          mirror-0  ONLINE
            c1t0d0  ONLINE
            c1t1d0  ONLINE
In this example, the pool tank is available to be imported on the target system. Each pool is identified by a name as well as a unique numeric identifier. If multiple pools with the same name are available to import, you can use the numeric identifier to distinguish between them.
Similar to the zpool status command output, the zpool import output includes a link to a knowledge article with the most up-to-date repair procedures for whatever problem is preventing a pool from being imported. In that case, you can still force the pool to be imported. However, importing a pool that is currently in use by another system over a storage network can result in data corruption and panics as both systems attempt to write to the same storage.
If some devices in the pool are not available but sufficient redundant data exists to provide a usable pool, the pool appears in the DEGRADED state. For example:

# zpool import
  pool: tank
    id: 11809215114195894163
 state: DEGRADED
status: One or more devices are missing from the system.
action: The pool can be imported despite missing or damaged devices.  The
        fault tolerance of the pool may be compromised if imported.
   see: http://www.sun.com/msg/ZFS-8000-2Q
config:

        NAME        STATE     READ WRITE CKSUM
        tank        DEGRADED     0     0     0
          mirror-0  DEGRADED     0     0     0
            c1t0d0  UNAVAIL      0     0     0  cannot open
            c1t3d0  ONLINE       0     0     0
In this example, the first disk is damaged or missing, though you can still import the pool because the mirrored data is still accessible. If too many faulted or missing devices are present, the pool cannot be imported. For example:

# zpool import
  pool: dozer
    id: 9784486589352144634
 state: FAULTED
action: The pool cannot be imported. Attach the missing
        devices and try again.
   see: http://www.sun.com/msg/ZFS-8000-6X
config:
        raidz1-0       FAULTED
          c1t0d0       ONLINE
          c1t1d0       FAULTED
          c1t2d0       ONLINE
          c1t3d0       FAULTED
In this example, two disks are missing from a RAID-Z virtual device, which means that sufficient redundant data is not available to reconstruct the pool. In some cases, not enough devices are present to determine the complete configuration; ZFS then cannot determine which other devices were part of the pool, although it reports as much information as possible about the situation. For example:

# zpool import
  pool: dozer
    id: 9784486589352144634
 state: FAULTED
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing
        devices and try again.
   see: http://www.sun.com/msg/ZFS-8000-6X
config:
        dozer          FAULTED   missing device
          raidz1-0     ONLINE
            c1t0d0     ONLINE
            c1t1d0     ONLINE
            c1t2d0     ONLINE
            c1t3d0     ONLINE
Additional devices are known to be part of this pool, though their
exact configuration cannot be determined.

Importing ZFS Storage Pools From Alternate Directories

By default, the zpool import command only searches devices within the /dev/dsk directory. If devices exist in another directory, or you are using pools backed by files, you must use the -d option to search alternate directories. For example:

# zpool create dozer mirror /file/a /file/b
# zpool export dozer
# zpool import -d /file
  pool: dozer
    id: 7318163511366751416
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

        dozer        ONLINE
          mirror-0   ONLINE
            /file/a  ONLINE
            /file/b  ONLINE
# zpool import -d /file dozer
If devices exist in multiple directories, you can specify multiple -d options.
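For example, a sketch that searches both the /file directory used above and the default /dev/dsk directory in a single import:

# zpool import -d /file -d /dev/dsk dozer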

Importing ZFS Storage Pools

After a pool has been identified for import, you can import it by specifying the name of the pool or its numeric identifier as an argument to the zpool import command. For example:

# zpool import tank
If multiple available pools have the same name, you must specify which pool to import by using the numeric identifier. For example:

# zpool import
  pool: dozer
    id: 2704475622193776801
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

        dozer       ONLINE
          c1t9d0    ONLINE

  pool: dozer
    id: 6223921996155991199
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

        dozer       ONLINE
          c1t8d0    ONLINE
# zpool import dozer
cannot import 'dozer': more than one matching pool
import by numeric ID instead
# zpool import 6223921996155991199
If the pool name conflicts with an existing pool name, you can import the pool under a different name. For example:

# zpool import dozer zeepool
This command imports the exported pool dozer using the new name zeepool.
If the pool was not cleanly exported, ZFS requires the -f flag to prevent users from accidentally importing a pool that is still in use on another system. For example:

# zpool import dozer
cannot import 'dozer': pool may be in use on another system
use '-f' to import anyway
# zpool import -f dozer

Note – Do not attempt to import a pool that is active on one system to another system. ZFS is not a native cluster, distributed, or parallel file system and cannot provide concurrent access from multiple, different hosts.

Pools can also be imported under an alternate root by using the -R option. For more information on alternate root pools, see Using ZFS Alternate Root Pools.
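A minimal sketch, assuming the dozer pool from the earlier examples and /mnt as a temporary alternate root; the pool's file systems are then mounted under /mnt rather than at their normal mount points:

# zpool import -R /mnt dozer
# zpool get altroot dozer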

Recovering Destroyed ZFS Storage Pools

You can use the zpool import -D command to recover a storage pool that has been destroyed. For example:

# zpool destroy tank
# zpool import -D
  pool: tank
    id: 5154272182900538157
 state: ONLINE (DESTROYED)
action: The pool can be imported using its name or numeric identifier.
config:

        tank        ONLINE
          mirror-0  ONLINE
            c1t0d0  ONLINE
            c1t1d0  ONLINE
In this zpool import output, you can identify the tank pool as the destroyed pool because of the following state information:

state: ONLINE (DESTROYED)
To recover the destroyed pool, run the zpool import -D command again, specifying the pool to be recovered. For example:

# zpool import -D tank
# zpool status tank
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c1t0d0  ONLINE       0     0     0
            c1t1d0  ONLINE       0     0     0

errors: No known data errors
If one of the devices in the destroyed pool is faulted or unavailable, you might be able to recover the destroyed pool anyway by including the -f option. In this scenario, you would import the degraded pool and then attempt to fix the device failure. For example:

# zpool destroy dozer
# zpool import -D
  pool: dozer
    id: 13643595538644303788
 state: DEGRADED (DESTROYED)
status: One or more devices could not be opened.  Sufficient replicas exist for
        the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-2Q
config:

        NAME         STATE     READ WRITE CKSUM
        dozer        DEGRADED     0     0     0
          raidz2-0   DEGRADED     0     0     0
            c2t8d0   ONLINE       0     0     0
            c2t9d0   ONLINE       0     0     0
            c2t10d0  ONLINE       0     0     0
            c2t11d0  UNAVAIL      0    35     1  cannot open
            c2t12d0  ONLINE       0     0     0

errors: No known data errors
# zpool import -Df dozer
# zpool status -x
  pool: dozer
 state: DEGRADED
status: One or more devices could not be opened.  Sufficient replicas exist for
        the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-2Q
 scrub: scrub completed after 0h0m with 0 errors on Thu Jan 21 15:38:48 2010
config:

        NAME         STATE     READ WRITE CKSUM
        dozer        DEGRADED     0     0     0
          raidz2-0   DEGRADED     0     0     0
            c2t8d0   ONLINE       0     0     0
            c2t9d0   ONLINE       0     0     0
            c2t10d0  ONLINE       0     0     0
            c2t11d0  UNAVAIL      0    37     0  cannot open
            c2t12d0  ONLINE       0     0     0

errors: No known data errors
# zpool online dozer c2t11d0
Bringing device c2t11d0 online
# zpool status -x
all pools are healthy