Friday 3 June 2011

XDISKUSAGE

Xdiskusage:                 It is a graphical tool for displaying disk usage on Linux.

Package:                 xdiskusage-*.tar.gz

Dependency packages:                 fltk-*.source.gz

Usage:      First, install the fltk-*.source.gz package as follows:
                 ./configure
                 make
                 make install

      Second, unpack and build the xdiskusage-*.tar.gz package (see the build sketch below),
                 go to /usr/bin/
                 ./xdiskusage

xdiskusage is launched from the command line and displays disk usage
in a graphical manner on the Linux platform.
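
The xdiskusage tarball itself is built with the same configure/make steps as fltk; a minimal sketch of the usual source build (version numbers and the install prefix may differ on your system):

                 tar -zxvf xdiskusage-*.tar.gz
                 cd xdiskusage-*/
                 ./configure            # picks up the FLTK libraries installed above
                 make
                 make install           # installs the xdiskusage binary (often under /usr/local/bin or /usr/bin)
                 xdiskusage &           # launch the graphical disk-usage browser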

Turn Your Computer ON Automatically

          You need to enter the BIOS setup using the DEL, F12 or F2 key.
Step 1 - Next, take a look at “Power Management Setup”.
Step 2 - Make sure you have the settings for “Pwron / Resume by Alarm”.
         You will need to enable this feature if it is not enabled.
Step 3 - From there you can set the time you want your computer to power on.
   ➢ Once you pick a date/time or time of day,
   ➢ simply save your settings
   ➢ and exit to reboot your computer.

Install and Configure of Sun Grid Engine

Introduction:      The SUN Grid Engine is a batch system.
      Users submit jobs which are placed in queues and the jobs are
      then executed, depending on the system load, the opening hours of the queues and the job priority.


Queues:
The system has several queues defined but for normal usage only
two are open, one for MPI-jobs and one for multi threaded/serial
jobs.

Job suspension:
If a job is still running and the queue closes, the system will suspend
the job until the queue opens again.


Deploying Sun Grid Engine (SGE):
Read the SGE binary license at:
http://gridengine.sunsource.net/project/gridengine/clickthru60.html
It is important that you download these versions of SGE or later
versions. Earlier versions will not work with the Globus GRAM WS.
The two tarballs that you need to download are these files or more
recent versions with similar naming conventions:
               sge-6.0u7_1-bin-lx24-x86.tar.gz (or latest)
               sge-6.0u7-common.tar.gz (or latest)

Unpacking the SGE distribution:

After downloading the tarballs create a directory that will serve as the SGE directory. You can do this as the root user:
[server node]# mkdir -p /opt/sge-root
        Change into that directory:
[server node]# cd /opt/sge-root/

Now run the following commands as user root to unpack the tarballs into the directory you created. Change the path to the tarballs as necessary:
[server node]# gzip -dc /root/sge-6.0u7-common.tar.gz | tar xvpf -

or             tar -zxvf sge-6.0u7-common.tar.gz

[server node]# gzip -dc /root/sge-6.0u7_1-bin-lx24-x86.tar.gz | tar xvpf -

or             tar -zxvf sge-6.0u7_1-bin-lx24-x86.tar.gz

Next you need to set the environment variable SGE_ROOT to point to the directory you created and into which you unpacked the tarballs:
[server node]# export SGE_ROOT=/opt/sge-root
            Installing and Configuring SGE:

As the root user change into the directory $SGE_ROOT and run the following command:
[server node]# ./util/setfileperm.sh $SGE_ROOT
You will see output similar to the following:
WARNING WARNING WARNING
-----------------------------
We will set the file ownership and permission to
     UserID:  0
     GroupID: 0
     In directory: /opt/sge-root
We will also install the following binaries as SUID-root:
     $SGE_ROOT/utilbin/<arch>/rlogin
     $SGE_ROOT/utilbin/<arch>/rsh
     $SGE_ROOT/utilbin/<arch>/testsuidroot
     $SGE_ROOT/bin/<arch>/sgepasswd
Do you want to set the file permissions (yes/no) [NO] >>
Enter 'yes' to set the file permissions and the command will
complete.
Next you will begin the actual installation of SGE by running the
command './install_qmaster'. Running this command will lead you
through a series of command-line menus and prompts. Below we
show in detail each step that is necessary along with the output you
should see. The entries you should type and the actions you should
take are indicated after each prompt.
[server node]# ./install_qmaster
Welcome to the Grid Engine installation
Grid Engine qmaster host installation
Before you continue with the installation please read these hints:
    • Your terminal window should have a size of at least 80x24
      characters
    • The INTR character is often bound to the key Ctrl-C. The term
      >Ctrl-C< is used during the installation if you have the
      possibility to abort the installation
The qmaster installation procedure will take approximately 5-10
minutes.
Hit <RETURN>
Choosing Grid Engine admin user account
You may install Grid Engine that all files are created with the user id
of an unprivileged user.
This will make it possible to install and run Grid Engine in
directories where user >root< has no permissions to create and write
files and directories.
    • Grid Engine still has to be started by user >root<
    • this directory should be owned by the Grid Engine administrator
Do you want to install Grid Engine under an user id other than
>root< (y/n) [y] >>
n
Checking $SGE_ROOT directory
The Grid Engine root directory is:
$SGE_ROOT = /opt/sge-root
If this directory is not correct (e.g. it may contain an automounter
prefix) enter the correct path to this directory or hit <RETURN> to
use default [/opt/sge-root] >>
Hit <RETURN>
ypcat: can't get local yp domain: Local domain name not set
Grid Engine TCP/IP service >sge_qmaster<
There is no service >sge_qmaster< available in your >/etc/services<
file or in your NIS/NIS+ database.
You may add this service now to your services database or choose a
port number. It is recommended to add the service now. If you are
using NIS/NIS+ you should add the service at your NIS/NIS+ server
and not to the local >/etc/services< file.
Please add an entry in the form
sge_qmaster <port_number>/tcp
to your services database and make sure to use an unused port
number.
Please add the service now or press <RETURN> to go to entering a
port number >>
Note: In another terminal edit /etc/services and add the line
sge_qmaster 30000/tcp
When completed enter <RETURN>
Grid Engine TCP/IP service >sge_execd<
There is no service >sge_execd< available in your >/etc/services<
file or in your NIS/NIS+ database.
You may add this service now to your services database or choose a
port number. It is recommended to add the service now. If you are
using NIS/NIS+ you should add the service at your NIS/NIS+ server
and not to the local >/etc/services< file.
Please add an entry in the form
sge_execd <port_number>/tcp
to your services database and make sure to use an unused port
number.
Make sure to use a different port number for the Executionhost as on
the qmaster machine infotext: too few arguments
Please add the service now or press <RETURN> to go to entering a
port number >>
In another terminal edit /etc/services and add the line
sge_execd 30001/tcp
When completed enter <RETURN>
Grid Engine cells
Grid Engine supports multiple cells.
If you are not planning to run multiple Grid Engine clusters or if you
don't know yet what is a Grid Engine cell it is safe to keep the
default cell name default
If you want to install multiple cells you can enter a cell name now.
The environment variable
$SGE_CELL=<your_cell_name>
will be set for all further Grid Engine commands.
Enter cell name [default] >>
Hit <RETURN> to accept default
Grid Engine qmaster spool directory
The qmaster spool directory is the place where the qmaster daemon
stores the configuration and the state of the queuing system.
User >root< on this host must have read/write access to the qmaster
spool directory.
If you will install shadow master hosts or if you want to be able to
start the qmaster daemon on other hosts (see the corresponding
section in the Grid Engine Installation and Administration Manual
for details) the account on the shadow master hosts also needs
read/write access to this directory.
The following directory
[/opt/sge-root/default/spool/qmaster]
will be used as qmaster spool directory by default!
Do you want to select another qmaster spool directory (y/n) [n] >>
n
Windows Execution Host Support
Are you going to install Windows Execution Hosts? (y/n) [n] >>
n
Verifying and setting file permissions
Did you install this version with >pkgadd< or did you already verify
and set the file permissions of your distribution (y/n) [y] >>
y
Select default Grid Engine hostname resolving method
Are all hosts of your cluster in one DNS domain? If this is the case
the hostnames
>hostA< and >hostA.foo.com<
would be treated as equal, because the DNS domain name
>foo.com< is ignored when comparing hostnames.
Are all hosts of your cluster in a single DNS domain (y/n) [y] >>
y
Making directories
creating directory: default
creating directory: default/common
creating directory: /opt/sge-root/default/spool/qmaster
creating directory: /opt/sge-root/default/spool/qmaster/job_scripts
Hit <RETURN> to continue >>
hit <RETURN>
Setup spooling
Your SGE binaries are compiled to link the spooling libraries during
runtime (dynamically). So you can choose between Berkeley DB
spooling and Classic spooling method.
Please choose a spooling method (berkeleydb|classic) [berkeleydb]
>>
enter <RETURN> to accept default
Hit <RETURN>
The Berkeley DB spooling method provides two configurations!
Local spooling:
The Berkeley DB spools into a local directory on this host (qmaster
host)
This setup is faster, but you can't setup a shadow master host
Berkeley DB Spooling Server:
If you want to setup a shadow master host, you need to use Berkeley
DB Spooling Server!
In this case you have to choose a host with a configured RPC
service. The qmaster host connects via RPC to the Berkeley DB.
This setup is more failsafe, but results in a clear potential security
hole. RPC communication (as used by Berkeley DB) can be easily
compromised.
Please only use this alternative if your site is secure or if you are not
concerned about security.
Check the installation guide for further advice on how to achieve
failsafety without compromising security.
Do you want to use a Berkeley DB Spooling Server? (y/n) [n] >>
n
Berkeley Database spooling parameters
Please enter the Database Directory now, even if you want to spool
locally, it is necessary to enter this Database Directory.
Default: [/opt/sge-root/default/spool/spooldb] >>
Hit <RETURN> to accept the default
Grid Engine group id range
When jobs are started under the control of Grid Engine an additional
group id is set on platforms which do not support jobs. This is done
to provide maximum control for Grid Engine jobs.
This additional UNIX group id range must be unused group id's in
your system. Each job will be assigned a unique id during the time it
is running. Therefore you need to provide a range of id's which will
be assigned dynamically for jobs.
The range must be big enough to provide enough numbers for the
maximum number of Grid Engine jobs running at a single moment
on a single host. E.g. a range like >20000-20100< means, that Grid
Engine will use the group ids from 20000-20100 and provides a
range for 100 Grid Engine jobs at the same time on a single host.
You can change at any time the group id range in your cluster
configuration.
Please enter a range >>
20000-20500
Grid Engine cluster configuration
Please give the basic configuration parameters of your Grid Engine
installation:
<execd_spool_dir>
The pathname of the spool directory of the execution hosts. User
>root< must have the right to create this directory and to write into
it.
Default: [/opt/sge-root/default/spool] >>
Hit <RETURN> to accept the default
Grid Engine cluster configuration (continued)
<administrator_mail>
The email address of the administrator to whom problem reports are
sent.
It is recommended to configure this parameter. You may use
>none< if you do not wish to receive administrator mail.
Please enter an email address in the form >user@foo.com<.
Default: [none] >>
Hit <RETURN> to accept default
The following parameters for the cluster configuration were
configured:
execd_spool_dir /opt/sge-root/default/spool
administrator_mail none
Do you want to change the configuration parameters (y/n) [n] >>
n
Creating local configuration
Creating >act_qmaster< file
Adding default complex attributes
Reading in complex attributes.
Adding default parallel environments (PE)
Reading in parallel environments:
PE "make.sge_pqs_api".
Adding SGE default usersets
Reading in usersets:
Userset "defaultdepartment".
Userset "deadlineusers".
Adding >sge_aliases< path aliases file
Adding >qtask< qtcsh sample default request file
Adding >sge_request< default submit options file
Creating >sgemaster< script
Creating >sgeexecd< script
Creating settings files for >.profile/.cshrc<
Hit <RETURN> to continue >>
Hit <RETURN>
qmaster/scheduler startup script
We can install the startup script that will
start qmaster/scheduler at machine boot (y/n) [y] >>
n
Grid Engine qmaster and scheduler startup
Starting qmaster and scheduler daemon. Please wait ...
starting sge_qmaster
starting sge_schedd
Hit <RETURN> to continue >>
Hit <RETURN>
Adding Grid Engine hosts
Please now add the list of hosts, where you will later install your
execution daemons. These hosts will be also added as valid submit
hosts.
Please enter a blank separated list of your execution hosts. You may
press <RETURN> if the line is getting too long. Once you are
finished simply press <RETURN> without entering a name.
You also may prepare a file with the hostnames of the machines
where you plan to install Grid Engine. This may be convenient if
you are installing Grid Engine on many hosts.
Do you want to use a file which contains the list of hosts (y/n) [n] >>
n
Adding admin and submit hosts
Please enter a blank separated list of hosts.
Stop by entering <RETURN>. You may repeat this step until you are
entering an empty list. You will see messages from Grid Engine
when the hosts are added.
Host(s):
Hit <RETURN> twice
If you want to use a shadow host, it is recommended to add this host
to the list of administrative hosts.
If you are not sure, it is also possible to add or remove hosts after the
installation with <qconf -ah hostname> for adding and <qconf -dh
hostname> for removing this host
Attention: This is not the shadow host installation procedure. You
still have to install the shadow host separately.
Do you want to add your shadow host(s) now? (y/n) [y] >>
n
Creating the default <all.q> queue and <allhosts> hostgroup
root@nodeC.ps.univa.com added "@allhosts" to host group list
root@nodeC.ps.univa.com added "all.q" to cluster queue list
Hit <RETURN> to continue >>
Hit <RETURN>

Scheduler Tuning
The details on the different options are described in the manual.

Configurations
    1. Normal
       Fixed interval scheduling, report scheduling information, actual
       + assumed load
    2. High
       Fixed interval scheduling, report limited scheduling
       information, actual load
    3. Max
       Immediate scheduling, report no scheduling information, actual
       load

Enter the number of your preferred configuration and hit
<RETURN>!
Default configuration is [1] >>
1

Using Grid Engine
You should now enter the command:
source /opt/sge-root/default/common/settings.csh
if you are a csh/tcsh user or
# . /opt/sge-root/default/common/settings.sh
if you are a sh/ksh user.
This will set or expand the following environment variables:
    • $SGE_ROOT (always necessary)
    • $SGE_CELL (if you are using a cell other than >default<)
    • $SGE_QMASTER_PORT (if you haven't added the service
      >sge_qmaster<)
    • $SGE_EXECD_PORT (if you haven't added the service
      >sge_execd<)
    • $PATH/$path (to find the Grid Engine binaries)
    • $MANPATH (to access the manual pages)
Hit <RETURN> to see where Grid Engine logs messages >>
Hit <RETURN>
Grid Engine messages
Grid Engine messages can be found at:
/tmp/qmaster_messages (during qmaster startup)
/tmp/execd_messages (during execution daemon startup)
After startup the daemons log their messages in their spool
directories.
Qmaster: /opt/sge-root/default/spool/qmaster/messages
Exec daemon: <execd_spool_dir>/<hostname>/messages
Grid Engine startup scripts
Grid Engine startup scripts can be found at:
/opt/sge-root/default/common/sgemaster (qmaster and scheduler)
/opt/sge-root/default/common/sgeexecd (execd)
Do you want to see previous screen about using Grid Engine again
(y/n) [n] >>
n
Your Grid Engine qmaster installation is now completed
Please now login to all hosts where you want to run an execution
daemon and start the execution host installation procedure.
If you want to run an execution daemon on this host, please do not
forget to make the execution host installation in this host as well.
All execution hosts must be administrative hosts during the
installation. All hosts which you added to the list of administrative
hosts during this installation procedure can now be installed.
You may verify your administrative hosts with the command
# qconf -sh
and you may add new administrative hosts with the command
# qconf -ah <hostname>
Please hit <RETURN> >>
Hit <RETURN>
This completes the first part of the SGE installation and
configuration. Before continuing you need to set up your
environment by doing the following:
[server node]#
source /opt/sge-root/default/common/settings.sh
You can verify that nodeC is configured properly to be the SGE
administrative host by running
[server node sge-root]# qconf -sh
nodeC.ps.univa.com
Next nodeC needs to be configured as an execution host. Run the
following command and again enter the indicated values for each
menu choice:
[server node sge-root]# /opt/sge-root/install_execd
Welcome to the Grid Engine execution host installation
If you haven't installed the Grid Engine qmaster host yet, you must
execute this step (with >install_qmaster<) prior the execution host
installation.
For a successful installation you need a running Grid Engine qmaster.
It is also necessary that this host is an administrative host.

You can verify your current list of administrative hosts with the command:
# qconf -sh
You can add an administrative host with the command:
# qconf -ah <hostname>
The execution host installation will take approximately 5 minutes.
Hit <RETURN> to continue >>
Hit <RETURN>
Checking $SGE_ROOT directory
The Grid Engine root directory is:
$SGE_ROOT = /opt/sge-root
If this directory is not correct (e.g. it may contain an automounter
prefix) enter the correct path to this directory or hit <RETURN> to
use default [/opt/sge-root] >>
Hit <RETURN>
Grid Engine cells
Please enter cell name which you used for the qmaster installation or
press <RETURN> to use [default] >>
Hit <RETURN> for default
Checking hostname resolving
This hostname is known at qmaster as an administrative host.
Hit <RETURN> to continue >>
Hit <RETURN>
Local execd spool directory configuration
During the qmaster installation you've already entered a global execd
spool directory. This is used, if no local spool directory is
configured.
Now you can enter a local spool directory for this host.
Do you want to configure a local spool directory for this host (y/n)
[n] >>
n
Creating local configuration
root@nodeC.ps.univa.com modified "nodeC.ps.univa.com" in
configuration list
Local configuration for host >nodeC.ps.univa.com< created.
Hit <RETURN> to continue >>
Hit <RETURN>
execd startup script
We can install the startup script that will start execd at machine boot
(y/n) [y] >>
n
Grid Engine execution daemon startup
Starting execution daemon. Please wait ...
starting sge_execd
Hit <RETURN> to continue >>
Hit <RETURN>
Adding a queue for this host
We can now add a queue instance for this host:
    • it is added to the >allhosts< hostgroup
    • the queue provides 2 slot(s) for jobs in all queues referencing
      the >allhosts< hostgroup
You do not need to add this host now, but before running jobs on this
host it must be added to at least one queue.
Do you want to add a default queue instance for this host (y/n) [y]
>>
y
root@nodeC.ps.univa.com modified "@allhosts" in host group list
root@nodeC.ps.univa.com modified "all.q" in cluster queue list
Using Grid Engine
You should now enter the command:
source /opt/sge-root/default/common/settings.csh
if you are a csh/tcsh user or
# . /opt/sge-root/default/common/settings.sh
if you are a sh/ksh user.
This will set or expand the following environment variables:
   • $SGE_ROOT (always necessary)
   • $SGE_CELL (if you are using a cell other than >default<)
   • $SGE_QMASTER_PORT (if you haven't added the service
     >sge_qmaster<)
   • $SGE_EXECD_PORT (if you haven't added the service
     >sge_execd<)
   • $PATH/$path (to find the Grid Engine binaries)
   • $MANPATH (to access the manual pages)
Hit <RETURN> to see where Grid Engine logs messages >>
Hit <RETURN>
Grid Engine messages
Grid Engine messages can be found at:
/tmp/qmaster_messages (during qmaster startup)
/tmp/execd_messages (during execution daemon startup)
After startup the daemons log their messages in their spool
directories.
Qmaster: /opt/sge-root/default/spool/qmaster/messages
Exec daemon: <execd_spool_dir>/<hostname>/messages
Grid Engine startup scripts
Grid Engine startup scripts can be found at:
/opt/sge-root/default/common/sgemaster (qmaster and scheduler)
/opt/sge-root/default/common/sgeexecd (execd)
Do you want to see previous screen about using Grid Engine again
(y/n) [n] >>
n

Note:
This completes the installation and configuration of SGE.

Testing SGE:
As the root user you should make sure that the SGE daemons are running:

[server node sge-root]# ps auwwwx|grep sge
root 9159 0.0 0.3 106340 3800 ? Sl 10:43 0:00 /opt/sge-root/bin/lx24-x86/sge_qmaster
root 9179 0.0 0.2  48424 2400 ? Sl 10:43 0:00 /opt/sge-root/bin/lx24-x86/sge_schedd
root 9610 0.0 0.1   5176 1820 ? S  10:53 0:00 /opt/sge-root/bin/lx24-x86/sge_execd
If the SGE daemons are not running simply run the following three
commands as root:
/opt/sge-root/bin/lx24-x86/sge_qmaster
/opt/sge-root/bin/lx24-x86/sge_schedd
/opt/sge-root/bin/lx24-x86/sge_execd
Also as the root user you can check the state of the compute node
and the queue:
[server node sge-root]# /opt/sge-root/bin/lx24-x86/qstat -f
queuename                    qtype used/tot. load_avg arch       states
all.q@nodeC.ps.univa.com BIP 0/2             0.00       lx24-x86
Before submitting a job you need to add nodeC as a node from
which submitting jobs is allowed. You can do that using the 'qconf'
command as shown below:
[server node sge-root]# /opt/sge-root/bin/lx24-x86/qconf -as nodec
nodeC.ps.univa.com added to submit host list
Next you can submit a simple test job as shown:
[server node sge-root]# /opt/sge-root/bin/lx24-x86/qsub /opt/sge-root/examples/jobs/simple.sh
Your job 1 ("simple.sh") has been submitted.
You can query for the state of the job using 'qstat' as shown:
[server node sge-root]# /opt/sge-root/bin/lx24-x86/qstat
job-ID  prior    name       user  state  submit/start at      queue                      slots  ja-task-ID
1       0.55500  simple.sh  root  r      02/13/2006 11:07:36  all.q@nodeC.ps.univa.com   1
[server node sge-root]# /opt/sge-root/bin/lx24-x86/qstat -f
queuename                     qtype used/tot. load_avg arch     states
all.q@nodeC.ps.univa.com BIP 0/2              0.00     lx24-x86

Next use the "Jane User" account to test and make sure that a non-root user can submit and run jobs:

[server node sge-root]# su - jane
Before submitting a job the environment for 'jane' needs to be set up:
[server node ~]$ export SGE_ROOT=/opt/sge-root
[server node ~]$ source /opt/sge-root/default/common/settings.sh
User jane can check the state of SGE:
[server node ~]$ /opt/sge-root/bin/lx24-x86/qstat -f
queuename                     qtype used/tot. load_avg arch     states
all.q@nodeC.ps.univa.com BIP 0/2              0.00     lx24-x86
User jane can submit a job as shown:
[server node ~]$ /opt/sge-root/bin/lx24-x86/qsub /opt/sge-root/examples/jobs/simple.sh
Your job 2 ("simple.sh") has been submitted.
User jane can query on a job's state as shown:
[server node ~]$ /opt/sge-root/bin/lx24-x86/qstat
job-ID  prior    name       user  state  submit/start at      queue                      slots  ja-task-ID
1       0.00000  simple.sh  jane  qw     02/13/2006 11:12:57  all.q@nodeC.ps.univa.com   1

When the job completes user jane should find two files, one for stdout from the job and one for stderr from the job:
[server node ~]$ ls
simple.sh.e2 simple.sh.o2
[server node ~]$ cat simple.sh.o2
Mon Feb 13 11:13:06 CST 2006
Mon Feb 13 11:13:26 CST 2006
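
For reference, the bundled example job does little more than print the date, sleep, and print the date again, which matches the two timestamps in simple.sh.o2 above. A minimal job script along the same lines (a sketch, not the exact contents of the shipped simple.sh) looks like this:

#!/bin/sh
#$ -S /bin/sh        # tell SGE to run the job under /bin/sh
date                 # first timestamp (captured in the .o<jobid> file)
sleep 20
date                 # second timestamp, roughly 20 seconds later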


Install and Configuration of PBS

Introduction      The Portable Batch System (PBS) is available as Open Source software from
       http://www.OpenPbs.org/. A commercial version can be bought from
       http://www.PBSPro.com/. PBSPro also offers support for OpenPBS, and at a decent price for academic institutions.

       There exists a very useful collection of user-contributed software/patches for Open PBS at http://www-unix.mcs.anl.gov/openpbs/.

       This HowTo document outlines all the steps required to compile and install the Portable Batch System (PBS) versions 2.1, 2.2 and 2.3. Most likely the steps will be the same for the PBSPro software.

       The latest version of PBS is available from http://www.OpenPbs.org/. The PBS documentation available at the Web-site should be handy for in-depth discussion of the points covered in this HowTo.

       We also discuss how to create a PBS script for parallel or serial jobs. Cleanup in an epilogue script may be required for parallel jobs.

       Accounting Reports may be generated from PBS' accounting files. We provide a simple tool pbsacct that processes and formats the accounting into a useful report.

       Download the latest version of pbsacct from the ftp://ftp.fysik.dtu.dk/pub/PBS/ directory.


     The following steps are what we use to install PBS from scratch on our systems.
          1. Ensure that tcl8.0 and tk8.0 are installed on the system. Look into the PBS docs
             to find out about these packages. The homepage is at
             http://www.scriptics.com/products/tcltk/. Get Linux RPMs from your favorite
             distribution, or build them yourself on other UNIXes.
             If you installed the PBS binary RPMs on Linux, skip to step 4.

          2. Configure PBS for your choice of spool-directory and the central server
             machine (named "zeise" in our examples):
             ./configure --set-server-home=/var/spool/PBS --set-default-server=zeise
             On Compaq Tru64 UNIX make sure that you use the Compaq C-compiler instead
             of the GNU gcc by doing "setenv CC cc". You should add these flags to the
             above configure command: --set-cflags="-g3 -O2". It is also important that the
             /var/spool/PBS does not include any soft-links, such as /var -> /usr/var, since
             this triggers a bug in the PBS code.

             If you compiled PBS for a different architecture before, make sure to clean up before running configure:
                                  gmake distclean

          3. Run a GNU-compatible make in order to build PBS:
                                             make
             On AIX 4.1.5 edit src/tools/Makefile to add a library: LIBS= -lld
             On Compaq Tru64 UNIX use the native Compaq C-compiler:
             gmake CC=cc
             The default CFLAGS are "-g -O2", but the Compaq compiler requires "-g3 -O2"
             for optimization. Set this with:
             ./configure (flags) --set-cflags="-g3 -O2"
             After the make has completed, install the PBS files as the root superuser:
                         gmake install

          4. Create the file "nodes" in the central server's (zeise) directory
             /var/spool/PBS/server_priv, containing hostnames; see the PBS 2.2 Admin Guide p.8
             (Sec. "Installation Overview", point 8). Substitute the spool-directory name
             /var/spool/PBS by your own choice (the Linux RPM uses /var/spool/pbs). Check the
             file /var/spool/PBS/pbs_environment and ensure that important environment variables
             (such as the TZ timezone variable) have been included by the installation
             process. Add any required variables in this file.

          5. Initialize the PBS server daemon and scheduler:
             /usr/local/sbin/pbs_server -t create
             /usr/local/sbin/pbs_sched
             The "-t create" should only be executed once, at the time of installation!
             The pbs_server and pbs_sched should be started at boot time: On Linux this is
             done automatically by /etc/rc.d/init.d/pbs. Otherwise use your UNIX's standard
             method (e.g. /etc/rc.local) to run the following commands at boot time:
             /usr/local/sbin/pbs_server -a true
             /usr/local/sbin/pbs_sched
             The "-a true" sets the scheduling attribute to True, so that jobs may start
             running.

          6. Create queues using the "qmgr" command; see the manual pages for
             "pbs_server_attributes" and "pbs_queue_attributes". List the server configuration
             with the "print server" command. The output can be used as input to qmgr, so this
             is a way to make a backup of your server setup (see the short backup/restore
             sketch after the configuration listing below). You may stick the output of qmgr
             (for example, you may use the setup listed below) into a file (removing the first 2
             lines which are actually not valid commands). Pipe this file into qmgr like this:
             "cat file | qmgr" and everything is configured in a couple of seconds!
             Our current configuration is:
             # qmgr
             Max open servers: 4
             Qmgr: print server
             #
             # Create queues and set their attributes.
             #
             #
             # Create and define queue verylong
             #
             create queue verylong
             set queue verylong queue_type = Execution
             set queue verylong Priority = 40
             set queue verylong max_running = 10
             set queue verylong resources_max.cput = 72:00:00
             set queue verylong resources_min.cput = 12:00:01
             set queue verylong resources_default.cput = 72:00:00
             set queue verylong enabled = True
             set queue verylong started = True
             #
             # Create and define queue long
             #
             create queue long
             set queue long queue_type = Execution
             set queue long Priority = 60
             set queue long max_running = 10
             set queue long resources_max.cput = 12:00:00
             set queue long resources_min.cput = 02:00:01
             set queue long resources_default.cput = 12:00:00
             set queue long enabled = True
             set queue long started = True
             #
             # Create and define queue medium
             #
             create queue medium
             set queue medium queue_type = Execution
             set queue medium Priority = 80
             set queue medium max_running = 10
             set queue medium resources_max.cput = 02:00:00
             set queue medium resources_min.cput = 00:20:01
             set queue medium resources_default.cput = 02:00:00
             set queue medium enabled = True
             set queue medium started = True
             #
             # Create and define queue small
             #
             create queue small
             set queue small queue_type = Execution
             set queue small Priority = 100
             set queue small max_running = 10
             set queue small resources_max.cput = 00:20:00
             set queue small resources_default.cput = 00:20:00
             set queue small enabled = True
             set queue small started = True
             #
             # Create and define queue default
             #
             create queue default
             set queue default queue_type = Route
             set queue default max_running = 10
             set queue default route_destinations = small
             set queue default route_destinations += medium
             set queue default route_destinations += long
             set queue default route_destinations += verylong
             set queue default enabled = True
             set queue default started = True
             #
             # Set server attributes.
             #
             set server scheduling = True
             set server max_user_run = 6
             set server acl_host_enable = True
             set server acl_hosts = *.fysik.dtu.dk
             set server acl_hosts = *.alpha.fysik.dtu.dk
             set server default_queue = default
             set server log_events = 63
             set server mail_from = adm
             set server query_other_jobs = True
             set server resources_default.cput = 01:00:00
             set server resources_default.neednodes = 1
             set server resources_default.nodect = 1
             set server resources_default.nodes = 1
             set server scheduler_iteration = 60
             set server default_node = 1#shared
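             As mentioned above, the qmgr output can be saved to a file and replayed later to
             restore the setup. A short backup/restore sketch, assuming your qmgr supports the
             -c option (the file name is arbitrary):
             # save the current server and queue configuration to a file
             qmgr -c "print server" > /root/pbs_server.conf
             # later, replay it to reconfigure a freshly created server
             cat /root/pbs_server.conf | qmgr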

          7. Install the PBS software on the client nodes, repeating steps 1-3 above.

          8. Configure the PBS nodes so that they know the server: Check that the file
             /var/spool/PBS/server_name contains the name of the PBS server (zeise in this
             example), and edit it if appropriate. Also make sure that this hostname resolves
             correctly (with or without the domain-name), otherwise the pbs_server may
             refuse connections from the qmgr command.
             Create the file /var/spool/PBS/mom_priv/config on all PBS nodes (server and clients)

             with the contents:
             # The central server must be listed:
             $clienthost zeise
             where the correct servername must replace "zeise". You may add other
             relevant lines as recommended in the manual, for example for restricting
             access and for logging:
             $logevent 0x1ff
             $restricted *.your.domain.name
             (list the domain names that you want to give access).
             For maintenance of the configuration file, we use rdist to duplicate
             /var/spool/PBS/mom_priv/config from the server to all PBS nodes.

          9. Start the MOM mini-servers on both the server and the client nodes:
             /usr/local/sbin/pbs_mom
             or "/etc/rc.d/init.d/pbs start" on Linux. Make sure that MOM is started at boot
             time. See discussion under point 5.
             On Compaq Tru64 UNIX 4.0E+F there may be a problem with starting
             pbs_mom too soon. Some network problem makes pbs_mom report errors in
             an infinite loop, which fills up the logfiles' filesystem within a short time !
             Several people told me that they don't have this problem, so it's not understood
             at present.
             The following section is only relevant if you have this problem on Tru64 UNIX.
             On Tru64 UNIX start pbs_mom from the last entry in /etc/inittab:
             # Portable Batch System batch execution mini-server
             pbsmom::once:/etc/rc.pbs > /dev/console 2>&1
             The file /etc/rc.pbs delays the startup of pbs_mom:
             #!/bin/sh
             #
             # Portable Batch System (PBS) startup
             #
             # On Digital UNIX, pbs_mom fills up the mom_logs directory
             # within minutes after reboot. Try to sleep at startup
             # in order to avoid this.
             PBSDIR=/usr/local/sbin
             if [ -x ${PBSDIR}/pbs_mom ]; then
                   echo PBS startup.
                   # Sleep for a while
                   sleep 120
                   ${PBSDIR}/pbs_mom       # MOM
                   echo Done.
             else
                   echo Could not execute PBS commands !
              fi

         10. Queues defined above do not work until you start them:
              qstart default small medium long verylong
              qenable default small medium long verylong
              This needs to be done only once and for all, at the time when you install PBS.

         11. Make sure that the PBS server has all nodes correctly defined. Use the pbsnodes -a command to list all nodes.
              Add nodes using the command:
                                       qmgr
              # qmgr
              Max open servers: 4
              Qmgr: create node node99 properties=ev67
              where the node-name is node99 with the properties=ev67. Alternatively, you
              may simply list the nodes in the file /var/spool/PBS/server_priv/nodes:
              server:ts ev67
              node99 ev67
              The :ts indicates a time-shared node; nodes without :ts are cluster nodes where
              batch jobs may execute. The second column lists the properties that you
              associate with the node. Restart the pbs_server after editing manually the
              nodes file.

         12. After you first setup your system, to get the jobs to actually run you need to
              set the server scheduling attribute to true. This will normally be done for you at
              boot time (see point 5 in this file), but for this first time, you will need to do this by hand using the qmgr command:
              # qmgr
              Max open servers: 4
              Qmgr: set server scheduling=true
        Batch job scripts
        Your PBS batch system ought to be fully functional at this point so that you can
        submit batch jobs using the qsub command. For debugging purposes, PBS offers you
        an "interactive batch job" by using the command qsub -I.
        As an example, you may use the following PBS batch script as a template for
        creating your own batch scripts. The present script runs an MPI parallel job on the
        available processors:
        #!/bin/sh
        ### Job name
        #PBS -N test
        ### Declare job non-rerunable
        #PBS -r n
        ### Output files
        #PBS -e test.err
        #PBS -o test.log
        ### Mail to user
        #PBS -m ae
        ### Queue name (small, medium, long, verylong)
        #PBS -q long
        ### Number of nodes (node property ev67 wanted)
        #PBS -l nodes=8:ev67
        # This job's working directory
        echo Working directory is $PBS_O_WORKDIR
        cd $PBS_O_WORKDIR
        echo Running on host `hostname`
        echo Time is `date`
        echo Directory is `pwd`
        echo This jobs runs on the following processors:
        echo `cat $PBS_NODEFILE`
        # Define number of processors
        NPROCS=`wc -l < $PBS_NODEFILE`
        echo This job has allocated $NPROCS nodes
        # Run the parallel MPI executable "a.out"
        mpirun -v -machinefile $PBS_NODEFILE -np $NPROCS a.out
         If you specify #PBS -l nodes=1 in the script, you will be running a non-parallel
         (or serial) batch job:
        #!/bin/sh
        ### Job name
        #PBS -N test
        ### Declare job non-rerunable
        #PBS -r n
        ### Output files
        #PBS -e test.err
        #PBS -o test.log
        ### Mail to user
        #PBS -m ae
        ### Queue name (small, medium, long, verylong)
        #PBS -q long
        ### Number of nodes (node property ev6 wanted)
        #PBS -l nodes=1:ev6
        # This job's working directory
        echo Working directory is $PBS_O_WORKDIR
        cd $PBS_O_WORKDIR
        echo Running on host `hostname`
        echo Time is `date`
        echo Directory is `pwd`
        # Run your executable
        a.out
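         Either script is submitted with qsub and monitored with qstat; a typical session
         (the script name is only an example) looks like this:
         qsub pbsjob.sh          # submit the batch script; qsub prints the job id
         qstat -a                # list all jobs and their current state
         qstat -f <job_id>       # show the full details of one job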
        Clean-up after parallel jobs
        If a parallel job dies prematurely for any reason,
        PBS will clean up user processes on the
        master-node only. We (and others) have found
        that often MPI slave-processes are lingering on
        all of the slave-nodes waiting for communication
        from the (dead) master-process.
        At present the only generally applicable way to
        clean up user processes on the nodes allocated to
        a PBS job is to use the PBS epilogue capability
        (see the PBS documentation). The epilogue is
        executed on the job's master-node, only.
        An epilogue script /var/spool/PBS/mom_priv/epilogue
        should be created on every node, containing for
        example this:
        #!/bin/sh
        echo '--------------------------------------'
        echo Running PBS epilogue script
        # Set key variables
        USER=$2
        NODEFILE=/var/spool/PBS/aux/$1
        echo
        echo Killing processes of user $USER on the batch nodes
        for node in `cat $NODEFILE`
        do
                    echo Doing node $node
                    su $USER -c "ssh -a -k -n -x $node skill -v -9 -u $USER"
        done
        echo Done.
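         PBS only runs the epilogue if its ownership and permissions are acceptable; as a
         sketch (check your PBS admin guide for the exact requirements), set something like
         this on every node:
         chown root /var/spool/PBS/mom_priv/epilogue
         chmod 700 /var/spool/PBS/mom_priv/epilogue    # owned by root, executable, not writable by others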
         The Secure Shell command ssh may be replaced by
         the remote-shell command of your choice. The
         skill (Super-kill) command is a nice tool available
         from ftp://fast.cs.utah.edu/pub/skill/, or as part of
         the Linux procps RPM-package.
        On SMP nodes one cannot use the Super-kill
        command, since the user's processes belonging
        to other PBS jobs might be terminated. The
        present solution works correctly only on
        single-CPU nodes.
        An alternative cleanup solution for Linux systems
        is provided by Benjamin Webb of Oxford
        University. This solution may work more reliably
        than the above.

Install and Configuration of NAGIOS

What is Nagios?      An enterprise-class monitoring and alerting solution that provides
organizations with extended insight into their IT infrastructure, catching problems
before they affect critical business processes.

Requirements:      We require the following things for installing Nagios:
      1. Apache and PHP (PHP is optional)
      2. gcc, glibc, glibc-common, gd and gd-devel
             The whole task is done on RHEL 5 with Nagios Core.

Installation:
( A ) Installing/checking Dependencies:
These packages are usually already installed, so let's check
whether they are installed or not.
It's a major step, so please don't skip this.
Invoke the terminal and run these commands one by one:

1 To check Apache/HTTP:
rpm -qa | grep httpd

2 To check gcc:
rpm -qa | grep gcc

3 To check glibc and gd:
rpm -qa | grep glibc glibc-common
rpm -qa | grep gd gd-devel

If everything comes back OK, let's move ahead... if not, install the missing package(s) (e.g. with yum).

( B ) Create User And Groups:
We need to create a user Nagios and add it to the group
Nadmin:
useradd -m Nagios
passwd Nagios
"Set the password for the user Nagios"
groupadd Nadmin
/usr/sbin/usermod -a -G Nadmin Nagios
/usr/sbin/usermod -a -G Nadmin apache
Building And Installing Nagios
Everything is set, let's begin with the installation:

1 Download the Nagios tarball using:
=> wget http://prdownloads.sourceforge.net/sourceforge/nagios/nagios-3.2.0.tar.gz

2 Untar it anywhere
=> tar zxf nagios-3.2.0.tar.gz

3 Get into the directory
=> cd nagios-3.2.0

4 Configure Nagios using this command
=> ./configure --with-command-group=Nadmin

5 Run make to build the files needed to install
Nagios
=> make all

6 Then use make install to install the main program, CGIs
and HTML files
=> make install

7 Install the init script in the directory /etc/rc.d/init.d
=> make install-init

8 Install sample config files which can be found
later in /usr/local/nagios/etc
=> make install-config

9 Install and set permissions on the directory that holds
the external command file
=> make install-commandmode

10 Set your email ID using:
=> vi /usr/local/nagios/etc/objects/contacts.cfg
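
In contacts.cfg only the email address of the nagiosadmin contact needs to change. Assuming the stock sample value nagios@localhost is still in place, a one-line sed (substitute your own address for admin@example.com) does the same as the manual edit:
=> sed -i 's/nagios@localhost/admin@example.com/' /usr/local/nagios/etc/objects/contacts.cfg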

Installing Web Interface:      After Nagios is installed, let's install the web interface, a key utility:
1 Install the web interface configuration
=> make install-webconf

2 Create the web interface account; we will use this to
log in for management
=> htpasswd -c /usr/local/nagios/etc/htpasswd.users nagiosadmin

3 Done, let's start Apache
=> /sbin/service httpd restart
Although we are done installing both Nagios and the web interface, we have not
completed everything yet...
What we need now are the Nagios plugins, so let's begin with that.

Installing and Configuring Nagios plugins (install on Server and Client)
Note:      On the client side only the Nagios plugins are installed; Nagios Core is not
installed on the client side.

Download plugins by typing this command in the terminal
=> wget http://prdownloads.sourceforge.net/sourceforge/nagiosplug/nagios-plugins-1.4.14.tar.gz
Untar the file
=> tar zxf nagios-plugins-1.4.14.tar.gz
Change directory:
=> cd nagios-plugins-1.4.14
Let's configure it :)
=> ./configure --with-nagios-user=Nagios --with-nagios-group=Nadmin
Again make the files for the installation process
=> make
Install the plugins
=> make install
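
To confirm the plugins landed where Nagios expects them, you can optionally run one by hand; the thresholds below are just example warning/critical values for check_ping:
=> /usr/local/nagios/libexec/check_ping -H 127.0.0.1 -w 100.0,20% -c 500.0,60%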
This is the second part of the Nagios installation process.
Now we need to start Nagios; here is the process for the same:
To make Nagios start automatically at boot, use this:
=> /sbin/chkconfig --add nagios
Turn Nagios on :)
=> /sbin/chkconfig nagios on
Verify the configuration file before starting Nagios. To do that, use:
=> /usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg
Now start the service:
=> /sbin/service nagios start
So here we are done with the Nagios installation and configuration.
For the web interface,
type the following in the URL bar:
http://localhost/nagios or http://<server-ip>/nagios

CONKY

Conky:            It is a light-weight system monitor used to display system information
(CPU, memory, disk, network) on the local system's desktop.

Package:            conky-*.el*.rf.i386.rpm.
            conky-*.el*.rf.x_86.rpm

Installation:            rpm -ivh conky-*.el*.rf.*.rpm

Commands:                   conky      (launches the graphical monitor).
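
Conky reads its layout from ~/.conkyrc. A minimal sketch using the pre-1.10 configuration syntax of these RPM builds (the variables are standard Conky ones, but treat the file as an illustrative example rather than the packaged default):

cat > ~/.conkyrc <<'EOF'
# window placement and refresh rate
alignment top_right
update_interval 2
TEXT
CPU:    ${cpu}%
RAM:    ${memperc}%
Uptime: ${uptime}
EOF
conky &      # start the monitor in the background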

VNSTAT

Vnstat:
     It is a console-based network traffic monitor.

Package:                      vnstat-*.el*.rf.i386.rpm           (for 32 bit os).
                      vnstat-*.el*.rf.x_86.rpm           (for 64 bit os).

Installation:                      rpm -ivh vnstat-*.el*.rf.*.rpm

Command:                 vnstat
                 vnstat -d    (daily report)
                 vnstat -w    (weekly report)
                 vnstat -m    (monthly report)
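
Before vnstat can report anything it needs a database for each interface it should watch; with the vnstat 1.x versions in these RPMs that is typically created once per interface (eth0 is just an example interface name):
                 vnstat -u -i eth0      (create/update the database for eth0)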

HTop

htop:     It is an enhanced, interactive version of the top command for the ordinary Linux
operating system.

Package:            htop-*.el5.rf.i386.rpm.
            htop-*.el5.rf.x_86.rpm.

Installation:            rpm -ivh htop-*.*.rpm

Command:                 htop

SHC (Shell Script Compiler)

SHC:      It is a shell script compiler which compiles a shell script into an encrypted
binary, keeping the script contents secure; it creates a binary file and also a C source file.

Packages:                  shc-*.tgz.

Installation:            ./configure
            make
            make install

Command:                        shc -f a.sh

That will create 2 extra files.
                  a.sh.x      -    executable binary file.
                  a.sh.c      -    C source file.
                  ./a.sh.x    -    this is the file that should be run.

Note:     shc can also be used to create an executable with an expiry date.
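
As a sketch of the expiry feature (check the shc man page for the exact option syntax of your version), the expiry date is usually given with -e in dd/mm/yyyy form and an optional expiry message with -m:
                  shc -e 31/12/2011 -m "this script has expired" -f a.sh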

System Statistics Gathered from /proc

Proc:     It is an RPM package (procinfo) which is used to display system statistics
gathered from the /proc filesystem.

Package:            procinfo-*.i386.rpm(for 32 bit Linux).
            procinfo-*.x_86.rpm(for 64 bit Linux).
 
Installation:            rpm -ivh procinfo-*.*.rpm

Command:            procinfo
            procinfo -f -n 20      (full-screen mode, updating every 20 seconds)

IP Scanner(Graphical Applications)

IP Scan:          It is a graphical application for Linux which scans a network to find
which IP addresses are in use and which are not.

Package Name:            ipscan-*-i386.rpm (for 32 bit Linux).
            ipscan-*-x_86.rpm (for 64 bit Linux).

Installation:            rpm -ivh ipscan-*-*.rpm.

Command:                   ipscan (for Graphical Scanner for IP used in Network).

Linux: Check Network Connection Command

1. ss command: It dumps socket (network connection) statistics such
   as all TCP / UDP connections, established connections per protocol
   (e.g., display all established ssh connections), all the TCP
   sockets in various states such as ESTABLISHED or FIN-WAIT-1,
   and so on.
2. netstat command: It can display network connections, routing
   tables, interfaces and much more.
3. tcptrack and iftop commands: They display information about TCP connections seen
   on a network interface and bandwidth usage on an interface by host,
   respectively.
       Display Currently Established, Closed, Orphaned
       and Waiting TCP sockets, enter:
       # ss -s
       Sample outputs:
       Total: 529 (kernel 726)
       TCP:   1403 (estab 286, closed 1099, orphaned 1, synrecv 0, timewait 1098/0), ports 774
        Transport Total     IP        IPv6
        *         726       -         -
        RAW       0         0         0
        UDP       27        13        14
        TCP       304       298       6
        INET      331       311       20
        FRAG      0         0         0
        Or you can use the netstat command as follows:
        # netstat -s
        Sample outputs:
        Ip:
            102402748 total packets received
            3 with invalid addresses
            0 forwarded
            0 incoming packets discarded
            102192035 incoming packets delivered
            95627316 requests sent out
        Icmp:
            6726 ICMP messages received
            167 input ICMP message failed.
            ICMP input histogram:
                destination unreachable: 2353
                timeout in transit: 4
                echo requests: 4329
            10323 ICMP messages sent
            0 ICMP messages failed
            ICMP output histogram:
                destination unreachable: 5994
                echo replies: 4329
        IcmpMsg:
                InType3: 2353
                InType8: 4329
                InType11: 4
                OutType0: 4329
                OutType3: 5994
        Tcp:
            839222 active connections openings
            2148984 passive connection openings
            1480 failed connection attempts
            1501 connection resets received
            281 connections established
            101263451 segments received
            94668430 segments send out
            9820 segments retransmited
            0 bad segments received.
            1982 resets sent
        Udp:
            1024635 packets received
            18 packets to unknown port received.
            0 packet receive errors
            1024731 packets sent
        TcpExt:
            592 invalid SYN cookies received
            396 resets received for embryonic SYN_RECV sockets
            2 packets pruned from receive queue because of socket buffer overrun
Display All Open Network Ports
Use the ss command as follows:
# ss -l
Sample outputs:
Recv-Q Send-Q                  Local Address:Port   Peer Address:Port
0      50                          127.0.0.1:mysql             *:*
0      128                         127.0.0.1:11211             *:*
0      128                                 *:sunrpc            *:*
0      128                                :::www              :::*
0      128                                 *:55153             *:*
0      3                          10.1.11.27:domain            *:*
0      3                       192.168.1.101:domain            *:*
0      3                           127.0.0.1:domain            *:*
# netstat -tulpn
Sample outputs:
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address          Foreign Address State  PID/Program name
tcp        0      0 127.0.0.1:3306         0.0.0.0:*       LISTEN 1380/mysqld
tcp        0      0 127.0.0.1:11211        0.0.0.0:*       LISTEN 1550/memcached
tcp        0      0 0.0.0.0:111            0.0.0.0:*       LISTEN 936/portmap
tcp        0      0 0.0.0.0:55153          0.0.0.0:*       LISTEN 1025/rpc.statd
tcp        0      0 10.1.11.27:53          0.0.0.0:*       LISTEN 1343/named
tcp        0      0 192.168.1.101:53       0.0.0.0:*       LISTEN 1343/named
tcp        0      0 127.0.0.1:53           0.0.0.0:*       LISTEN 1343/named
tcp        0      0 0.0.0.0:22             0.0.0.0:*       LISTEN 979/sshd
tcp        0      0 127.0.0.1:631          0.0.0.0:*       LISTEN 1828/cupsd
tcp        0      0 0.0.0.0:7001           0.0.0.0:*       LISTEN 10129/transmission
tcp        0      0 0.0.0.0:25             0.0.0.0:*       LISTEN 1694/master
tcp        0      0 127.0.0.1:953          0.0.0.0:*       LISTEN 1343/named
tcp        0      0 0.0.0.0:8000           0.0.0.0:*       LISTEN 1539/icecast2
tcp6       0      0 :::80                  :::*            LISTEN 1899/apache2
tcp6       0      0 :::53                  :::*            LISTEN 1343/named
tcp6       0      0 :::22                  :::*            LISTEN 979/sshd
tcp6       0      0 ::1:631                :::*            LISTEN 1828/cupsd
tcp6       0      0 :::7001                :::*            LISTEN 10129/transmission
tcp6       0      0 ::1:953                :::*            LISTEN 1343/named
udp        0      0 239.255.255.250:1900   0.0.0.0:*              11937/opera
udp        0      0 239.255.255.250:1900   0.0.0.0:*              11937/opera
udp        0      0 0.0.0.0:111            0.0.0.0:*              936/portmap
udp        0      0 0.0.0.0:777            0.0.0.0:*              1025/rpc.statd
udp        0      0 0.0.0.0:38297          0.0.0.0:*              1025/rpc.statd
udp        0      0 192.168.1.101:33843    0.0.0.0:*              11937/opera
udp        0      0 10.1.11.27:53          0.0.0.0:*              1343/named
udp        0      0 192.168.1.101:53       0.0.0.0:*              1343/named
udp        0      0 127.0.0.1:53           0.0.0.0:*              1343/named
udp        0      0 0.0.0.0:68             0.0.0.0:*              5840/dhclient
udp        0      0 127.0.0.1:11211        0.0.0.0:*              1550/memcached
udp        0      0 0.0.0.0:7001           0.0.0.0:*              10129/transmission
udp        0      0 10.1.11.27:33372       0.0.0.0:*              11937/opera
udp6       0      0 :::53                  :::*                   1343/named
Display All TCP Sockets
Type the ss command as follows:
# ss -t -a
Or use the netstat command as follows:
# netstat -nat
Display All UDP Sockets
Type the ss command as follows:
# ss -u -a
Or use the netstat command as follows:
# netstat -nau
lsof Command
You can use the lsof command follows to list more information about open ports:
# lsof -i :portNumber
# lsof -i tcp:portNumber
# lsof -i udp:portNumber
# lsof -i :80 | grep LISTEN
View Established Connections Only
Use the netstat command as follows:
# netstat -natu | grep 'ESTABLISHED'
Say Hello To tcptrack
The tcptrack command displays the status of TCP connections that it sees on a given
network interface. tcptrack monitors their state and displays information such as state,
source/destination addresses and bandwidth usage in a sorted, updated list very much
like the top command.
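
Both tools take the interface to watch on the command line; eth0 below is just an example interface name, and tcptrack also accepts an optional pcap filter expression. The first command gives a live, top-like view of TCP connections on eth0, the second restricts it to port 80, and the third shows per-host bandwidth usage:
# tcptrack -i eth0
# tcptrack -i eth0 port 80
# iftop -i eth0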