Tuesday, 7 August 2012

Basic Commands While Using the Postfix Mail Server

Explain the working of local mail submission in Postfix.
When a local email message enters the Postfix system, it is deposited into the maildrop directory of the Postfix queue by the postdrop command, usually through the sendmail compatibility program. The pickup daemon reads the message from the queue and feeds it to the cleanup daemon. The cleanup daemon processes all inbound mail and notifies the queue manager after it has placed the cleaned-up message into the incoming queue. The queue manager then invokes the appropriate delivery agent to send the message to its next hop or ultimate destination.

What are the important files for a Postfix server?

/etc/postfix/main.cf
/etc/postfix/access
/etc/postfix/aliases

Which command checks for configuration problems?
# postfix check

How will you see the queue of the Postfix server?
# postqueue -p
or
# mailq

How can I clear the Postfix mail queue?

# postsuper -d ALL

How will you requeue (re-deliver) all messages in the Postfix queue?
# postsuper -r ALL
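
To delete only a single message instead of the whole queue, you can look up its queue ID with postqueue -p and pass it to postsuper; the queue ID below is just an example:

# postqueue -p
# postsuper -d 4A2F1B3C9E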

Which command is used to find out whether Postfix is compiled with MySQL support or not?

# postconf -m
nis
regexp
environ
mysql
btree
unix
hash


Explain smtpd_recipient_limit parameter? And what is the default value for this parameter?
The smtpd_recipient_limit parameter can limit the number of recipients allowed in a single incoming message.
The default value for this parameter is 1000.

Explain smtpd_timeout Parameter?

The smtpd_timeout parameter limits the amount of time Postfix waits for an SMTP client request after sending a response. This allows the Postfix administrator to quickly disconnect SMTP clients that "camp out" on the SMTP connection, utilizing system resources without actually sending a message.
smtpd_timeout = value
By default, Postfix will assume the value is in seconds.

Explain maximal_queue_lifetime Parameter?
The maximal_queue_lifetime parameter sets the amount of time (in days) that a message remains in the deferred message queue before being returned as undeliverable. The default value is 5 days. Once this value is reached, Postfix returns the message to the sender.

Explain queue_run_delay Parameter?
The queue_run_delay parameter sets the time interval (in seconds) at which Postfix scans the deferred message queue for messages to be delivered. The default value for this is 1,000 seconds.

Explain default_destination_concurrency_limit Parameter?

The default_destination_concurrency_limit parameter defines the maximum number of concurrent SMTP sessions that can be established with any remote host. This parameter is related to the maxproc setting for the SMTP client service in the master.cf configuration file. The maximum number of concurrent SMTP sessions cannot exceed the maxproc value set for the SMTP client processes. Thus, if the default maxproc value of 50 is used, setting default_destination_concurrency_limit greater than 50 has no effect.
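
As a quick illustration (the values below are examples only, not recommendations), these parameters can be set in /etc/postfix/main.cf with postconf and activated with a reload:

# postconf -e "smtpd_recipient_limit = 500"
# postconf -e "smtpd_timeout = 300s"
# postconf -e "maximal_queue_lifetime = 3d"
# postconf -e "queue_run_delay = 600s"
# postconf -e "default_destination_concurrency_limit = 20"
# postfix reload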

How to scan newly added LUN using rescan-scsi-bus.sh ?

ENV : RHEL 5.4 and later

I suggest you NOT rescan the existing LUNs, since I/O operations are still in progress on them and rescanning may corrupt the file system. So I always suggest scanning only the newly added device or storage. Once you add it, the HBA will detect the device, and then you can scan the not-yet-existing LUN on that HBA. As an example, you can execute a command like:

---
#rescan-scsi-bus.sh --hosts=1 --luns=2
---

Note : I assume that on host 1 (HBA 1), LUN 2 doesn't exist yet.

For more details please get help from :

---
#rescan-scsi-bus.sh --help

http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5/html/Online_Storage_Reconfiguration_Guide/rescan-scsi-bus.html
---
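
After the rescan you can confirm that the kernel has picked up the new device, for example with:

---
#cat /proc/scsi/scsi
#fdisk -l
---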

Try :)

How to list the number of files inside each directory?

I developed the following script and used it:

---
[root@vijay log]# cat ./find_large_small_files.sh
#!/bin/bash
# list every directory under the current path, then count the entries in each
find . -type d > ./tmpfile
while read -r dir
do
echo "Directory $dir has following no of files : `ls -A "$dir" | wc -l`"
done < ./tmpfile
---
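
Usage is simply the following (the output line is a sample shown for illustration):

[root@vijay log]# chmod +x ./find_large_small_files.sh
[root@vijay log]# ./find_large_small_files.sh
Directory ./httpd has following no of files : 12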

Try and modify as per your need. 

What does cman expected_votes="1" two_node="1" mean in cluster.conf?

Ordinarily, in a two-node cluster the loss of quorum after one of the two nodes fails would prevent the remaining node from continuing (if both nodes have one vote). Special configuration options can be set to allow the remaining node to continue operating if the other fails. To do this, only two nodes, each with one vote, can be defined in cluster.conf, and the two_node and expected_votes values must be set to 1 in the cman section as follows.

---
<cman two_node="1" expected_votes="1"/>
---
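
For context, a minimal illustrative two-node cluster.conf fragment might look like the following; the cluster and node names are placeholders, and fencing configuration is omitted for brevity:

#vim /etc/cluster/cluster.conf
<?xml version="1.0"?>
<cluster name="mycluster" config_version="1">
    <cman two_node="1" expected_votes="1"/>
    <clusternodes>
        <clusternode name="node1.example.com" nodeid="1" votes="1"/>
        <clusternode name="node2.example.com" nodeid="2" votes="1"/>
    </clusternodes>
</cluster>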

Basic Idea about GFS file system!

Global File System (GFS) is a shared disk file system for Linux computer clusters. It can maximize the benefits of clustering and minimize the costs.

It does following :

# Greatly simplify your data infrastructure
# Install and patch applications once, for the entire cluster
# Reduce the need for redundant copies of data
# Simplify back-up and disaster recovery tasks
# Maximize use of storage resources and minimize your storage costs
# Manage your storage capacity as a whole vs. by partition
# Decrease your overall storage needs by reducing data duplication
# Scale clusters seamlessly, adding storage or servers on the fly
# No more partitioning storage with complicated techniques
# Add servers simply by mounting them to a common file system
# Achieve maximum application uptime

While a GFS file system may be used outside of LVM, Red Hat supports only GFS file systems that are created on a CLVM logical volume. CLVM is a cluster-wide implementation of LVM, enabled by the CLVM daemon clvmd, which manages LVM logical volumes in a Red Hat Cluster Suite cluster. The daemon makes it possible to use LVM2 to manage logical volumes across a cluster, allowing all nodes in the cluster to share the logical volumes.

GULM (Grand Unified Lock Manager) is not supported in Red Hat Enterprise Linux 5. If your GFS file systems use the GULM lock manager, you must convert the file systems to use the DLM lock manager. This is a two-part process.

* While running Red Hat Enterprise Linux 4, convert your GFS file systems to use the DLM lock manager.
* Upgrade your operating system to Red Hat Enterprise Linux 5, converting the lock manager to DLM when you do.

“GFS with a SAN” provides superior file performance for shared files and file systems. Linux applications run directly on GFS nodes. Without file protocols or storage servers to slow data access, performance is similar to individual Linux servers with directly connected storage; yet, each GFS application node has equal access to all data files. GFS supports up to 125 GFS nodes.

GFS Software Components :

gfs.ko : Kernel module that implements the GFS file system and is loaded on GFS cluster nodes.
lock_dlm.ko : A lock module that implements DLM locking for GFS. It plugs into the lock harness, lock_harness.ko and communicates with the DLM lock manager in Red Hat Cluster Suite.
lock_nolock.ko : A lock module for use when GFS is used as a local file system only. It plugs into the lock harness, lock_harness.ko and provides local locking.

The system clocks on GFS nodes must be within a few minutes of each other to prevent unnecessary inode time-stamp updating, which severely impacts cluster performance. Use ntpd to keep the nodes synchronized with an accurate time server.

Initial setup tasks:

1. Setting up logical volumes
2. Making a GFS file system
3. Mounting file systems

A) (On shared disk) Create GFS file systems on the logical volumes created in step 1. Choose a unique name for each file system. For more information about creating a GFS file system, refer to Section 3.1, "Creating a File System". You can use either of the following formats to create a clustered GFS file system:

#gfs_mkfs -p LockProtoName -t LockTableName -j NumberJournals BlockDevice

OR

#mkfs -t gfs -p LockProtoName -t LockTableName -j NumberJournals BlockDevice

For a clustered file system the lock protocol is lock_dlm and the lock table name takes the form ClusterName:FSName, for example:

#gfs_mkfs -p lock_dlm -t ClusterName:FSName -j NumberJournals BlockDevice

B) At each node, mount the GFS file systems as shown below.

Command usage:

#mount BlockDevice MountPoint

mount -o acl BlockDevice MountPoint
The -o acl mount option allows manipulating file ACLs. If a file system is mounted without the -o acl mount option, users are allowed to view ACLs (with getfacl), but are not allowed to set them (with setfacl).

Note :

Make sure that you are very familiar with using the LockProtoName and LockTableName parameters. Improper use of the LockProtoName and LockTableName parameters may cause file system or lock space corruption.

LockProtoName :
Specifies the name of the locking protocol to use. The lock protocol for a cluster is lock_dlm. The lock protocol when GFS is acting as a local file system (one node only) is lock_nolock.
LockTableName: This parameter is specified for GFS file system in a cluster configuration. It has two parts separated by a colon (no spaces) as follows: ClusterName:FSName

* ClusterName, the name of the Red Hat cluster for which the GFS file system is being created.
* FSName, the file system name, can be 1 to 16 characters long, and the name must be unique among all file systems in the cluster.

NumberJournals:

Specifies the number of journals to be created by the gfs_mkfs command. One journal is required for each node that mounts the file system. (More journals than are needed can be specified at creation time to allow for future expansion.)

EXAMPLE: http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5/html/Global_File_System/ch-manage.html

[root@me ~]# gfs_mkfs -p lock_dlm -t alpha:mydata1 -j 8 /dev/vg01/lvol0
[root@me ~]# gfs_mkfs -p lock_dlm -t alpha:mydata2 -j 8 /dev/vg01/lvol1

Before you can mount a GFS file system, the file system must exist, the volume where the file system exists must be activated, and the supporting clustering and locking systems must be started. After those requirements have been met, you can mount the GFS file system as you would any Linux file system.

EXAMPLE : mount /dev/vg01/lvol0 /mydata1
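
To have the file system mounted automatically at boot (once the cluster services are up), an /etc/fstab entry along these lines can be added; this is only a sketch using the device and mount point from the example above:

/dev/vg01/lvol0   /mydata1   gfs   defaults   0 0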

Displaying GFS Tunable Parameters : gfs_tool gettune MountPoint

try :)

Sunday, 10 June 2012

PXE-Boot on RHEL-6

1. Use the yum utility to install the required RPMs
#yum install dhcp tftp-server syslinux httpd nfs-utils system-config-kickstart bind-*

2. Configure the DNS server for host name resolution

#cp -p /etc/named.* /var/named/chroot/etc

#cp -p /var/named.* /var/named/chroot/var/named


#rm -rf /etc/named.*

#rm -rf /var/named.*

#cd /var/named/chroot/etc

#vim named.conf
options {
listen-on port 53 { 127.0.0.1; 192.168.0.254; }; <= Define here the ip of dns
listen-on-v6 port 53 { ::1; };
directory "/var/named";
dump-file "/var/named/data/cache_dump.db";
statistics-file "/var/named/data/named_stats.txt";
memstatistics-file "/var/named/data/named_mem_stats.txt";
allow-query { localhost; any; };
recursion yes;
dnssec-enable yes;
dnssec-validation yes;
dnssec-lookaside auto;
/* Path to ISC DLV key */
bindkeys-file "/etc/named.iscdlv.key";
managed-keys-directory "/var/named/dynamic";
};
logging {
channel default_debug {
file "data/named.run";
severity dynamic;
};
};
zone "." IN {
type hint;
file "named.ca";
};
zone "example.com" IN {                 <= Here declare the forward zone
type master;
file "f.zone";
};
zone "0.168.192.in-addr.arpa" IN {   <= Here declare the reverse zone
type master;
file "r.zone";
};
:wq

#cd /var/named/chroot/var/named

#cp -p named.localhost f.zone

#cp -p named.loopback r.zone

#vim f.zone
$TTL 1D
@       IN SOA server1.example.com. root.server1.example.com. (
                                                   0 ;                     serial
                                                   1D ;                  refresh
                                                   1H ;                  retry
                                                   1W ;                  expire
                                                   3H ) ;                minimum
                     NS      server1.example.com.
server1         A        192.168.0.254
desktop1       A        192.168.0.1
desktop2       A        192.168.0.2
desktop3       A        192.168.0.3
:wq

#vim r.zone
$TTL 1D
@   IN SOA  server1.example.com. root.server1.example.com. (
                                                    0 ;              serial
                                                    1D ;           refresh
                                                    1H ;           retry
                                                    1W ;          expire
                                                    3H ) ;        minimum
                             NS     server1.example.com.
254                      PTR    server1.example.com.
1                          PTR    desktop1.example.com.
2                          PTR    desktop2.example.com.
3                          PTR    desktop3.example.com.
:wq
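
Before restarting named, the configuration and the zone files can be syntax-checked as a quick sanity check (the paths follow the chroot layout used above):

#named-checkconf -t /var/named/chroot /etc/named.conf
#named-checkzone example.com /var/named/chroot/var/named/f.zone
#named-checkzone 0.168.192.in-addr.arpa /var/named/chroot/var/named/r.zone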

#chkconfig named on ; /etc/init.d/named restart
3. Now configure the DHCP server
#vim /etc/dhcp/dhcpd.conf
default-lease-time 600;
max-lease-time 7200;
allow booting;
allow bootp;
authoritative;
subnet 192.168.0.0 netmask 255.255.255.0 {
range 192.168.0.1 192.168.0.11;
next-server 192.168.0.254;          <= tftp Server ip
filename "pxelinux.0";
}
:wq
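
Optionally, the DHCP configuration can be syntax-checked before starting the service:

#dhcpd -t -cf /etc/dhcp/dhcpd.conf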

#chkconfig dhcpd on;/etc/init.d/dhcpd restart

4. Configure the TFTP server
#vim /etc/xinetd.d/tftp
disable = no
:x

Mount RHEL6 OS dvd on /media directory & copy files required for tftp server
#mount /dev/scd0 /media

#cp -rv /media/isolinux/* /var/lib/tftpboot

#cp -rv /usr/share/syslinux/pxelinux.0 /var/lib/tftpboot

#mkdir /var/lib/tftpboot/pxelinux.cfg

#cp /var/lib/tftpboot/isolinux.cfg /var/lib/tftpboot/pxelinux.cfg/default

Make the OS dump
#cp -rv /media/* /var/www/html

#vim /var/lib/tftpboot/pxelinux.cfg/default
                                                                  <= Make this entry at the bottom
label RHEL6-32.bit
menu label ^Install RHEL6-32.bit Unattended
menu default
kernel vmlinuz
append initrd=initrd.img ks=http://192.168.0.254/ks.cfg
:wq
5. Create a kickstart file named ks.cfg using system-config-kickstart to make the PXE install unattended, and save it in /var/www/html

6. Enable tftp and start the xinetd and httpd services on the PXE server
#chkconfig xinetd on ; /etc/init.d/xinetd restart

#chkconfig tftp on

#chkconfig httpd on ; /etc/init.d/httpd restart
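
As an optional sanity check, you can confirm that the TFTP server is actually handing out pxelinux.0 (the IP below is the PXE server used in this example):

#tftp 192.168.0.254 -c get pxelinux.0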

Now PXE is ready to install the OS.

Monday, 4 June 2012

ssh-keygen Process

[root@test ~]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
0d:f3:e2:dc:63:be:c9:dd:d3:0c:6c:16:62:6e:55:0e root@NDC-LVA-ePDSJB
[root@test ~]# cd .ssh/
[root@test .ssh]# ll
total 12
-rw------- 1 root root 1675 Jun  5 11:39 id_rsa
-rw-r--r-- 1 root root  401 Jun  5 11:39 id_rsa.pub
-rw-r--r-- 1 root root 1183 Apr 17 19:09 known_hosts
[root@test .ssh]# ssh-copy-id -i id_rsa.pub  (Remote IP)
root@Remote IP's password:

Now try logging into the machine, with "ssh 'Remote IP'", and check in: .ssh/authorized_keys
to make sure we haven't added extra keys that you weren't expecting.

[root@test .ssh]# ssh Remote IP

Last login: Mon Jun  4 16:18:03 2012 from Local IP

[root@test ~]# cd .ssh/
[root@test .ssh]# ll
total 8
-rw------- 1 root root 401 Jun  5 11:40 authorized_keys
-rw-r--r-- 1 root root 789 Apr  2 16:11 known_hosts
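
If key-based login still prompts for a password, it is usually a permissions issue; on the remote host the following standard OpenSSH requirements are worth checking:

[root@test ~]# chmod 700 ~/.ssh
[root@test ~]# chmod 600 ~/.ssh/authorized_keys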

Thursday, 22 March 2012

Linux Set Date

This is useful if the Linux server time and/or date is wrong, and you need to set it to new values from the shell prompt.

You must login as root user to use date command.

root@vijay~]# date -s "2 OCT 2006 18:00:00" 

root@vijay~]# date --set="2 OCT 2006 18:00:00"
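
After setting the system clock, you may also want to write the new time to the hardware (BIOS) clock so it survives a reboot:

root@vijay~]# hwclock --systohc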

Wednesday, 29 February 2012

RHEL: Linux Bonding, Multiple Network Interfaces (NIC) Into a Single Interface

Bonding is a Linux kernel feature that allows you to aggregate multiple like interfaces (such as eth0 and eth1) into a single virtual link such as bond0. The idea is pretty simple: get higher data rates as well as link failover. The following instructions were tested on:


RHEL v5 / 6,
CentOS v5 / 6.


I am using Red Hat Enterprise Linux version 5.0.


Step - 1: Create a Bond0 Configuration File

Red Hat Enterprise Linux (and its clone such as CentOS) stores network configuration in /etc/sysconfig/network-scripts/ directory. First, you need to create a bond0 config file as follows:
# vi /etc/sysconfig/network-scripts/ifcfg-bond0
Append the following lines:
 
DEVICE=bond0
IPADDR=192.168.1.20
NETWORK=192.168.1.0
NETMASK=255.255.255.0
USERCTL=no
BOOTPROTO=none
ONBOOT=yes
 
You need to replace IP address with your actual setup. Save and close the file.

Step - 2: Modify eth0 and eth1 config files

Open both configuration files using a text editor such as vi/vim, and make sure the file reads as follows for the eth0 interface:
# vi /etc/sysconfig/network-scripts/ifcfg-eth0
Modify/append directives as follows:
DEVICE=eth0
USERCTL=no
ONBOOT=yes
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
Open eth1 configuration file using vi text editor, enter:
# vi /etc/sysconfig/network-scripts/ifcfg-eth1
Make sure the file reads as follows for the eth1 interface:
DEVICE=eth1
USERCTL=no
ONBOOT=yes
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
Save and close the file.

Step - 3: Load bond driver/module

Make sure the bonding module is loaded when the channel-bonding interface (bond0) is brought up. You need to modify the kernel modules configuration file:
# vi /etc/modprobe.conf
Append the following two lines:
alias bond0 bonding
options bond0 mode=balance-alb miimon=100
Save the file and exit to the shell prompt.

Step - 4: Test configuration

First, load the bonding module, enter:
# modprobe bonding
Restart the networking service in order to bring up bond0 interface, enter:
# service network restart
# ifconfig
Make sure everything is working. Type the following cat command to query the current status of the Linux kernel bonding driver:
# cat /proc/net/bonding/bond0
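
A quick way to confirm that both slaves joined the bond and that the bond interface is up and addressed (a sketch; interface names follow the example above):

# grep -E "Slave Interface|MII Status" /proc/net/bonding/bond0
# ip addr show bond0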

Saturday, 25 February 2012

What is Disk Quota ?



Disk space can be restricted by implementing disk quotas which alert a system administrator before a user consumes too much disk space or a partition becomes full. Disk quotas can be configured for individual users as well as user groups.
In addition, quotas can be set not just to control the number of disk blocks consumed but to control the number of inodes (data structures that contain information about files in UNIX file systems). Because inodes are used to contain file-related information, this allows control over the number of files that can be created.

Configuring Disk Quota
To implement disk quotas, use the following steps:
1. Enable quotas per file system by modifying the /etc/fstab file.
2. Remount the file system(s).
3. Create the quota database files and generate the disk usage table.
4. Assign quota policies.


Applying Disk Quota
  • Step 1 - Open the /etc/fstab file using the vi editor
  • vi /etc/fstab
  • Step 2 - Add usrquota or grpquota to the following line
  • LABEL=/home /home ext3 defaults,usrquota 0 0
  • Step 3 - Remount the /home file system or reboot your machine
  • mount -o remount /home
  • Step 4 - Create the quota database files
  • quotacheck -cug /home
  • quotaon -vug /home
  • Step 5 - Apply the quota to a user / group using the following command
  • edquota -u username
  • or
  • setquota -u username softHDDlimit hardHDDlimit softINODElimit hardINODElimit /location

Quota Commands
  • quota : Run by a user to check quota status
  • repquota : Run by the root user to check the quota status for every user
  • edquota -t : Assigns the grace period
  • edquota -g groupname : Assigning quotas on a group
  • quotaoff -vaug : Disabling quotas for everyone
  • quotaon -vaug : Enabling quotas for everyone
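
For example, a sketch of limiting a hypothetical user vijay to a 500 MB soft / 1 GB hard block limit on /home and then verifying it might look like this (limits are in 1 KB blocks; the inode limits are left at 0, i.e. unlimited):

setquota -u vijay 512000 1024000 0 0 /home
repquota /home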

Friday, 24 February 2012

Setting up a PXE-Boot Redhat Linux 5


This documents how to setup a PXE boot server for Linux.
a)
The first thing to note is that you need to set up your own mini-network that is completely disconnected from the corporate network, since part of this process requires setting up a DHCP server, which could conflict with the corporate DHCP server if they were both running on the same network simultaneously. So get yourself a switch from IT up front. You do *NOT* need the switch immediately, so just put it aside.

b) Next you'll need to install the following packages using yum:
tftp-server
dhcp
httpd
syslinux

root@vijay~]# yum install tftp-server dhcp httpd syslinux

c)
Now you need to set up the DHCP server. With the dhcp RPM, all you need to do is create /etc/dhcpd.conf with the following contents:

root@vijay~]# vim /etc/dhcpd.conf

ddns-update-style interim;
ignore client-updates;
subnet 192.168.0.0 netmask 255.255.255.0 {
option routers 192.168.0.63;
option subnet-mask 255.255.255.0;
option domain-name "";
option time-offset -18000;
range dynamic-bootp 192.168.0.160 192.168.0.191;
default-lease-time 21600;
max-lease-time 43200;
filename "/pxelinux.0";                              (add these two lines in the dhcpd.conf file)
next-server 192.168.0.1;
}

d)
Next you need to activate tftp within xinetd. All that is necessary is to change disable=yes to disable=no in /etc/xinetd.d/tftp. Then restart xinetd.
root@vijay~]# vim /etc/xinetd.d/tftp
                   
disable=no

e)
Next you need to copy the files from the given location.
root@vijay~]# cp -r   /usr/share/doc/syslinux-3.11/sample/*  /tftpboot/

root@vijay~]# mkdir  /tftpboot/pxelinux.cfg

f)
Now Mount the DVD into /mnt and copy the content.
root@vijay~]# mount  /dev/cdrom /mnt

root@vijay~]# cp  -r /mnt/images/pxeboot/*    /tftpboot

root@vijay~]# cp  -r /mnt/isolinux/*      /tftpboot

root@vijay~]# cp /mnt/isolinux/isolinux.cfg /tftpboot/pxelinux.cfg/default

root@vijay~]# cp -r /usr/lib/syslinux/pxelinux.0  /tftpboot

root@vijay~]# cp -r /usr/lib/syslinux/menu.c32  /tftpboot/pxelinux.cfg

root@vijay~]# cp -r /usr/lib/syslinux/*  /tftpboot

g)
Now install the Kickstart package and configure it according to your requirements, then save the file into /var/ftp/pub (if you are using the FTP service).
root@vijay~]# yum install *kickstart*

root@vijay~]# system-config-kickstart

h)
Now create the default pxelinux configuration inside the new file.

root@vijay~]# vim /tftpboot/pxelinux.cfg/default

 default kickstart
 prompt 0
 timeout 600
 display boot.msg
 F1 boot.msg
 F2 options.msg
 F3 general.msg
 F4 param.msg
 F5 rescue.msg
 MENU title network installation pxe  
 label linux
 kernel vmlinuz
 append initrd=initrd.img
 label kickstart
 kernel vmlinuz
 append initrd=initrd.img ks=ftp://192.168.0.11/dump/ks.cfg
 label ks
 kernel vmlinuz
 append ks initrd=initrd.img

i)
Now start dhcpd & vsftpd and activate tftp by running the following:
 
root@vijay~]#/etc/init.d/dhcpd  start
 
root@vijay~]#/etc/init.d/xinetd restart
 
root@vijay~]#/etc/init.d/vsftpd start
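
To make sure these services also come up after a reboot, they can be enabled with chkconfig (same pattern as in the RHEL 6 procedure above):

root@vijay~]# chkconfig dhcpd on
root@vijay~]# chkconfig xinetd on
root@vijay~]# chkconfig tftp on
root@vijay~]# chkconfig vsftpd on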


##################################################
                                Now Boot Client Machine with PXE
##################################################

Friday, 17 February 2012

ISCSI Creation

root@server~]# yum install scsi-target-utils*
 

root@server~]# /etc/init.d/tgtd restart
 

Now Create a New Partition in server machine

root@server~]# vim /etc/tgtd/targets.conf
#### Sample target with one LUN only. Defaults to allow access for all initiators: ####
Uncomment the following three lines and give the target name and the device path on backing-store:
<target VIJAY>
backing-store /dev/sda9
</target>
:wq


Now go to the client system and install the iSCSI initiator packages:

root@client~]# yum install iscsi-init*


root@client~]# iscsiadm -m discovery -t st -p IP_of_Iscsi_Server


root@client~]# iscsiadm -m node -l


(to log in to the discovered targets)
################### Now you can create a partition on the iSCSI disk ##################
~]# fdisk /dev/sdb


and follow the step to create partition.
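
Before the new partition can be mounted it needs a file system; since the fstab entry below uses ext4, a matching format step would be:

root@client~]# mkfs.ext4 /dev/sdb1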
################### Add the partition permanently in /etc/fstab ##########
 

root@client~]# vim /etc/fstab
# device      mountpoint     filesystem     options        dump   fsck

/dev/sdb1     /vijay         ext4           _netdev        0      0
:wq
root@client~]#mount -a
root@client~]# iscsiadm -m node -u (to log out from the iSCSI server)

FTP Configuration

Understanding & Managing FTP Server
What is FTP?
FTP (File Transfer Protocol) is, as its name indicates, a protocol for transferring files. The implementation of FTP dates from 1971, when a file transfer system between MIT machines (Massachusetts Institute of Technology) was developed. Many RFCs have since made improvements to the basic protocol, but the greatest innovations date from July 1973.

FTP is one of the original network applications developed with the TCP/IP protocol suite. It follows the standard model for network services: FTP requires a client and a server. FTP set out to solve the need to publish documents and software so that people could get them easily from other computer systems. On the FTP server, files are organized in a directory structure; users can connect to the server over the network and download files from (and possibly upload files to) the server.
The role of FTP protocol:
FTP protocol defines the way in which data must be transferred over a TCP/IP network.
The aim of FTP protocol is to:
 • allow file sharing between remote machines
 • allow independence between client and server machine system files
 • enable efficient data transfer
The FTP model:
FTP protocol falls within a client-server model, i.e. one machine sends orders (the client) and the other awaits requests to carry out actions (the server).
During an FTP connection, two transmission channels are open:
 • A channel for commands (control channel)
 • A channel for data
So, both the client and server have two processes allowing these two types of information to be managed:
DTP (Data Transfer Process) is the process in charge of establishing the connection and managing the data channel. The server side DTP is called SERVER-DTP, the client side DTP is called USER-DTP.
PI (Protocol Interpreter) interprets the protocol allowing the DTP to be controlled using commands received over the control channel. It is different on the client and the server:
The SERVER-PI is responsible for listening for commands coming from a USER-PI over the control channel, establishing the connection for the control channel, receiving FTP commands from the USER-PI, responding to them and running the SERVER-DTP.

The USER-PI is responsible for establishing the connection with the FTP server, sending FTP commands, receiving responses from the SERVER-PI and controlling the USER-DTP if needed. When an FTP client is connected to an FTP server, the USER-PI initiates the connection to the server according to the Telnet protocol. The client sends FTP commands to the server, the server interprets them, runs its DTP, then sends a standard response. Once the connection is established, the SERVER-PI gives the port on which data will be sent to the client DTP. The client DTP then listens on the specified port for data coming from the server.

It is important to note that since the control and data ports are separate channels, it is possible to send commands from one machine and receive data on another. So, for example, it is possible to transfer data between two FTP servers by passing through a client to send control instructions and by transferring information between the two server processes connected on the right port. In this configuration, the protocol requires that the control channels remain open throughout the data transfer, so a server can stop a transmission if the control channel is broken during transmission.
What is vsftpd?
 The Very Secure FTP Server (vsFTPd) is the only FTP server software included in the Red Hat Linux distribution. vsFTPd is becoming the FTP server of choice for sites that need to support thousands of concurrent downloads. It was also designed to secure your systems against most common attacks.
Configuration Files
 /etc/vsftpd/vsftpd.conf      : Main configuration file
 /etc/vsftpd/ftpusers         : Contains a list of users who are always denied
 /etc/vsftpd/user_list        : Contains a list of users to allow or deny

 FTP uses TCP ports 20 (data) and 21 (control/commands).
Installing FTP service:
root@vijay~]# yum install vsftpd

root@vijay~]# vim /etc/vsftpd/vsftpd.conf

(If you want to deny all users by default and allow only specific ones, write the following line at the end of the file.)

userlist_deny=NO

Then add the names of the users you want to allow to log in to /etc/vsftpd/user_list.

Starting vsftpd service:

root@vijay~]#service vsftpd start;chkconfig vsftpd on
Client Side Commands For Connecting to FTP Server
root@vijay~]#ftp 0 (for local login)

root@vijay~]#ftp x.x.x.x (for remote login)

For Installing Packages from FTP server

root@vijay~]#rpm -ivh ftp://x.x.x.x/pub/Server/package.rpm

Limiting maximum connections

 By default, vsftpd allows unlimited connections from the same client IP address. You can easily force the vsftpd FTP server to use a limited number of connections. There is a special directive for this called max_per_ip.

root@vijay~]# vim /etc/vsftpd/vsftpd.conf
max_per_ip=3
max_clients=2 ----- max simultaneous connections

Allowing “anonymous” upload to FTP

STEP – 1:

root@vijay~]# vi /etc/vsftpd/vsftpd.conf
anon_upload_enable=YES
chown_uploads=YES
chown_username=daemon
anon_umask=077

STEP - 2: Create a directory under /var/ftp:

root@vijay~]#mkdir /var/ftp/incoming

root@vijay~]#chmod  770  /var/ftp/incoming

root@vijay~]#chown  root:ftp   /var/ftp/incoming

root@vijay~]#setfacl -m u:vijay:rwx  /var/ftp/incoming

STEP - 3: Set the SELinux Boolean value:

root@vijay~]#setsebool  -P  allow_ftpd_full_access on

root@vijay~]#service vsftpd restart

root@vijay~]# ftp 192.168.0.14
Connected to 192.168.0.14 (192.168.0.14).
220 (vsFTPd 2.2.2)
Name (192.168.0.14:root): vijay
331 Please specify the password.
Password:
230 Login successful.
Remote system type is UNIX.
Using binary mode to transfer files.
ftp> cd /var/ftp/pub
250 Directory successfully changed.
ftp> ls
227 Entering Passive Mode (192,168,0,14,67,118).
150 Here comes the directory listing.
-rw-r--r-- 10 0 0 Aug 02 04:35 h1
-rw-r--r-- 10 0 0 Aug 02 04:35 h2
-rw-r--r-- 10 0 0 Aug 02 04:35 h3
-rw------- 12 50 0 Aug 02 04:40 popo
drwx------ 2 14 50 4096 Aug 02 04:52 raj
226 Directory send OK.
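
To verify the anonymous-upload setup from steps 1-3, a test session along these lines could be used (only a sketch; the file name is just an example and /incoming is the directory created above, seen relative to the anonymous chroot):

root@vijay~]# ftp 192.168.0.14
Name (192.168.0.14:root): anonymous
ftp> cd /incoming
ftp> put testfile
ftp> bye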

KVM Configuration

KVM Installation and configuration
1. Install the 64-bit RHEL.
2. Create the yum repository for 64-bit RHEL.
3. Now check the virtualization flag
[root@vijay ~]# egrep '(vmx|svm)' --color=always /proc/cpuinfo
4. To install KVM and virtinst (a tool to create virtual machines), we run
[root@vijay ~]# yum install kvm* qemu* libvirt* python-virtinst*
5. Then start the libvirt daemon:
[root@vijay ~]# /etc/init.d/libvirtd start
6. To check if KVM has successfully been installed, run
[root@vijay ~]# virsh -c qemu:///system list
It should display something like this:
 Id Name                 State
----------------------------------
7. To bridge the virtual machines to the physical network, we install the package bridge-utils...
[root@vijay ~]# yum install bridge-utils*
8. I disable NetworkManager and enable "normal" networking. NetworkManager is good for desktops where network connections can change (e.g. LAN vs. WLAN), but on a server you usually don't change network connections:
[root@vijay ~]# /etc/init.d/NetworkManager stop
[root@vijay ~]# chkconfig NetworkManager off
[root@vijay ~]# /etc/init.d/network restart
9. To configure the bridge, create the file /etc/sysconfig/network-scripts/ifcfg-br0 (please
use the BOOTPROTO, DNS1 (plus any other DNS settings, if any), GATEWAY, IPADDR,
NETMASK and SEARCH values from the /etc/sysconfig/network-scripts/ifcfg-eth0 file):
[root@vijay ~]# vim /etc/sysconfig/network-scripts/ifcfg-br0
################################################################
DEVICE=br0
TYPE=Bridge
BOOTPROTO=static
GATEWAY=192.168.0.1
IPADDR=192.168.0.100
NETMASK=255.255.255.0
ONBOOT=yes
################################################################
10. Modify /etc/sysconfig/network-scripts/ifcfg-eth0 as follows (comment out
BOOTPROTO, DNS1 (and all other DNS servers, if any), GATEWAY, IPADDR,
NETMASK, and SEARCH and add BRIDGE=br0):
[root@vijay ~]# vim /etc/sysconfig/network-scripts/ifcfg-eth0
##############################################################
DEVICE=eth0
#BOOTPROTO=none
#DNS1=145.253.2.75
#GATEWAY=192.168.0.1
HWADDR=00:1e:90:f3:f0:02
#IPADDR=192.168.0.100
#NETMASK=255.255.255.0
ONBOOT=yes
TYPE=Ethernet
IPV6INIT=no
USERCTL=no
BRIDGE=br0
##############################################################
11. Then reboot the system:
[root@vijay ~]# init 6
12. Now install "virt-manager":
[root@vijay ~]# yum install virt-manager*
13. Now run the following command to start virt-manager:
[root@vijay ~]# virt-manager
And Install Your KVM VIRTUAL MACHINE
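
If you prefer the command line over the virt-manager GUI, a guest can also be created with virt-install (from python-virtinst, installed in step 4). The following is only a sketch; the guest name, memory, disk path/size and ISO path are placeholder values you would adapt to your setup:

[root@vijay ~]# virt-install --name=vm1 --ram=1024 --vcpus=1 \
  --disk path=/var/lib/libvirt/images/vm1.img,size=10 \
  --cdrom /root/rhel6.iso --network bridge=br0 --vnc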