
Tuesday, 7 August 2012

Basic Commands While Using the Postfix Mail Server

Explain the working of local mail submission in Postfix.
When a local email message enters the Postfix system, it is deposited into the maildrop directory of the Postfix queue by the postdrop command, usually through the sendmail compatibility program. The pickup daemon reads the message from the queue and feeds it to the cleanup daemon. The cleanup daemon processes all inbound mail and notifies the queue manager after it has placed the cleaned-up message into the incoming queue. The queue manager then invokes the appropriate delivery agent to send the message to its next hop or ultimate destination.
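You can watch this path with a quick local test (a minimal sketch; the sender address and the local recipient "root" are placeholders). The sendmail compatibility program hands the message to postdrop, which drops it in the maildrop queue directory; once pickup and cleanup have processed it, it shows up in the queue:

---
# echo "queue test" | /usr/sbin/sendmail -f test@localhost root
# postqueue -p
---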

What are the important files for a Postfix server?

/etc/postfix/main.cf
/etc/postfix/access
/etc/postfix/aliases
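A quick way to inspect what main.cf currently sets is postconf (a small usage sketch):

---
# postconf -n                          (show only parameters that differ from the defaults)
# postconf smtpd_recipient_limit       (show the value of a single parameter)
---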

Which command checks for configuration problems?
# postfix check

How will you see the queue of the Postfix server?
# postqueue -p
or
# mailq

How can I clear the Postfix mail server queue?

# postsuper -d ALL

How will you requeue the messages in the Postfix queue?
# postsuper -r ALL
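A few related queue operations (QUEUEID below is a placeholder for a real queue ID, taken from the first column of postqueue -p output):

---
# postqueue -p
# postsuper -d QUEUEID            (delete a single message)
# postsuper -d ALL deferred       (flush only the deferred queue)
---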

Which command is used to find out whether Postfix is compiled with MySQL support or not?

# postconf -m
nis
regexp
environ
mysql
btree
unix
hash


Explain the smtpd_recipient_limit parameter. What is the default value for this parameter?
The smtpd_recipient_limit parameter limits the number of recipients allowed in a single incoming message.
The default value for this parameter is 1000.
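For example, to tighten the limit in main.cf (500 is an arbitrary illustrative value, not a recommendation):

---
smtpd_recipient_limit = 500
---

Then apply the change with:

# postfix reload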

Explain the smtpd_timeout parameter.

The smtpd_timeout parameter limits the amount of time Postfix waits for an SMTP client request after sending a response. This allows the Postfix administrator to quickly disconnect SMTP clients that "camp out" on the SMTP connection, tying up system resources without actually sending a message.
smtpd_timeout = value
By default, Postfix assumes the value is in seconds.
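For instance, to disconnect idle clients after 120 seconds (an example value only):

---
smtpd_timeout = 120s
---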

Explain the maximal_queue_lifetime parameter.
The maximal_queue_lifetime parameter sets the amount of time (in days) that a message remains in the deferred message queue before being returned as undeliverable. The default value is 5 days. Once this limit is reached, Postfix returns the message to the sender.

Explain the queue_run_delay parameter.
The queue_run_delay parameter sets the interval (in seconds) at which Postfix scans the deferred message queue for messages to be delivered. The default value is 1,000 seconds.
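Both deferred-queue parameters are set in main.cf; the values below are illustrative only:

---
# bounce undeliverable mail after 3 days instead of the default 5
maximal_queue_lifetime = 3d
# scan the deferred queue every 500 seconds instead of the default 1,000
queue_run_delay = 500s
---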

Explain the default_destination_concurrency_limit parameter.
The default_destination_concurrency_limit parameter defines the maximum number of concurrent SMTP sessions that can be established with any remote host. It is related to the maxproc setting for the SMTP client in the master.cf configuration file: the number of concurrent SMTP sessions cannot exceed the maximum number of SMTP client processes. Thus, if the default maxproc value of 50 is used, setting default_destination_concurrency_limit greater than 50 has no effect.
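In main.cf this might look like the following (20 is an arbitrary example value):

---
# keep this at or below the smtp maxproc value in master.cf
default_destination_concurrency_limit = 20
---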

How to scan a newly added LUN using rescan-scsi-bus.sh?

ENV : RHEL 5.4 and later

I suggest you NOT rescan the existing LUNs, since I/O operations are still in use and rescanning them may corrupt the file system. Instead, always scan only the newly added device or storage. Once you add it, the HBA will detect the device, and you can then scan the not-yet-existing LUN on that HBA. As an example, you can execute a command like:

---
# rescan-scsi-bus.sh --hosts=1 --luns=2
---

Note : I assume that LUN 2 does not yet exist on host 1 / HBA 1.
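After the rescan, you can verify that the new LUN appeared (lsscsi may need to be installed separately):

---
# cat /proc/scsi/scsi
# lsscsi
---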

For more details please get help from :

---
# rescan-scsi-bus.sh --help

http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5/html/Online_Storage_Reconfiguration_Guide/rescan-scsi-bus.html
---

Try :)

How to list the number of files inside each directory?

I developed the following script and used it:

---
[root@vijay log]# cat ./find_large_small_files.sh
#!/bin/bash
# For every directory under the current path, print how many entries it contains.
find . -type d | while read -r dir
do
    echo "Directory $dir has following no of files : $(ls -A "$dir" | wc -l)"
done
---

Try it and modify it as per your need.

What do the cman settings expected_votes="1" and two_node="1" in cluster.conf mean?

For two-node clusters, ordinarily the loss of quorum after one of the two nodes fails will prevent the remaining node from continuing (if both nodes have one vote). Special configuration options can be set to allow the remaining node to continue operating if the other fails. To do this, only two nodes, each with one vote, can be defined in cluster.conf. The two_node and expected_votes values must then be set to 1 in the cman section, as follows.

---
<cman two_node="1" expected_votes="1"/>
---
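For context, here is a minimal (hypothetical) cluster.conf skeleton showing where that line lives; the cluster and node names are placeholders, and a real cluster would also need fencing configured:

---
<?xml version="1.0"?>
<cluster name="mycluster" config_version="1">
    <cman two_node="1" expected_votes="1"/>
    <clusternodes>
        <clusternode name="node1" nodeid="1" votes="1"/>
        <clusternode name="node2" nodeid="2" votes="1"/>
    </clusternodes>
</cluster>
---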

Basic Idea about the GFS File System!

Global File System (GFS) is a shared disk file system for Linux computer clusters. It can maximize the benefits of clustering and minimize the costs.

It does the following :

* Greatly simplify your data infrastructure
* Install and patch applications once, for the entire cluster
* Reduce the need for redundant copies of data
* Simplify backup and disaster recovery tasks
* Maximize use of storage resources and minimize your storage costs
* Manage your storage capacity as a whole vs. by partition
* Decrease your overall storage needs by reducing data duplication
* Scale clusters seamlessly, adding storage or servers on the fly
* No more partitioning storage with complicated techniques
* Add servers simply by mounting them to a common file system
* Achieve maximum application uptime

While a GFS file system may be used outside of LVM, Red Hat supports only GFS file systems that are created on a CLVM logical volume. CLVM is a cluster-wide implementation of LVM, enabled by the CLVM daemon clvmd, which manages LVM logical volumes in a Red Hat Cluster Suite cluster. The daemon makes it possible to use LVM2 to manage logical volumes across a cluster, allowing all nodes in the cluster to share the logical volumes.

GULM (Grand Unified Lock Manager) is not supported in Red Hat Enterprise Linux 5. If your GFS file systems use the GULM lock manager, you must convert the file systems to use the DLM lock manager. This is a two-part process.

* While running Red Hat Enterprise Linux 4, convert your GFS file systems to use the DLM lock manager.
* Upgrade your operating system to Red Hat Enterprise Linux 5, converting the lock manager to DLM when you do.

“GFS with a SAN” provides superior file performance for shared files and file systems. Linux applications run directly on GFS nodes. Without file protocols or storage servers to slow data access, performance is similar to individual Linux servers with directly connected storage; yet, each GFS application node has equal access to all data files. GFS supports up to 125 GFS nodes.

GFS Software Components :

gfs.ko : Kernel module that implements the GFS file system; it is loaded on GFS cluster nodes.
lock_dlm.ko : A lock module that implements DLM locking for GFS. It plugs into the lock harness, lock_harness.ko, and communicates with the DLM lock manager in Red Hat Cluster Suite.
lock_nolock.ko : A lock module for use when GFS is used as a local file system only. It plugs into the lock harness, lock_harness.ko, and provides local locking.

The system clocks on GFS nodes must be within a few minutes of each other to prevent unnecessary inode time-stamp updating, which severely impacts cluster performance. Use ntpd to keep accurate time against a time server.
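On RHEL 5-era systems that typically means enabling the ntpd service on every node (a minimal sketch):

---
# chkconfig ntpd on
# service ntpd start
# ntpq -p
---

ntpq -p lets you verify that a time server is reachable and the node is in sync.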

Initial setup tasks:

1. Setting up logical volumes
2. Making a GFS file system
3. Mounting file systems

A) (On shared disk): Create GFS file systems on the logical volumes created in step 1. Choose a unique name for each file system. For more information about creating a GFS file system, refer to Section 3.1, "Creating a File System".
You can use either of the following formats to create a clustered GFS file system:

# gfs_mkfs -p lock_dlm -t ClusterName:FSName -j NumberJournals BlockDevice
or
# mkfs -t gfs -p lock_dlm -t LockTableName -j NumberJournals BlockDevice

OR

# gfs_mkfs -p LockProtoName -t LockTableName -j NumberJournals BlockDevice

# mkfs -t gfs -p LockProtoName -t LockTableName -j NumberJournals BlockDevice

B) At each node, mount the GFS file systems (command usage shown below).

Command usage:

# mount BlockDevice MountPoint

# mount -o acl BlockDevice MountPoint

The -o acl mount option allows manipulating file ACLs. If a file system is mounted without the -o acl option, users are allowed to view ACLs (with getfacl) but are not allowed to set them (with setfacl).
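For example, once mounted with -o acl (the user name and file path are placeholders):

---
# setfacl -m u:alice:rw /mydata1/shared.txt
# getfacl /mydata1/shared.txt
---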

Note :

Make sure that you are very familiar with using the LockProtoName and LockTableName parameters. Improper use of the LockProtoName and LockTableName parameters may cause file system or lock space corruption.

LockProtoName :
Specifies the name of the locking protocol to use. The lock protocol for a cluster is lock_dlm. The lock protocol when GFS is acting as a local file system (one node only) is lock_nolock.

LockTableName :
This parameter is specified for a GFS file system in a cluster configuration. It has two parts, separated by a colon (no spaces), as follows: ClusterName:FSName

* ClusterName, the name of the Red Hat cluster for which the GFS file system is being created.
* FSName, the file system name, can be 1 to 16 characters long, and the name must be unique among all file systems in the cluster.

NumberJournals:

Specifies the number of journals to be created by the gfs_mkfs command. One journal is required for each node that mounts the file system. (More journals than are needed can be specified at creation time to allow for future expansion.)
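If you find you under-provisioned journals at creation time, they can also be added to a mounted GFS file system later with gfs_jadd, for example (the mount point is a placeholder):

---
# gfs_jadd -j 2 /mydata1
---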

EXAMPLE : http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5/html/Global_File_System/ch-manage.html

[root@me ~]# gfs_mkfs -p lock_dlm -t alpha:mydata1 -j 8 /dev/vg01/lvol0
[root@me ~]# gfs_mkfs -p lock_dlm -t alpha:mydata2 -j 8 /dev/vg01/lvol1

Before you can mount a GFS file system, the file system must exist, the volume where the file system exists must be activated, and the supporting clustering and locking systems must be started. After those requirements have been met, you can mount the GFS file system as you would any Linux file system.

EXAMPLE : # mount /dev/vg01/lvol0 /mydata1

Displaying GFS Tunable Parameters : gfs_tool gettune MountPoint

Try :)