HMMER


HMMER is a software package for working with profile hidden Markov models (HMM) of known regions in proteins.

Responsible person: User:Joel Hedlund (NSC)

General info

An HMM is a statistical model that describes the known sequence variation within a specific group of proteins that may be of special interest; for example a protein family with known function, or a domain containing a well-studied interaction surface or an active site. HMMs are a machine learning technique [1]: the models are built from training examples that are known good members, and the finished models can then be used to reliably classify and annotate new or poorly understood protein sequences in an automated fashion. Large libraries of trusted HMMs (such as Pfam) are of course immensely beneficial, as they can be used to automatically classify large portions of newly sequenced genomes as soon as they become available.

The HMMER package contains applications for working with HMMs, for example for:

  • Building and calibrating HMMs.
  • Matching an HMM against a sequence database (for finding new members).
  • Matching a sequence against an HMM database (for finding new sequence features).
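
As a rough illustration, a typical HMMER-3 workflow covering these tasks could look like the sketch below. The file names are placeholders, not files provided on the SNIC resources, and in HMMER-2.3.2 the corresponding programs differ slightly (for example, hmmcalibrate must be run after hmmbuild, and hmmpfam takes the place of hmmscan):

    hmmbuild globins.hmm globins.sto                  # build an HMM from a multiple sequence alignment
    hmmsearch globins.hmm uniprot.fasta > hits.out    # search a sequence database with the HMM
    hmmpress Pfam-A.hmm                               # index an HMM database for hmmscan (HMMER-3 only)
    hmmscan Pfam-A.hmm query.fasta > domains.out      # annotate a query sequence against the HMM database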


Versions

There are two versions of HMMER that can conceivably be useful:

  • HMMER-2.3.2: Old stable version.
  • HMMER-3.0: Fast, but backwards incompatible and non-feature-complete.

Their implementations and output (and potentially also the actual results) are vastly different, so switching between them within an ongoing project is not recommended. For new projects, it is highly recommended to spend some time determining which version is the most suitable.

HMMER-3 may seem like an obvious choice; it is much faster than its predecessor, it is currently used in large scale production (e.g. by Pfam), and it is promoted as the official main HMMER version. However, HMMER-3.0 is not feature complete. In particular, the old default alignment behaviour (glocal, hmm_ls) is missing, so if this feature is necessary, choose HMMER-2.3.2.
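
If both versions are installed as environment modules on the resource you are using, switching between them is a matter of loading the right module. The module names below are only examples; check module avail hmmer for the actual names on your system:

    module load hmmer/2.3.2    # or: module load hmmer/3.0 (example module names)
    hmmsearch -h | head -n 3   # the help header shows which HMMER version is currently active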


Computational considerations

Work locally

Many of the features in HMMER require access to database flatfiles, and standard practice on a compute cluster is to copy all necessary files to a node-local directory before any work is done with them. This behaviour is highly encouraged on most resources, since multiple simultaneous accesses to the same large files on a shared disk are likely to cause problems for all computations currently running on the resource, not only for the owner of the badly behaving jobs. For this reason, most SNIC resources have amenities in place to help you run your HMMER jobs in an optimal manner (for example prepare_db and $HMMER_DB_DIR, described for example here).
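
If you need or prefer to stage the files yourself, a minimal batch script sketch could look like the one below. The database path, file names and the $SNIC_TMP node-local scratch directory are assumptions; substitute whatever your resource actually provides:

    #!/bin/bash
    # Copy the (hmmpress-indexed) HMM database and the query to node-local scratch,
    # run the search there, and copy the results back. All paths are placeholders.
    cp /proj/db/Pfam-A.hmm* query.fasta "$SNIC_TMP"/
    cd "$SNIC_TMP"
    hmmscan Pfam-A.hmm query.fasta > query.out
    cp query.out /proj/myproject/results/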

Do not run out of memory

If possible, you should ensure that you have enough RAM to hold the database as well as the results, and still have some headroom. This ensures that HMMER will not need to read data from disk unnecessarily, which would otherwise cause significant slowdown. This is less important with HMMER-2.3.2, since its HMM-sequence alignment implementation is so CPU intensive that memory and disk considerations are less likely to have an impact on runtime. Nevertheless, there are ways to help keep the database files cached, for example:

  • Choose a system with enough RAM
    Multiprocessor systems generally have more memory than single processor systems, and the database will also require proportionally less memory, since only one copy is needed in the OS file cache regardless of the number of processors using it.
  • Partition the search space
    For huge databases, or when the amount of available memory is very restricted, it may be necessary to split the database into manageable chunks and process them as separate jobs (a minimal sketch follows after this list).
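
A minimal sketch of such a partitioning, assuming a FASTA-formatted sequence database (the file names and the number of chunks are placeholders):

    # Split the database into 4 chunks, keeping each sequence record intact.
    awk -v n=4 '/^>/ { c++ } { print > ("chunk_" (c % n) ".fasta") }' uniprot.fasta
    # Search each chunk; on a cluster these would typically be submitted as separate jobs.
    for f in chunk_*.fasta; do
        hmmsearch globins.hmm "$f" > "${f%.fasta}.out"
    done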

Use your processors wisely

Users should not normally have to worry about this, since the HMMER default behaviour is to run on all the CPU cores it can detect on the compute node, which is nearly always what you want. However, should you need to control this for any reason, use the --ncpus command line option or the $HMMER_NCPUS environment variable, which should already be set to the correct value if you are using a preinstalled HMMER version on a SNIC resource.
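
For example, to restrict a run to four cores (a sketch only; the file names are placeholders, and whether the option or the variable is honoured may depend on the HMMER version and on how it is installed on your resource):

    # Option and variable names are taken from the description above; check the
    # documentation for your installed version.
    export HMMER_NCPUS=4
    hmmsearch --ncpus 4 globins.hmm uniprot.fasta > hits.out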

Links

  • Official website: http://hmmer.janelia.org/
  • HMMER-3 documentation (pdf): ftp://selab.janelia.org/pub/software/hmmer3/3.0/Userguide.pdf
  • HMMER-2.3.2 release (contains pdf documentation): http://selab.janelia.org/software/hmmer/2.3.2/hmmer-2.3.2.tar.gz