ARSC system news for all systems

News Items

"CENTER Old File Removal"

Last Updated: Tue, 17 Dec 2013 -
Machines: linuxws pacman fish
CENTER Old File Removal Begins 01/08/2014
========================================
On January 08, 2014 ARSC will begin automatically deleting old files
residing on the $CENTER filesystem.  The automatic tool will run
weekly and will target files older than 30 days.  The complete
policy describing this old file removal is available online:
http://www.arsc.edu/arsc/support/policy/#storagePolicies

In preparation for the activation of the automated file
removal tool, files targeted for removal will be listed in a
/center/w/purgeList/username directory and viewable by the individual
file owners.  This listing is only an estimate; files may be
deleted even if they do not appear in it.
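
To review the files currently targeted for removal, list your purge
directory (a sketch; $USER expands to your ARSC username):

  % ls /center/w/purgeList/$USER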

Note: Modification of file timestamp information, data, or metadata
for the sole purpose of bypassing the automated file removal tool
is prohibited.

Users are encouraged to move important but infrequently used
data to the intermediate and long term $ARCHIVE storage
filesystem. Recommendations for optimizing $ARCHIVE file
storage and retrieval are available on the ARSC website:
http://www.arsc.edu/arsc/knowledge-base/long-term-storage-best-pr/index.xml

Please contact the ARSC Help Desk with questions regarding the
automated deletion of old files in $CENTER.

"LDAP Passwords"

Last Updated: Mon, 20 May 2013 -
Machines: linuxws pacman bigdipper fish
    
How to update your LDAP password 
========================================

User authentication and login to ARSC systems uses University 
of Alaska (UA) passwords and follows the LDAP protocol to connect to
the University's Enterprise Directory.  Because of this, users must
change their passwords using the UA Enterprise tools.

While logging into ARSC systems, if you see the following message,
please change your password on https://elmo.alaska.edu

  Password: 
  You are required to change your LDAP password immediately.
  Enter login(LDAP) password:

Attempts to change your password on ARSC systems will fail.

Please contact the ARSC Help Desk if you are unable to log into
https://elmo.alaska.edu to change your login password.

  

"LSI HW Support Expired"

Last Updated: Wed, 11 Dec 2013 -
Machines: lsi
LSI Hardware is Now Off Vendor Support
========================================
The hardware vendor support contract for the LSI hardware has expired.
Existing LSI hardware can no longer be repaired under contract
or warranty. If a hardware failure occurs, all or part of the LSI
infrastructure may cease to operate.

All compute hardware still under vendor support is being migrated to
the Arctic Region Supercomputing Center (ARSC) pacman.arsc.edu system
located on the UAF campus.

All storage hardware will continue to support existing LSI file share
services and the LSI compute portal. However, all data is "AT RISK"
of being lost in the event of a catastrophic hardware failure.

All backup policies will continue to be honored until a hardware
failure occurs. Users are strongly encouraged to maintain a backup
of their information and data in case of a catastrophic failure.

"LSI Login Node Retired"

Last Updated: Wed, 11 Dec 2013 -
Machines: lsi
LSI Login Node, Anyu, Retired
========================================
The compute node "anyu.inbre.alaska.edu" is retired. During the
last maintenance update, the hardware failed to boot properly.
There are insufficient replacement parts to safely maintain the
node for supporting user activity. User data which resided on "anyu"
is available upon request.

"LSI Portal Migration"

Last Updated: Wed, 11 Dec 2013 -
Machines: lsi
LSI Compute Portal Migration
========================================
The LSI Compute Portal available at
https://biotech.inbre.alaska.edu/portal will begin its migration to
a new host in 2014.

Following the January 2014 scheduled LSI System downtime, a clone of
the LSI Compute Portal will be available at https://biotech.arsc.edu.
This clone will enable bioinformatics users to submit jobs to the
ARSC pacman system hosting LSI compute hardware and a 128 core
bioinformatics node with 2 TB of RAM. Jobs submitted to the LSI
Compute Portal will run on the LSI hardware via the "bio" queue in
pacman's batch scheduling environment.

The original LSI Compute Portal
(https://biotech.inbre.alaska.edu/portal) will remain in operation
through December 2014 and will continue to submit jobs to the remaining
LSI compute hardware which will remain separate from the ARSC pacman
system.

"Login Cluster Retirement"

Last Updated: Wed, 11 Dec 2013 -
Machines: lsi
LSI "tuxedo" System to be Retired
========================================

    During the May 2014 UAF Fire Alarm and Safety Test Downtime, 
    the "tuxedo.inbre.alaska.edu" LSI login cluster will be
    retired. Replacing the service provided by the tuxedo login cluster
    will be "pacman.arsc.edu", the Penguin Computing Cluster hosted by
    the Arctic Region Supercomputing Center on the UAF campus.

    The pacman system is a 2816 core system offering both interactive
    logins and batch job submissions from the cluster itself and the LSI
    Compute Portal.  The pacman system supports long runtime batch jobs
    and a 128 core node with 2 TB of RAM for bioinformatics applications.

    For information on how to access pacman and the 128 core node
    dedicated to the use of bioinformatics applications, please contact
    ARSC User Support at consult@arsc.edu.

"New Default PrgEnv-pgi"

Last Updated: Wed, 26 Jun 2013 -
Machines: pacman
    
Updated Default PrgEnv-pgi module to 13.4
========================================
In response to a number of cases in which the PGI 12.10 compiler failed
to generate a working executable, we will be moving the pacman default
PrgEnv-pgi module from PrgEnv-pgi/12.10 to PrgEnv-pgi/13.4.

This will affect users who run the "module load PrgEnv-pgi" command
instead of specifying a particular module version, e.g. "module load
PrgEnv-pgi/13.4" for program compilation or in their job submission
scripts.

If you are currently compiling and running successfully with
PrgEnv-pgi/12.10, you are welcome to continue using that version.
Make sure you review your ~/.profile or ~/.cshrc files and explicitly
load the PrgEnv-pgi/12.10 module instead of "module load PrgEnv-pgi".
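
For example, a ~/.profile (bash/ksh) entry pinning the older version
might look like this (a sketch only; csh/tcsh users would place the
same "module load" line in ~/.cshrc):

  # Keep the previous PGI environment after the default changes.
  module load PrgEnv-pgi/12.10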

If your code is failing to compile or run properly with
PrgEnv-pgi/12.10 (the current system default), we encourage you to
try again using the PrgEnv-pgi/13.4 environment instead. The
"module swap PrgEnv-pgi/12.10 PrgEnv-pgi/13.4" command will switch
versions of the PGI compiler for you.

Please forward any questions regarding this change or any issues with
compiling or running your program on the pacman system to the ARSC
Help Desk.

    
    
  

"PrgEnv"

Last Updated: Wed, 22 Oct 2008 -
Machines: pacman
Programming Environments on pacman
====================================
Compiler and MPI Library versions on pacman are controlled via
the modules package.  New accounts load the "PrgEnv-pgi" module by
default.  This module adds the PGI compilers and the OpenMPI stack 
to the PATH.  

If you experience problems with a compiler or library, a newer
programming environment may be available.

Below is a description of available Programming Environments:

Module Name      Description
===============  ==============================================
PrgEnv-pgi       Programming environment using PGI
                 compilers and MPI stack (default version).

PrgEnv-gcc       Programming environment using GNU compilers 
                 and MPI stack.


For a list of the latest available Programming Environments, run:

   pacman1 748% module avail PrgEnv-pgi
   ------------------- /usr/local/pkg/modulefiles -------------------
   PrgEnv-pgi/10.5           PrgEnv-pgi/11.2           
   PrgEnv-pgi/9.0.4(default) 


If no version is specified when the module is loaded, the "default"
version will be selected.


Programming Environment Changes
================================
The following is a table of recent additions and changes to the
Programming Environment on pacman.

Updates on 1/9/2013
====================

Default Module Updates
-----------------------
The default modules for the following packages will be updated on 1/9/2013.

  module name          new default        previous default
  ===================  =================  ================
  abaqus               6.11               6.10
  comsol               4.3a               4.2a
  grads                2.0.2              1.9b4
  idl                  8.2                6.4
  matlab               R2011b             R2010a
  ncl                  6.0.0              5.1.1
  nco                  4.1.0              3.9.9
  OpenFoam             2.1.0              1.7.1
  petsc                3.3-p3.pgi.opt     3.1-p2.pgi.debug
  pgi                  12.5               9.0.4
  PrgEnv-pgi           12.5               9.0.4              
  python               2.7.2              2.6.5
  r                    2.15.2             2.11.1
  totalview            8.10.0-0           8.8.0-1
  

Retired Modules
----------------
The following module files will be retired on 1/9/2013.

* PrgEnv-gnu/prep0
* PrgEnv-gnu/prep1
* PrgEnv-gnu/prep2
* PrgEnv-gnu/prep3
* PrgEnv-pgi/prep0
* PrgEnv-pgi/prep1
* PrgEnv-pgi/prep2
* PrgEnv-pgi/prep3

Known Issues:
-------------
* Some users have reported seg faults for applications compiled with
  PrgEnv-pgi/12.5 when CPU affinity is enabled (e.g. --bind-to-core
  or  --mca mpi_paffinity_alone 1).  Applications compiled with 
  PrgEnv-pgi/12.10 do not appear to have this issue.


 

"PrgEnv"

Last Updated: Mon, 02 Jul 2012 -
Machines: fish
Programming Environment on Fish
========================================
Compiler and MPI Library versions on fish are controlled via
the modules package.  All accounts load the "PrgEnv-pgi" module by
default.  This module adds the PGI compilers to the PATH.  

If you experience problems with a compiler or library, a newer
programming environment may be available.

Below is a description of available Programming Environments:

Module Name      Description
===============  ==============================================
PrgEnv-pgi       Programming environment using PGI
                 compilers and MPI stack (default version).

PrgEnv-cray      Programming environment using Cray compilers 
                 and MPI stack.

PrgEnv-gcc       Programming environment using GNU compilers 
                 and MPI stack.
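
To switch from the default PGI environment to another, use the
modules package's switch command, e.g. (a sketch, assuming the
default PrgEnv-pgi module is currently loaded):

   % module switch PrgEnv-pgi PrgEnv-cray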

Multiple compiler versions may also be available.

Module Name      Description
===============  ==============================================
pgi              The PGI compiler and related tools

cce              The Cray Compiler Environment and related tools

gcc              The GNU Compiler Collection and related tools.


To list the available versions of a package use the "module avail pkg" command:

% module avail pgi
-------------------------- /opt/modulefiles --------------------------
pgi/12.3.0(default) pgi/12.4.0


Programming Environment Changes
================================
The following is a table of recent additions and changes to the
Programming Environment on fish.

  Date         Module Name            Description
  ----------   ---------------------  -----------------------------------

"compilers"

Last Updated: Wed, 21 Jun 2006 -
Machines: linuxws
Compilers
========================================
The ARSC Linux Workstations have two suites of compilers available:

* The GNU Compiler suite version 4.0 including:
  - gcc  C compiler
  - g++  C++ compiler
  - gfortran  Fortran 95 compiler

* The Portland Group (PGI) compiler suite version 6.1 including:
  - pgcc C compiler 
  - pgCC C++ compiler
  - pgf90 Fortran 90 compiler
  - pgf95 Fortran 95 compiler


The PGI compilers require several environment variables to be set: 

For ksh/bash users:
===================
export PGI=/usr/local/pkg/pgi/pgi-6.1
pgibase=${PGI}/linux86-64/6.1 
export PATH=$PATH:${pgibase}/bin
if [ -z "$MANPATH" ]; then
    export MANPATH=${pgibase}/man
else
    export MANPATH=${pgibase}/man:$MANPATH
fi
if [ -z "$LD_LIBRARY_PATH" ]; then
    export LD_LIBRARY_PATH=${pgibase}/lib
else
    export LD_LIBRARY_PATH=${pgibase}/lib:$LD_LIBRARY_PATH
fi
unset pgibase

For csh/tcsh users:
===================
setenv PGI /usr/local/pkg/pgi/pgi-6.1
set pgibase=${PGI}/linux86-64/6.1
setenv PATH ${PATH}:${pgibase}/bin
if ( ! ${?MANPATH} ) then
    setenv MANPATH ${pgibase}/man
else
    setenv MANPATH ${pgibase}/man:${MANPATH}
endif
if ( ! ${?LD_LIBRARY_PATH} ) then
    setenv LD_LIBRARY_PATH ${pgibase}/lib
else
    setenv LD_LIBRARY_PATH ${pgibase}/lib:${LD_LIBRARY_PATH} 
endif   

unset pgibase
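
Once these variables are set, the PGI tools should be on your PATH.
A quick check (hypothetical session):

   % pgcc -V                    # print the compiler version
   % pgf90 -o hello hello.f90   # compile a Fortran 90 program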

"modules"

Last Updated: Mon, 28 Dec 2009 -
Machines: linuxws pacman
Using the Modules Package
=========================

The modules package is used to prepare the environment for various 
applications before they are run.  Loading a module will set the 
environment variables required for a program to execute properly.  
Conversely, unloading a module will unset all environment variables 
that had been previously set.  This functionality is ideal for 
switching between different versions of the same application, keeping 
differences in file paths transparent to the user.

The following modules commands are available:

module avail                - list all available modules
module load {pkg}           - load a module file from environment
module unload {pkg}         - unload a module file from environment
module list                 - display modules currently loaded
module switch {old} {new}   - replace module <old> with module <new>
module purge                - unload all modules

Before the modules package can be used in a script, its init file may
need to be sourced.

On pacman, to do this using tcsh or csh, type:

   source /etc/profile.d/modules.csh

On pacman, to do this using bash, ksh, or sh, type:

   . /etc/profile.d/modules.sh
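
For example, a bash script on pacman might begin as follows (a
sketch; the module name is illustrative):

   #!/bin/bash
   . /etc/profile.d/modules.sh
   module load PrgEnv-pgi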



Known Issues:
=============

2009-09-24  Accounts using bash that were created before 9/24/2009
            are missing the default ~/.bashrc file.  This may cause
            the module command to be unavailable in some instances.

            Should you experience this issue run the following:

            # copy the template .bashrc to your account.
            [ ! -f ~/.bashrc ] && cp /etc/skel/.bashrc ~

            If you continue to experience issues, please contact the 
            ARSC Help Desk.

"modules"

Last Updated: Fri, 14 Jun 2013 -
Machines: fish
Using the Modules Package on Fish
=================================
The modules package is used to prepare the environment for various 
applications before they are run.  Loading a module will set the 
environment variables required for a program to execute properly.  
Conversely, unloading a module will unset all environment variables 
that had been previously set.  This functionality is ideal for 
switching between different versions of the same application, keeping 
differences in file paths transparent to the user.

The following modules commands are available:

module avail                - list all available modules
module load <pkg>           - load a module file from environment
module unload <pkg>         - unload a module file from environment
module list                 - display modules currently loaded
module switch <old> <new>   - replace module <old> with module <new>
module purge                - unload all modules

Before the modules package can be used in a script, its init file may
need to be sourced.

To do this using tcsh or csh, type:

   source /opt/modules/default/init/<shell>

To do this using bash, ksh, or sh, type:

   . /opt/modules/default/init/<shell>

For either case, replace <shell> with the shell you are using.  
If your shell is bash, for example:

   . /opt/modules/default/init/bash

"queues"

Last Updated: Wed, 17 Dec 2008 -
Machines: pacman
Pacman Queues
========================================

The queue configuration is as described below.  It is subject to
review and further updates.


   Login Nodes Use:
   =================
   The pacman1 and pacman2 login nodes are a shared resource and are 
   not intended for computationally or memory intensive work.  Processes 
   using more than 30 minutes of CPU time on login nodes may be killed 
   by ARSC without warning.  Please use compute nodes or pacman3 through
   pacman9 for computationally or memory intensive work.


   Queues:
   ===============
   Specify one of the following queues in your Torque/Moab qsub script
   (e.g., "#PBS -q standard"):

     Queue Name     Purpose of queue
     -------------  ------------------------------
     standard       General use routing queue, routes to standard_16 queue.
     standard_4     General use by all allocated users. Uses 4-core nodes.
                    
     standard_12    General use by all allocated users. Uses 12-core nodes.
                    
     standard_16    General use by all allocated users. Uses 16-core nodes.
                    
     bigmem         Usable by all allocated users requiring large memory 
                    resources. Jobs that do not require very large memory 
                    should consider the standard queues.  
                    Uses 32-core large memory nodes.
                                       
     debug          Quick turnaround queue for debugging work.  Uses 12-core 
                    and 16-core nodes.
                    
     background     For projects with little or no remaining allocation. 
                    This queue has the lowest priority, however projects
                    running jobs in this queue do not have allocation    
                    deducted. The number of running jobs or processors 
                    available to this queue may be altered based on system load.
                    Uses 16-core nodes.
                    
     shared         Queue which allows more than one job to be placed on a
                    node.  Jobs will be charged for the portion of the 
                    cores used by the job.  MPI, OpenMP and memory intensive
                    serial work should consider using the standard queue 
                    instead.   Uses 4-core nodes.
                      
     transfer       For data transfer to and from $ARCHIVE.  Be sure to 
                    bring all $ARCHIVE files online using batch_stage 
                    prior to the file copy.  A sample script appears below.

   See 'qstat -q' for a complete list of system queues.  Note, some 
   queues are not available for general use.
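
   A minimal transfer-queue script might look like the following
   (a sketch; the file names are illustrative):

     #!/bin/bash
     #PBS -q transfer
     #PBS -l nodes=1:ppn=1
     #PBS -l walltime=10:00:00
     #PBS -j oe

     cd $PBS_O_WORKDIR

     # Bring the archived file online, then copy it to $CENTER.
     batch_stage $ARCHIVE/results.tar
     cp $ARCHIVE/results.tar $CENTER/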


   Maximum Walltimes:
   ===================
   The maximum allowed walltime for a job is dependent on the number of 
   processors requested.  The table below describes maximum walltimes for 
   each queue.

   Queue             Min   Max     Max       
                    Nodes Nodes  Walltime Notes
   ---------------  ----- ----- --------- ------------
   standard_4           1   128 240:00:00 10-day max walltime.  
   standard_12          1     6 240:00:00 10-day max walltime.    
   standard_16          1    32  48:00:00 
   debug                1     6  01:00:00 Only runs on 12 & 16 core nodes.
   shared               1     1  48:00:00  
   transfer             1     1  60:00:00
   bigmem               1     4 240:00:00     
   background           1    11  08:00:00 Only runs on 16 core nodes.     


   NOTES:
   * Feb 7, 2013    - The gpu queue and nodes were retired from the compute
                      node pool.  Fish is available for applications requiring
                      GPUs.
   * Oct 1, 2012    - Max walltime for transfer increased to 60 hours.
   * Sept 18, 2012  - Removed references to $WORKDIR and $LUSTRE
   * March 2, 2012  - standard_4 was added to the available queues.
                      The $LUSTRE filesystem should be used with the
                      standard_4 queue.  Accessing files in $WORKDIR
                      from the standard_4 queue may result in significant
                      performance degradation.
   * March 14, 2012 - shared queue was moved from 12 core nodes to 4 
                      core nodes.    
                   

   PBS Commands:
   =============
   Below is a list of common PBS commands.  Additional information is
   available in the man pages for each command.

   Command         Purpose
   --------------  -----------------------------------------
   qsub            submit jobs to a queue
   qdel            delete a job from the queue   
   qsig            send a signal to a running job
   

   Running a Job:
   ==============
   To run a batch job, create a qsub script which, in addition to
   running your commands, specifies the processor resources and time
   required.  Submit the job to PBS with the following command.   (For
   more PBS directives, type "man qsub".)

     qsub <script file>

   Sample PBS scripts:
   --------------
   ## Beginning of MPI Example Script  ############
   #!/bin/bash
   #PBS -q standard_12          
   #PBS -l walltime=96:00:00 
   #PBS -l nodes=4:ppn=12
   #PBS -j oe
               
   cd $PBS_O_WORKDIR

   mpirun ./myprog


   ## Beginning of OpenMP Example Script  ############

   #!/bin/bash
   #PBS -q standard_16
   #PBS -l nodes=1:ppn=16
   #PBS -l walltime=8:00:00
   #PBS -j oe

   cd $PBS_O_WORKDIR
   export OMP_NUM_THREADS=16

   ./myprog    
   #### End of Sample Script  ##################



   Resource Limits:
   ==================
   The only resource limits users should specify are walltime and the
   nodes/ppn limits.  The "nodes" statement requests that a job be
   allocated a number of chunks of the given "ppn" size.
  

   Tracking Your Job:
   ==================
   To see which jobs are queued and/or running, execute this
   command:

     qstat -a



   Current Queue Limits:
   =====================
   Queue limits are subject to change and this news item is not always
   updated immediately.  For a current list of all queues, execute:

     qstat -Q

   For all limits on a particular queue:

     qstat -Q -f <queue-name>



   Maintenance
   ============
   Scheduled maintenance activities on Pacman use the Reservation 
   functionality of Torque/Moab to reserve all available nodes on the system.  
   This reservation keeps Torque/Moab from scheduling jobs which would still 
   be running during maintenance.  This allows the queues to be left running
   until maintenance.  Because walltime is used to determine whether or not a
   job will complete prior to maintenance, using a shorter walltime in your 
   job script may allow your job to begin running sooner.  

   e.g.
   If maintenance begins at 10AM and it is currently 8AM, jobs specifying
   walltimes of 2 hours or less will start if there are available nodes.


   CPU Usage
   ==========
   Only one job may run per node for most queues on pacman (i.e. jobs may 
   not share nodes). 
 
   If your job uses fewer than the number of available processors on a node,
   the job will be charged for all processors on the node unless you use the
   "shared" queue.

   Utilization for all other queues is charged for the entire node regardless
   of the number of tasks using that node:

   * standard_4 - 4 CPU hours per node per hour
   * standard_12 - 12 CPU hours per node per hour
   * standard_16, debug, background - 16 CPU hours per node per hour
   * bigmem - 32 CPU hours per node per hour
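
   For example, a job holding 4 standard_16 nodes for 10 hours is
   charged 4 x 16 x 10 = 640 CPU hours, regardless of how many of
   those cores it actually uses.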

"queues"

Last Updated: Wed, 17 Dec 2008 -
Machines: fish
Fish Queues
========================================

The queue configuration is as described below.  It is subject to
review and further updates.


   Login Nodes Use:
   =================
   Login nodes are a shared resource and are not intended for
   computationally or memory intensive work.  Processes using more
   than 30 minutes of CPU time on login nodes may be killed by ARSC
   without warning.  Please use compute nodes for computationally or
   memory intensive work.


   Queues:
   ===============
   Specify one of the following queues in your Torque/Moab qsub script
   (e.g., "#PBS -q standard"):

     Queue Name     Purpose of queue
     -------------  ------------------------------
     standard       Runs on 12 core nodes without GPUs
     standard_long  Runs longer jobs on 12 core nodes without GPUs.  
     gpu            Runs on 16 core nodes with one NVIDIA X2090 GPU per node.
     gpu_long       Runs longer jobs on 16 core nodes with one NVIDIA X2090 
                    GPU per node.
     debug          Quick turn around debug queue.  Runs on GPU nodes.
     debug_cpu      Quick turn around debug queue.  Runs on 12 core nodes.
     transfer       For data transfer to and from $ARCHIVE.  
                    NOTE: transfer queue is not yet functional.

   See 'qstat -q' for a complete list of system queues.  Note, some 
   queues are not available for general use.


   Maximum Walltimes:
   ===================
   The maximum allowed walltime for a job is dependent on the number of 
   processors requested.  The table below describes maximum walltimes for 
   each queue.

   Queue             Min   Max     Max       
                    Nodes Nodes  Walltime Notes
   ---------------  ----- ----- --------- ------------
   standard             1    32  24:00:00
   standard_long        1     2 168:00:00 12 nodes are available to this queue. 
   gpu                  1    32  24:00:00     
   gpu_long             1     2 168:00:00 12 nodes are available to this queue.
   debug                1     2   1:00:00 Runs on GPU nodes
   debug_cpu            1     2   1:00:00 Runs on 12 core nodes (no GPU)
   transfer             1     1  24:00:00 Not currently functioning correctly.


   NOTES:
   * August 11, 2012  - transfer queue is not yet functional.
   * October 16, 2012 - debug queues and long queues were added to fish.

   PBS Commands:
   =============
   Below is a list of common PBS commands.  Additional information is
   available in the man pages for each command.

   Command         Purpose
   --------------  -----------------------------------------
   qsub            submit jobs to a queue
   qdel            delete a job from the queue   
   qsig            send a signal to a running job
   

   Running a Job:
   ==============
   To run a batch job, create a qsub script which, in addition to
   running your commands, specifies the processor resources and time
   required.  Submit the job to PBS with the following command.   (For
   more PBS directives, type "man qsub".)

     qsub <script file>

   Sample PBS scripts:
   --------------
   ## Beginning of MPI Example Script  ############
   #!/bin/bash
   #PBS -q standard          
   #PBS -l walltime=24:00:00 
   #PBS -l nodes=4:ppn=12
   #PBS -j oe
               
   cd $PBS_O_WORKDIR

   NP=$(( $PBS_NUM_NODES * $PBS_NUM_PPN ))
   aprun -n $NP ./myprog


   ## Beginning of OpenMP Example Script  ############

   #!/bin/bash
   #PBS -q standard
   #PBS -l nodes=1:ppn=12
   #PBS -l walltime=8:00:00
   #PBS -j oe

   cd $PBS_O_WORKDIR
   export OMP_NUM_THREADS=12

   aprun -d $OMP_NUM_THREADS ./myprog    
   #### End of Sample Script  ##################

   NOTE: jobs using the "standard" and "gpu" queues must run compute and memory 
   intensive applications using the "aprun" or "ccmrun" command.  Jobs failing
   to use "aprun" or "ccmrun" may be killed without warning.

   Resource Limits:
   ==================
   The only resource limits users should specify are walltime and the
   nodes/ppn limits.  The "nodes" statement requests that a job be
   allocated a number of chunks of the given "ppn" size.
  

   Tracking Your Job:
   ==================
   To see which jobs are queued and/or running, execute this
   command:

     qstat -a



   Current Queue Limits:
   =====================
   Queue limits are subject to change and this news item is not always
   updated immediately.  For a current list of all queues, execute:

     qstat -Q

   For all limits on a particular queue:

     qstat -Q -f <queue-name>



   Maintenance
   ============
   Scheduled maintenance activities on Fish use the Reservation 
   functionality of Torque/Moab to reserve all available nodes on the system.  
   This reservation keeps Torque/Moab from scheduling jobs which would still 
   be running during maintenance.  This allows the queues to be left running
   until maintenance.  Because walltime is used to determine whether or not a
   job will complete prior to maintenance, using a shorter walltime in your 
   job script may allow your job to begin running sooner.  

   e.g.
   If maintenance begins at 10AM and it is currently 8AM, jobs specifying
   walltimes of 2 hours or less will start if there are available nodes.


   CPU Usage
   ==========
   Only one job may run per node for most queues on fish (i.e. jobs may 
   not share nodes). 
 
   If your job uses fewer than the number of available processors on a node,
   the job will still be charged for all processors on the node.

   Utilization is charged for the entire node regardless of the number
   of tasks using that node:

   * standard - 12 CPU hours per node per hour
   * standard_long - 12 CPU hours per node per hour
   * gpu - 16 CPU hours per node per hour
   * gpu_long - 16 CPU hours per node per hour
   * debug - 16 CPU hours per node per hour
   * debug_cpu - 12 CPU hours per node per hour

"samples_home"

Last Updated: Wed, 31 Mar 2010 -
Machines: fish
Sample Code Repository
========================

Filename:       INDEX.txt 

Description:    This file contains the name, location, and brief 
                explanation of "samples" included in this Sample 
                Code Repository.  There are several subdirectories within 
                this code repository containing frequently-used procedures, 
                routines, scripts, and code used on this allocated system,
                fish.  This sample code repository can be accessed from 
                fish by changing directories to $SAMPLES_HOME, or to the 
                following location: /usr/local/pkg/samples.

                This particular file can be viewed from the internet at:

                http://www.arsc.edu/support/news/systemnews/fishnews.xml#samples_home

Contents:       applications
                jobSubmission
                libraries

*****************************************************************************
Directory:      applications

Description:    This directory contains sample PBS batch scripts for 
                applications installed on fish.

Contents:       abaqus

*****************************************************************************
Directory:      jobSubmission 

Description:    This directory contains sample PBS batch scripts
                and helpful commands for monitoring job progress.  
                Examples include options used when submitting a job,
                such as declaring which group membership you belong to
                (for allocation accounting), how to request a particular 
                software license, etc.

Contents:       MPI_OpenMP_scripts 
                MPI_scripts 
                OpenMP_scripts
                
*****************************************************************************
Directory:      libraries

Description:    This directory contains examples of common libraries and 
                programming paradigms.

Contents:       cuda  
                openacc
                scalapack

"samples_home"

Last Updated: Wed, 31 Mar 2010 -
Machines: pacman
Sample Code Repository
========================

Filename:       INDEX.txt 

Description:    This file contains the name, location, and brief 
                explanation of "samples" included in this Sample 
                Code Repository.  There are several subdirectories within 
                this code repository containing frequently-used procedures, 
                routines, scripts, and code used on this allocated system,
                pacman.  This sample code repository can be accessed from 
                pacman by changing directories to $SAMPLES_HOME, or to the 
                following location: /usr/local/pkg/samples.

                This particular file can be viewed from the internet at:

                http://www.arsc.edu/arsc/support/news/systemnews/index.xml?system=pacman#samples_home

Contents:       applications
                bio
                config
                debugging
                jobSubmission
                libraries
                parallelEnvironment
                training

******************************************************************************
Directory:      applications

Description:    This directory contains sample scripts used to run
                applications installed on pacman.

Contents:       abaqus
                comsol
                gaussian_09
                matlab_dct
                namd
                nwchem
                tau
                vnc
                OpenFOAM

******************************************************************************
Directory:      bio

Description:    This directory contains sample scripts used to run
                BioInformatics applications installed on pacman.

Contents:       mrbayes

******************************************************************************
Directory:      config

Description:    This directory contains configuration files for applications
                which require some customization to run on pacman.

Contents:       cesm_1_0_4
              
******************************************************************************
Directory:      debugging

Description:    This directory contains basic information on how to start up 
                and use the available debuggers on pacman.

Contents:       core_files

*****************************************************************************
Directory:      jobSubmission

Description:    This directory contains sample PBS batch scripts
                and helpful commands for monitoring job progress.  
                Examples include options used when submitting a job,
                such as declaring which group membership you belong to
                (for allocation accounting), how to request a particular 
                software license, etc.

Contents:       MPI_OpenMP_scripts 
                MPI_scripts 
                OpenMP_scripts
                Rsync_scripts

*****************************************************************************
Directory:      parallelEnvironment

Description:    This directory contains sample code and scripts containing 
                compiler options for common parallel programming practices
                including code profiling.  

Contents:       hello_world_mpi

*****************************************************************************
Directory:      training

Description:    This directory contains sample exercises from ARSC 
                training.

Contents:       introToLinux  
                introToPacman

*****************************************************************************

"software"

Last Updated: Wed, 31 Oct 2012 -
Machines: fish
Fish Software
========================================
      python: python version 2.7.2 (2013-02-26)
      This version includes various popular add-ons including 
        numpy, scipy, matplotlib, basemap and more.
             module load python/2.7.2

      abaqus: abaqus version 6.11 (2012-12-26)
      Version 6.11 of abaqus is available via modules:
             module load abaqus/6.11             

      matlab: matlab version R2012b (2012-12-26)
      Matlab R2012b is now available to UAF users via modules:
             module load matlab/R2012b

      matlab: matlab version R2012a (2012-12-07)
      Matlab R2012a is now available to UAF users via modules:
             module load matlab/R2012a

      comsol: comsol version 4.3a (2012-11-30)
      This version of comsol is now available to UAF users via modules:
             module load comsol/4.3a
  
      idl/envi: idl-8.2 and envi 5.0 (2012-10-31)
      IDL version 8.2 and ENVI version 5.0 are now available
      on fish via modules:
             module load idl/8.2

"software"

Last Updated: Tue, 17 Feb 2009 -
Machines: linuxws
Software on Linuxws
========================================
    matlab: matlab-R2012a  (2012-12-07)
        Matlab version R2012a is now available to UAF users via modules:
             module load matlab/R2012a

    comsol: comsol-4.3a (2012-12-07)
        The newest comsol release is now available to UAF users via modules:
             module load comsol/4.3a

    R: R-2.15.0 and R-2.15.1 (2012-08-03)
        The two newest releases of R are now available.
        Load the module to access your preferred version:
             module load r/2.15.0
             module load r/2.15.1

    totalview: totalview-8.10.0-0 (2012-05-23)
        The newest release of the Totalview Debugger is now available with
        CUDA support.  Load the module to access the newest version:
             module load totalview/8.10.0-0

    python: python-2.7.2 (2012-04-20)
        Version 2.7.2 of python is now available via modules
             module load python/2.7.2

    idl/envi: idl-8.1 (2012-02-10)
        Idl/envi version 8.1 is now available via modules
             module load idl/8.1

    comsol: comsol-4.2a.update2 (2012-01-13)
        The newest comsol release is now available via modules
             module load comsol/4.2a.update2

    totalview: totalview.8.9.1-0 (2011-12-08)
        Totalview Debugger version 8.9.1-0 is now available on the linux  
        workstations.  Launch the software by first loading the module:
             module load totalview/8.9.1-0 

    matlab: matlab-R2011b  (2011-12-06)
        Matlab version R2011b is now available to UAF users and 
        can be loaded via modules:
             module load matlab/R2011b

    R: R version 2.13.2 (2011-11-28)
        A newer version is now available via modules:
             module load r/2.13.2

    comsol: comsol-4.2a (2011-11-02)
        The newest comsol release is now available via modules
             module load comsol/4.2a

    automake: automake-1.11.1 (2011-11-01)
           Automake version 1.11.1 has been installed in the following location:
             /usr/local/pkg/automake/automake-1.11.1/bin

    autoconf: autoconf-2.68 (2011-11-01)
           Autoconf version 2.68 has been installed in the following location:
             /usr/local/pkg/autoconf/autoconf-2.68/bin
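
        To use these without a module, prepend the install locations
        to your PATH (bash/ksh shown; a sketch):
             export PATH=/usr/local/pkg/automake/automake-1.11.1/bin:$PATH
             export PATH=/usr/local/pkg/autoconf/autoconf-2.68/bin:$PATH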

    visit: visit 2.3.2 (2011-10-17)
           Visit 2.3.2 is now available on the Linux Workstations 
           via modules:
             module load visit/2.3.2

    ncl: ncl 6.0.0 (2011-10-17)
           NCL 6.0.0 is now available on the Linux Workstations 
           via modules:
             module load ncl/6.0.0 

    ncview: ncview 2.0 (2011-07-19)
        Ncview version 2.0 is now available via modules
             module load ncview/2.0

    abaqus: abaqus 6.11 (2011-07-19)
        The newest abaqus release is now available via modules
             module load abaqus/6.11

    comsol: comsol-4.2 (2011-06-08)
        The new comsol release is now available via modules
             module load comsol/4.2

    matlab: matlab-7.10.0 (R2010a)  (2010-10-20)
        Matlab 7.10.0 is now available to UAF users.  A module is 
        available for this software and can be loaded with:
                module load matlab-7.10.0
        After loading the module, type 'matlab' at the prompt
        to open a new matlab session.

    mexnc:  mexnc-r3240 (aka mexcdf)  (2010-10-07)
        Mexnc/mexcdf is now available.  To use this software
        with matlab, first load the matlab-7.8.0 module
        then enter the following at the matlab prompt:
        addpath /usr/local/pkg/mexnc/mexcdf-r3240/mexnc

    comsol: comsol-4.0a (2010-09-20)
	The newest version of comsol is now available in
	/usr/local/pkg/comsol/comsol-4.0a. 

    idl: idl-7.1 (2010-05-17)
        idl 7.1 is now available on the linux workstations.  A
        module is available as idl-7.1 for use.

    gsl: gsl-1.13 GNU Scientific Library 1.13 (2010-01-14)
        The newest version of GSL is now available.  This version
        was compiled using the GNU compiler and is now available
        on the Workstations in the following location:
        /usr/local/pkg/gsl/gsl-1.13/

    pgi: pgi-9.0.4 (2010-01-11)
        The pgi 9.0.4 compiler is now available via the
        pgi-9.0.4 module.

    paraview: paraview-3.6.2 (2010-01-11)
        Paraview 3.6.2 is now available via the paraview-3.6.2
        module.

    vapor: vapor-1.5.0 (2009-10-21)
        Vapor 1.5.0 is now available via the vapor-1.5.0 module.

    paraview: paraview-3.6.1 (2009-10-20)
        Paraview 3.6.1 is now available via the paraview-3.6.1
        module.

    idv: idv-2.7u2 (2009-10-20)
        idv 2.7 update 2 is now available via the idv-2.7u2 module.

    subversion: subversion-1.6.3 (2009-09-03)
        The latest version of subversion is available via the
        subversion-1.6.3 module.

    google-earth: google-earth-5.0 (2009-08-31)
        Google Earth 5.0 is now available on the Workstations in the
        following location:
        /usr/local/pkg/google-earth/google-earth-5.0

    ncl: ncl-5.1.1 (2009-08-05)
        The latest version of ncl is now available via the "ncl-5.1.1"
        module.

    git: git-1.6.1.3 (2009-07-28)
	git 1.6.1.3 is now available. This package is available
	by loading the "git-1.6.1.3" module.

    abaqus: Abaqus 6.9 (2009-07-09)
        Abaqus 6.9 is now available.  This package is available
        by loading the "abaqus-6.9" module.

    matlab: matlab-7.8.0 (2009-06-19)
        Matlab 7.8.0 is now available.  A module is 
        available for this software and can be loaded with:
                module load matlab-7.8.0
        After loading the module, type 'matlab' at the prompt
        to open a new matlab session.

    cmake: cmake-2.6.4 
        The latest version of cmake is available via the cmake-2.6.4 
        module.    

    blender: blender-2.48a
	Blender 2.48a is now available on the Workstations.  It is 
	available in a module, as blender-2.48a

    avizo: avizo-6.0 (2009-04-30)
	Avizo 6.0 is now available in the following directory:
	/usr/local/pkg/avizo/avizo60

    visit: visit-1.11.2 (2009-04-29)
	VisIt 1.11.2 is now available via the "visit-1.11.2" module.

    ncl: ncl-5.1.0 (2009-04-29)
        The latest version of ncl is now available via the "ncl-5.1.0"
        module.

    paraview: paraview-3.4.0 (2009-03-10)
	Paraview 3.4.0 is now available. It can be accessed with the
	paraview-3.4.0 module file and is located in the
	/usr/local/pkg/paraview/paraview-3.4.0 directory.

    visit: visit-1.11.1 (2009-03-02)
	VisIt is now available in /usr/local/pkg/visit/visit-1.11.1
	Module files are available as both "visit" and "visit-1.11.1"

    comsol: comsol-3.5a (2009-02-06)
	The newest version of comsol is now available in
	/usr/local/pkg/comsol/comsol-3.5a.  This version appears
	to resolve the previous errors when starting the software.

    matlab: matlab-7.7.0 (2009-02-05)
        The latest version of Matlab is available for 
        use by loading the matlab-7.7.0 module.  
	
    matlab: matlab-7.6.0 (2008-07-24)
        The latest version of Matlab is available for 
        use by loading the matlab-7.6.0 module.  
	
    paraview: paraview-3.0.2 (2007-09-19)
	Paraview version 3.0.2 has been installed into
	/usr/local/pkg/paraview/paraview-3.0.2.  It is 
	available via a module (paraview-3.0.2).

    acml: acml-3.6.0 & acml-4.0.0 AMD Core Math Library (2007-09-14)
	The ACML has been installed and is available in
	/usr/local/pkg/acml. The following versions were installed:
		acml-3.6.0.gcc
		acml-3.6.0.pgi
		acml-4.0.0.gcc
	These libraries are available as of Sep 14th, 2007 and
	the current link was set to point to acml-3.6.0.gcc.

    idv: idv-2.2: Integrated Data Viewer (2007-07-28)
        The new version of idv (2.2) has been installed
        in /usr/local/pkg/idv/idv-2.2 and will be made
        the default version on July 12th, 2007.

"storage"

Last Updated: Mon, 19 Jun 2006 -
Machines: linuxws
Linux Workstation Storage
========================================
The environment variables listed below represent paths.  They are
expanded to their actual value by the shell, and can be used in commands
(i.e. ls $ARCHIVE).  From the command prompt the value and the variable
are usually interchangeable. 

In the listing below, $USER is an environment variable holding your
ARSC username.

   
  Filesystem     Purpose                 Purged   Backed Up   Quota
  -------------  ----------------------  -------  ----------  ------
  $HOME          shared filesystem       No       Yes         4 GB
   
  $SCRATCH       temp filesystem         Yes      No          None (1) 
          
  $CENTER        temp filesystem         Yes      No          750 GB
   
  $ARCHIVE       long term storage       No       Yes         None
  $ARCHIVE_HOME  
  
  
  NOTES:
  (1) Use is limited by the available space on the disk. 
 

  Environment Variable Definitions
  =================================

  Variable        Definition
  --------------  ---------------------
  $HOME           /u1/uaf/$USER  
  $SCRATCH        /scratch/$USER
  $CENTER         /center/w/$USER
  $ARCHIVE        /archive/$HOME
  $ARCHIVE_HOME   /archive/$HOME


-- Home directories are intended primarily for basic account info
   (e.g.  dotfiles). Please use $SCRATCH (your /scratch/$USER directory)
   for compiles, inputs, outputs, etc.
   
   *  The 'quota' command will show quota information for your $HOME
      directory.  

-- The $SCRATCH directories are local to each machine.  
   When moving to another machine you will also need to move
   your files.  This file system is not backed up; files not accessed in
   over 30 days are purged (deleted).


-- Your $SCRATCH directory is not created by default.  If one does
   not exist on the machine you are using, type 'mkdir $SCRATCH' to
   create one.


-- Purging: Files not accessed in over 30 days in $SCRATCH or $CENTER
   directories are purged, and these directories are not backed up.
   Please store what you want to keep in $ARCHIVE.
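
   For example, to save a results file before it is purged (a sketch;
   the file name is illustrative):

      % cp $SCRATCH/results.tar $ARCHIVE/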


-- Long-term backed up storage is only available in your $ARCHIVE
   directory.  As this is an NFS-mounted filesystem from bigdipper, 
   files will be temporarily unavailable when bigdipper is taken down 
   for maintenance.  I/O performance in this directory may be much 
   slower.  Compiles and runs in $ARCHIVE are prohibited.


See http://www.arsc.edu/arsc/resources/storage-resources/index.xml for more information on storage policies at ARSC.

