ARSC HPC Users' Newsletter 325, September 23, 2005

ARSC Announces Post-Doctoral Fellowship Award Recipients

The Arctic Region Supercomputing Center is proud to announce the first awards of the center's new Postdoctoral Fellowship Program. We have filled eight positions, and you're invited to read more about the program and the eight post-docs at:

http://www.arsc.edu/news/archive/postdoc_awards.html

ARSC to Resume Purging of $WRKDIR

This article is early notice that ARSC will resume daily automated purging of the $WRKDIR file systems on October 31, 2005. ARSC is lengthening its traditional purge period of 10 days three-fold, to 30 days. Commencing October 31st, the purger will run every day, and each day, files which haven't been accessed in 30 or more days will be deleted.

The goals of purging include freeing up disk space and improving disk performance. (It was news to the editor, but file access times, on the same physical disk, can vary by 30% or more, and old files tend to accumulate on the fastest parts of the disk.)

The resumption of purging means a number of things, including the following:

  1. For purging, the "age" of a file is based on its last "access" time, not its last "modification" time. For instance, data files read as input by your HPC application are considered to have been "accessed" every time they're read, even if you change them only on rare occasions.

  2. The command "ls -ul" shows time of "access". ("ls -l" shows time of modification. See "man ls.") You might play around with "ls -ul" a bit...

  3. You are (and always have been) responsible for protecting data on $WRKDIR. It has never been backed up.

  4. If you have valuable data, code, or other files on $WRKDIR, and you have not backed them up lately, this is a great time. "Tar" them up and send them to $ARCHIVE.

  5. We recommend against "gzipping" files destined for $ARCHIVE, as gzip takes much longer than the file transfer itself and, when your file is moved to tape on $ARCHIVE, it is automatically compressed in hardware, anyway.

  6. We do recommend making "tar" files, as it takes much longer to store and retrieve multiple small files than one larger file.

  7. ARSC will provide tools to show you which of your files will be subject to deletion on a given day.

  8. If you have unique requirements which will be difficult to meet once purging is resumed, please feel free to contact ARSC Consulting. Similarly, we will be available to discuss any questions or concerns you may have about this.
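The steps in items 2, 4, and 6 above can be sketched as shell commands. (The directory name "myproject" is illustrative only; substitute your own.)

```shell
# Check access times -- the "age" the purger uses -- on your files:
ls -ul $WRKDIR/myproject

# Bundle the directory into a single tar file; many small files take
# much longer to store and retrieve than one larger file:
cd $WRKDIR
tar -cf myproject.tar myproject

# Copy the tar file to $ARCHIVE.  No need to gzip it first: files
# migrated to tape are compressed in hardware automatically.
cp myproject.tar $ARCHIVE/
```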

--

Please watch your email, the MOTD, and "news" items for more information.

Report on Symposium on High-Performance Reconfigurable Computing

[ Thanks to Dr. Greg Newby of ARSC for this report! ]

From August 22 to 24, 2005, experts from around the country participated in a symposium and workshop on high-performance reconfigurable computing at the University of Alaska Fairbanks. This event was organized and sponsored by ARSC, and highlighted ARSC's early involvement in the utilization of field programmable gate array (FPGA) technology in high-performance computing.

FPGAs are part of ARSC's new Cray XD1 computer ("nelchina"), which has been undergoing evaluation and early use during summer 2005. With the involvement of Dr. Tarek El-Ghazawi of the George Washington University, as well as several of Dr. El-Ghazawi's doctoral students who spent the summer at ARSC, an extensive knowledge base of the promise of FPGAs for high-performance computing is being developed.

The symposium consisted of two days of presentations and panel discussions. Presentation themes included:

  • overviews of FPGAs for high-performance computing
  • practical issues for utilizing contemporary systems incorporating FPGAs
  • reports from scientists utilizing these technologies
  • information from vendors about current and planned offerings

After two days of symposium events, many attendees participated in a one-day hands-on workshop addressing the utilization of FPGAs. This workshop was taught by Greg Woods of Cray, Inc. and utilized ARSC's XD1 system.

For more information about the symposium, visit:

http://www.arsc.edu/news/archive/FPGA.html

Stay tuned for future announcements on the XD1 and related ARSC events.

Matlab Training at ARSC

Due to overwhelming interest in the Matlab training that was offered this summer, ARSC is planning to have Mathworks return to UAF for more training in the near future. In preparation, we would like to ask anyone who is interested to take a minute to answer the following short list of questions.

Your answers will help us understand what will best serve the Matlab users across the campus.

Please copy these questions into an email message and send your reply to Tom Logan at: logan@arsc.edu

--


1)  Assuming that multi-day Matlab classes are offered this fall (before
    the end of the semester), which of these would you be more likely to
    attend:
        ___ Weekday AM only training
        ___ Weekday PM only training
        ___ Weekday All day training
        ___ Weekend training
        ___ Cannot attend training during the semester

2)  Which of these topics are of interest to you?
        ___ MATLAB Fundamentals and Programming Techniques
        ___ Advanced MATLAB Programming Techniques
        ___ MATLAB for Building Graphical User Interfaces
        ___ Integrating MATLAB with External Applications
        ___ Statistical Methods in MATLAB
        ___ Simulink for System and Algorithm Modeling
        ___ MATLAB and Simulink for Control Design Acceleration
        ___ MATLAB for Signal Processing
        ___ MATLAB for Image Processing

(Detailed course descriptions are at:
   
http://www.mathworks.com/services/training/courses/index.html)


3)  Do you have other comments/concerns/suggestions about the scheduling
    or content of any Matlab training that may be offered at ARSC?

---

Again, please send your response to Tom Logan at: logan@arsc.edu

Quick-Tip Q & A



A:[[ Here's a Fortran DO loop and a print statement:
  [[
  [[  DO I = N, M, L
  [[    ...
  [[  ENDDO
  [[ 
  [[  PRINT*, "Value of  I, after the loop terminates: ", I
  [[
  [[ Assuming the loop executes all iterations specified (e.g., it doesn't
  [[ contain any "EXIT" statements), can you precompute the value which will
  [[ be printed for I?   And what is it?



  Something to remember about Fortran DO loops: the final value of the
  iterator ("I" in the example) is a value never used in the body of the
  loop.  On each iteration, it's compared against the termination value
  ("M"), and if it's out of range according to the sign of the increment
  ("L"), the body is skipped and the loop exited.

  Thus,  I = 102  on exit from this loop:
  
      DO I = 2, 100, 2
       ... 
      ENDDO

  And,  I = 39  on exit from this loop:
  
      DO I = 50, 40, -1
       ... 
      ENDDO

  For "normal" cases this formula provides the desired value:

      I = N + (( M - N + L ) / L ) * L

  However, this doesn't account for loops which execute zero times, like
  this example, which would exit with I = 50:

      DO I = 50, 40, 1
       ... 
      ENDDO
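As a quick sanity check, the formula can be evaluated with shell integer arithmetic, whose truncating division matches Fortran's:

```shell
# I = N + ((M - N + L) / L) * L

# N=2, M=100, L=2:
echo $(( 2 + ((100 - 2 + 2) / 2) * 2 ))       # prints 102

# N=50, M=40, L=-1:
echo $(( 50 + ((40 - 50 + -1) / -1) * -1 ))   # prints 39
```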




Q: Here are two lines extracted from a C program (could just as easily
   be a Fortran example):
   -------------------------------------------------------
        n = VALUE_OF_N;
   #ifdef VERBOSE
   -------------------------------------------------------


   Here's a two line makefile to compile the program: 
   -------------------------------------------------------
   prog:
           cc -D VALUE_OF_N=$(NNN) -D VERBOSE prog.c
   -------------------------------------------------------


   Here's how you could "make" this, specifying the ultimate value of
   "VALUE_OF_N" (and thus, of "n" as well) on the command line:
   -------------------------------------------------------
   $   make NNN=1000
   -------------------------------------------------------


   And, finally, here's my question.  Is there a way to define or
   undefine the macro "VERBOSE" (which takes no value) from the make
   command line?  I want something like the following, which I know is
   incorrect:

   $   make NNN=1000 -D VERBOSE 

[[ Answers, Questions, and Tips Graciously Accepted ]]


Current Editors:
Ed Kornkven ARSC HPC Specialist ph: 907-450-8669
Kate Hedstrom ARSC Oceanographic Specialist ph: 907-450-8678
Arctic Region Supercomputing Center
University of Alaska Fairbanks
PO Box 756020
Fairbanks AK 99775-6020
Archives:
    Back issues of the ASCII e-mail edition of the ARSC T3D/T3E/HPC Users' Newsletter are available by request. Please contact the editors.