ARSC HPC Users' Newsletter 281, November 7, 2003

Klondike Open November 10th

The ARSC X1 (klondike) has been in use by a limited number of "pioneer" users for over a month. We're happy to announce that beginning Monday, November 10, it will be open to all approved HPCMPO and academic users.

Requirements for access to klondike are essentially identical to those for access to chilkoot, yukon, and icehawk. Interested ARSC academic users should contact consult@arsc.edu to request account activation.

The scalable vector system is a new concept. Since the X1 replaces both the T3E and the SV1ex, we'll be learning together how to manage its mix of MPP and PVP jobs. Once you're on the system, please read the multiple "news" items and check out our evolving web document, "Getting Started on the Cray X1".

We look forward to helping you port your codes and get maximum performance out of this exciting machine.

ARSC Announces Installation of Cray X1

[ ARSC press release, 4 November 2003 ]

Scientists are gearing up to begin production runs on the Arctic Region Supercomputing Center's newest computer, a Cray X1. The system, which was accepted by the center last week, is a 128-processor vector processing computer named Klondike, serial number 6.

Pioneer users of Klondike from the Naval Postgraduate School (NPS) are already optimizing a coupled ice-ocean model to run on the system.

"The speed of this new computer is allowing us to plan a project of a magnitude that we would not have attempted on other systems," said Wieslaw Maslowski of NPS. "The speed-ups on the tests we've run suggest a performance of one hundred times better than what we were seeing on previous Cray computers."

The researchers will be using Klondike to run a pan-Arctic, eddy-resolving coupled ice-ocean model of the last fifty years, forced with recently released atmospheric reanalysis data for 1957 to the present from the European Centre for Medium-Range Weather Forecasts (ECMWF, ERA-40). The model will provide important insights into the workings of the Arctic Ocean system, including its variability and the recent decreases in ice cover and thickness demonstrated by satellite and submarine data. Understanding future changes in the arctic region may become critical from both commercial and defense perspectives. If warming in the arctic continues, there will be at least summer access to northern sea routes connecting the Pacific Ocean and Europe, an alternative to the Panama Canal.

"If this warming continues, the U.S. Navy may soon need to focus on the Arctic Ocean (after a break since the cold war) to meet operational requirements in this area related to search and rescue, and tactical- and defense-related activities," says Maslowski. "Realistic simulations of arctic environmental change will allow us to understand the history of the Northern Polar Region, predict scenarios of future change, as well as to investigate the limits of predictability."

"This project is contributing to a topic that is significant to both the arctic and the world," said ARSC director Frank Williams. "We're pleased to be able to provide the tools that will make this level of scientific study possible."

Taming the Hpmcount Beast

[ Thanks to Kate Hedstrom of ARSC for this article. ]

As mentioned in newsletter #251, the command for getting performance information on the IBM is called hpmcount. Hpmcount for the Power4 (iceflyer) has a different set of groups and options than its Power3 (icehawk) counterpart. The complete documentation is available at:

http://www.sdsc.edu/SciApps/IBM_tools/ibm_toolkit/HPM_2_4_3.html

and a short summary can be obtained with "hpmcount -h".

By default, hpmcount output goes to stdout along with the normal output from your job. In MPI jobs with multiple processes, all processes send their output to the same stdout file. One useful option is "-o file", which instead creates one file per process, with names like file_<NNNN>.<PPPP> (where "<NNNN>" is the process number and "<PPPP>" is the PID of process 0), for example, "file_0000.3456". With 16 processes, this gives you a directory with 16 hpmcount output files.
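
For instance, after a hypothetical 4-process run with "-o hpmcount" (the PID 3456 here is made up for illustration), the directory would contain:

  % ls hpmcount_*
  hpmcount_0000.3456  hpmcount_0001.3456  hpmcount_0002.3456  hpmcount_0003.3456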

No one wants to look through all 16 files, so I have written a Perl script to extract the information of interest. The script works whether you have the 16 individual files or all 16 outputs in one file. It only handles the default performance counter group, 60, but can be modified for the other useful groups if, for instance, you need to look at cache issues.

I ran my job inside a LoadLeveler script with:

  poe /usr/local/bin/hpmcount -g 60 -o hpmcount ./oceanM ocean_bench.in
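
The batch script itself isn't shown above; as a minimal sketch, a LoadLeveler script wrapping that command might look like the following. The task count, output file names, and wall clock limit are illustrative assumptions, not the actual settings used:

  #!/bin/ksh
  # @ job_type         = parallel
  # @ total_tasks      = 16
  # @ output           = ocean.$(jobid).out
  # @ error            = ocean.$(jobid).err
  # @ wall_clock_limit = 1:00:00
  # @ queue

  # Run the MPI job under hpmcount, one output file per process
  poe /usr/local/bin/hpmcount -g 60 -o hpmcount ./oceanM ocean_bench.in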

Running the job produced the 16 hpmcount files. The lines of output I am interested in are:

  Total execution time (wall clock time): 123.622415 seconds
  Maximum resident set size                    : 41684 Kbytes
  Flip rate (flips / WCT)                    :         229.951 Mflip/sec
  Flips / user time                          :         245.685 Mflip/sec

I want to extract the longest time, the largest resident set size, and the totals for the flips numbers. "Flips" are floating point instructions; the flip rate, in flips per second, is comparable to the Mflops rates reported on the Crays.
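
As a quick sanity check using the sample numbers above, the flip rate times the wall clock time gives the total floating point instruction count for that process:

  229.951 Mflip/sec * 123.622415 sec  =  approx. 28,427 Mflip (2.84e10 instructions)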

My script is called hpmavg and is run with the command:


   % ./hpmavg hpmcount*

It produces output like this:

  Total (wall) time : 123.622415 from 16 procs
  Max size          : 43852 from 16 procs
  Flip rate         : 3714.56 total or 232.16 per proc
  Flips / user time : 3771.631 total or 235.7269375 per proc

The script isn't anything special, but it might give you ideas for creating your own hpmcount reports. Here it is:

#!/usr/bin/perl -w
#
# Report on a collection of hpmcount files on IBM.
# Usage: hpmavg file1 [file2 ...]

use strict;

my $time = 0;    # longest wall clock time seen
my $size = 0;    # largest resident set size seen
my $sum1 = 0;    # sum of per-process flip rates
my $sum2 = 0;    # sum of per-process flips / user time
my $num1 = 0;
my $num2 = 0;
my $num3 = 0;
my $num4 = 0;

while (<>) {
    # Keep the maximum wall clock time over all processes.
    if (/\(wall clock time\)\s*:\s+(\d+\.\d+)/ ) {
        $num1++;
        $time = ($1 > $time) ? $1 : $time;
    }
    # Keep the maximum resident set size over all processes.
    if (/resident set size\s+:\s+(\d+)/ ) {
        $num2++;
        $size = ($1 > $size) ? $1 : $size;
    }
    # Sum the flip rates to get an aggregate rate.
    if (/Flip rate \(flips \/ WCT\)\s+:\s+(\d+\.\d+)/ ) {
        $num3++;
        $sum1 += $1;
    }
    # Likewise for flips per user time.
    if (/Flips \/ user time\s+:\s+(\d+\.\d+)/ ) {
        $num4++;
        $sum2 += $1;
    }
}
# Avoid dividing by zero if no hpmcount output was found.
die "No hpmcount output found in the input files\n" unless ($num3 && $num4);
my $avg1 = $sum1 / $num3;
my $avg2 = $sum2 / $num4;
print "Total (wall) time : $time from $num1 procs\n";
print "Max size          : $size from $num2 procs\n";
print "Flip rate         : $sum1 total or $avg1 per proc\n";
print "Flips / user time : $sum2 total or $avg2 per proc\n";

ARSC at SC 2003

For those who are not attending the SC2003 conference, here's a list of ARSC's activities. If you're attending, be sure to look us up.

--

The ARSC booth (#305) shows how we're meeting our primary mission of supporting science and engineering research in the high latitudes. Meet staff, ask questions, get materials, etc.

--

ARSC will contribute cycles from our IBM p690 server to a demonstration computational grid spanning all (yes all) continents:

http://www.hlrs.de/news-events/2003/sc2003/HPC-CHALLENGE/

--

Ed Kornkven of ARSC and Andrew Johnson of AHPCRC are presenting a tutorial:

Vector Performance Programming

Time: Monday, November 17, 8:30AM - 12:00PM
Abstract: This tutorial will provide an overview of the current state of the art regarding vector computer systems and their application to science.

--

Guy Robinson, recently of ARSC, and still of CUG, is chair of an open discussion (known as a "Birds of a feather" session, or "BOF"):

Cray X1 Programming Environments and Experiences

Time: Tuesday, November 18, 12:00PM - 1:00PM
Location: Room 36-37
Description: This BOF will bring together those with an interest in programming the Cray X1.

--

Tom Baring and Andrew Lee are presenting a poster:

Title: SX-6 Benchmark Results and Comparisons

Posters will be up all week; the reception is:

Time: Tuesday, November 18, 5:00PM - 7:00PM
Location: Lobby #2

--

Here in Fairbanks, ARSC will join Access Grid events. Attend SC 2003 without leaving home!

SC Global 2003:

Time: November 18-20, 2003, 8:30am - 1:30pm Alaska time
Location: Butrovich Building, Room 109, on the UAF campus

For a schedule of events, go to: http://www.sc-conference.org/sc2003/global.html (Note: On this web page, all times are Mountain)

Questions? Contact: Paul Mercer
voice: 474-6110
email: mercer@arsc.edu

Quick-Tip Q & A



A:[[ Here are the first four lines of a file that just goes on and on:
  [[  
  [[  USER     DATE           CMDS         REAL          SYS         USER
  [[  jimbob   ALL          2267.0     674011.6       2113.5     258037.0
  [[  bobbob   ALL           109.0       1335.0        570.8         98.2
  [[  amysue   ALL            58.0    1223863.9       3003.7    1186547.6
  [[   
  [[ The columns remain consistent for the entire file.  
  [[ I want to sort this on fields like "REAL" and "USER," and thought I
  [[ could just use Unix "sort"... but the data is delimited by varying
  [[ numbers of spaces and I can't figure it out.  Doing it by hand
  [[ is taking forever!  Can anyone help?


  #
  # From Richard Griswold:
  #
  Sort has the '-k' option to specify the column to sort on.  The trick to
  overcoming the varying number of spaces is to also use the '-b' option
  to ignore leading blanks.  Finally use '-n' to sort numerically.  For
  example, to sort on the "REAL" column (column 4), you can use:
  
    sort -b -n -k 4 myfile
  
  Or in shorthand:
  
    sort -bnk 4 myfile
  
  Or in obfuscated mode :)
  
    sort -k 4bn myfile
  
  
  #
  # Thanks to Martin Luthi:
  #
  Use the sort field indicator +n, which skips the first n fields.
  For example, sorting on the third column would be:
  
    mybox> sort +2 yourfile 
  
  Or 
  
    mybox> sort -k 3 yourfile 
  
  Many more examples are given in "Unix Power Tools", O'Reilly
  
  
  #
  # And thanks to Greg Newby:
  #
  The Unix "sort" command is smart enough to skip fields of varying
  numbers of tabs or spaces.  Use "+X" to skip to field "X" (the first
  field is #1).  Also, use "-n" to sort numerically rather than lexically,
  and "-r" to reverse the sort.
  
  If you want to do more complex things (such as extract certain fields,
  or add information) consider these commands.  Use the "man" command to
  get basic usage information.
  
      cut  (extracts certain fields or columns)
      awk  (pattern matching)
      perl (800-lb gorilla big brother of awk)
  
  Sample output (I ditched the heading line, which is not part of what you
  want to sort):
  
  # cat a.txt
  jimbob   ALL          2267.0     674011.6       2113.5     258037.0
  bobbob   ALL           109.0       1335.0        570.8         98.2
  amysue   ALL            58.0    1223863.9       3003.7    1186547.6
  
  -- To sort lexically on the whole line:
  # sort a.txt
  amysue   ALL            58.0    1223863.9       3003.7    1186547.6
  bobbob   ALL           109.0       1335.0        570.8         98.2
  jimbob   ALL          2267.0     674011.6       2113.5     258037.0
  
  -- To sort in reverse (largest to smallest), numerically, on field 3 (CMDS):
  # sort +2 -rn a.txt
  jimbob   ALL          2267.0     674011.6       2113.5     258037.0
  bobbob   ALL           109.0       1335.0        570.8         98.2
  amysue   ALL            58.0    1223863.9       3003.7    1186547.6
  
  -- To sort smallest to largest, numerically, on field 6 (the last column):
  # sort -n +5 a.txt
  bobbob   ALL           109.0       1335.0        570.8         98.2
  jimbob   ALL          2267.0     674011.6       2113.5     258037.0
  amysue   ALL            58.0    1223863.9       3003.7    1186547.6
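  
  #
  # Editors' addendum:
  #
  Since perl came up: as a sketch (not from any of the answers above),
  a one-liner that sorts numerically, largest first, on the "REAL"
  column (the fourth whitespace-separated field) could look like:
  
    % perl -e 'print sort { (split " ", $b)[3] <=> (split " ", $a)[3] } <>' a.txt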



Q: Sorry, I can't convert decimal to hex in my head.  Anyone have a
   handy way to convert numbers between bases?  A Unix utility or
   something better, perhaps?

[[ Answers, Questions, and Tips Graciously Accepted ]]


Current Editors:
Ed Kornkven ARSC HPC Specialist ph: 907-450-8669
Kate Hedstrom ARSC Oceanographic Specialist ph: 907-450-8678
Arctic Region Supercomputing Center
University of Alaska Fairbanks
PO Box 756020
Fairbanks AK 99775-6020
Archives: Back issues of the ASCII e-mail edition of the ARSC T3D/T3E/HPC Users' Newsletter are available by request. Please contact the editors.