ARSC T3D Users' Newsletter 23, February 17, 1995

ARSC T3D Upgrades

In the next month we will be upgrading the T3D Programming Environment (libraries, tools and compilers) from P.E. 1.1 to P.E. 1.2.

We are also planning to install CF90 and C++ for the T3D (in the next few months). A description of CF90 is given below and a very complete description of C++ for the T3D is given on the CRI web page:


  http://www.cray.com/PUBLIC/product-info/sw/C++/C++.html

I am interested in hearing from users who want to use the CF90 and C++ products as soon as they are available.

Upgrade to the T3D Memory

On February 7th, ARSC upgraded the memory on each PE from 2MWs to 8MWs. If any users have questions about this, please contact Mike Ess.

Upgrade to MAX 1.2

On January 31st, ARSC upgraded to the 1.2 version of MAX, the T3D operating system. If any users notice differences in their codes running on the T3D they should notify Mike Ess.

How much memory is now available to the user with the new 8MW nodes?

On a single T3D node there is now room for a single array of over 7.8 million floating point numbers in C. This number goes down, of course, as the user or the library routines use heap or stack space, but as an upper bound it seems good. Comparing the results to those of newsletter #6 (9/30/94), we have:

                   2MW node/MAX 1.1   8MW node/MAX 1.2

  memory                    2097152            8338608
  (words)

  largest array in C        1541000            7832000
  (floating point numbers)

  an estimate of MAX size    554152             506608
  (words)
So we have the nonintuitive result that the O.S. has actually gotten smaller! But the test program used last September was pretty trivial, so I tried a new program to see how much memory is available to the user. I started with the following small program and used a makefile to vary the size of the large static array 'a':

  #include <stdio.h>

  #define NMAX 7800000
  static double a[ NMAX ];
  int main()
  {
    long i;
    double sum = 0.0;

    /* fill a[0..NMAX-1] with 1..NMAX and check the sum against N(N+1)/2 */
    for( i = 0; i < NMAX; i++ ) a[ i ] = i + 1;
    for( i = 0; i < NMAX; i++ ) sum = sum + a[ i ];
    if( sum != (NMAX*(NMAX+1L))/2 ) {
      printf("error in summation %d %f %ld\n",NMAX,sum,(NMAX*(NMAX+1L))/2);
    } else {
      printf("ok in summation %d %f %ld\n",NMAX,sum,(NMAX*(NMAX+1L))/2);
    }
    return 0;
  }
  
          sed 's/7800000/7832769/' sumc.c > sum1.c
          /mpp/bin/cc -X 1 sum1.c
          a.out -npes 1 -base 0x028
  ok in summation 7832769 30676139020065.000000 30676139020065
          sed 's/7800000/7832870/' sumc.c > sum1.c
          /mpp/bin/cc -X 1 sum1.c
          a.out -npes 1 -base 0x028
  mppexec: application too large or bad a.out
  Make: "a.out -npes 1 -base 0x028": Error code 127
So with the current 8MW nodes, the 1.2 version of MAX and the 1.1 Programming Environment, the C programmer can have a single array of 7832769 64-bit doubles in a 1 PE executable.
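The sed-and-recompile probe above is easy to script. Below is a minimal sketch of the loop; the T3D-specific compile and run steps are left as comments (since /mpp/bin/cc and mppexec exist only on the T3D front end), so only the sed substitution step actually runs here:

```shell
# Patch each candidate array size into the source and (on the T3D) rebuild.
printf 'static double a[ 7800000 ];\n' > sumc.c
for n in 7832769 7832870; do
  sed "s/7800000/$n/" sumc.c > sum1.c
  grep -c "$n" sum1.c            # prints 1 if the substitution took
  # On the T3D front end one would now run:
  #   /mpp/bin/cc -X 1 sum1.c && a.out -npes 1 -base 0x028
done
# Words left over for MAX, libraries and stack at the largest working size,
# taking 8 MW = 8388608 64-bit words:
echo $((8388608 - 7832769))
```

By this arithmetic the largest working array of 7,832,769 words leaves roughly 556K words for everything else on the PE.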

The situation isn't so clear in the Fortran world. If we start with what seems equivalent to the above C program, we might try:


      parameter( NMAX = 7 000 000 )
      real a( NMAX )
      do 10 i = 1, NMAX
              a( i ) = i
   10 continue
      sum = 0.0
      do 20 i = 1, NMAX
              sum = sum + a( i )
   20 continue
      if( sum .ne. ( NMAX * ( NMAX+1 ) ) / 2 ) then
        print *,"error in summation",NMAX, sum,(NMAX*(NMAX+1))/2
      else
        print *,"ok in summation",NMAX, sum,(NMAX*(NMAX+1))/2
      endif
      end
The mppsize outputs for the above C program and the above Fortran program show that the array 'a' in C is statically allocated at the beginning of the job, but that array 'a' in the Fortran program must be dynamically allocated at run time:

C program mppsize output:


  62826616 (decimal) bytes will be initialized from disk.
     36280 (decimal) bytes are required for the symbol table.
      1392 (decimal) bytes are required for the header.
  62864288 (decimal) bytes total
Fortran program mppsize output:

  857320 (decimal) bytes will be initialized from disk.
  137232 (decimal) bytes are required for the symbol table.
    1392 (decimal) bytes are required for the header.
  995944 (decimal) bytes total
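The distinction mppsize is reporting, data written into the executable versus space merely reserved, can be seen on any Unix system with the standard size(1) command. A minimal sketch, assuming a local cc rather than the T3D compiler (note that on most systems an uninitialized static array lands in the bss segment, whereas the T3D cc evidently writes it out as initialized data):

```shell
# Two static arrays: one uninitialized, one explicitly initialized.
cat > segdemo.c <<'EOF'
double a[ 100000 ];             /* uninitialized: reserved in bss */
double b[ 1000 ] = { 1.0 };     /* initialized: stored in the data segment */
int main( void ) { return 0; }
EOF
cc -c segdemo.c
size segdemo.o                  # compare the data and bss columns
```

On a typical system the bss column grows with 'a' (800,000 bytes here) while only 'b' contributes to the data column, so the object file on disk stays small.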
Even if the array 'a' is part of a common block, it is not initialized from disk like the array 'a' in C. Nor does making it a shared array on a fixed number of PEs cause it to be initialized from disk. Making the executable a 'plastic' executable produces even more inscrutable results:

        sed 's/7 000 000/7215610/' sums.f > sum1.f
        /mpp/bin/cf77  sum1.f
        mppsize a.out
        .
        .
        .
        0 (decimal) bytes will be initialized from disk.
   137232 (decimal) bytes are required for the symbol table.
     1392 (decimal) bytes are required for the header.
  2060560 (decimal) bytes are used for storage of relocatables.
     1496 (decimal) bytes are used for directives.
  2200680 (decimal) bytes total
Using a makefile as before, we grow the size of the array 'a' until we have problems. At a size of 7215617 the program runs correctly, but with one more element the program hangs rather than terminating with an error. It hangs in a particularly bad way: a CTRL-C will kill the mppexec and return the denali prompt to the user, but the T3D job is still "executing" and must be stopped with the Unix 'kill' command.

  sed 's/7 000 000/7215617/' sums.f > sum1.f
  /mpp/bin/cf77 -X 1 sum1.f
  a.out -npes 1
  ok in summation7215617,  26032567953153.,  26032567953153
  sed 's/7 000 000/7215618/' sums.f > sum1.f
  /mpp/bin/cf77 -X 1 sum1.f
  a.out -npes 1
and the program is now hung.

If the array 'a' is too large to begin with, for example 7216514, the job aborts with the following message:


  sed 's/7 000 000/7216514/' sums.f > sum1.f
  /mpp/bin/cf77 -X 1 sum1.f
  a.out -npes 1
  User core dump completed (./mppcore)
  Make: "a.out -npes 1" terminated due to signal 11
So for this particular program, values of NMAX from 7215618 to 7216513 hang the application. This behavior has been reported to CRI and may change when we upgrade to the 1.2 PE and the 6.2 version of the Fortran compiler. When close to the memory limits, users should check what is running with the mppmon command and kill any runaway jobs.
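The cleanup described above can be done with standard Unix tools. A sketch follows, using a background sleep as a stand-in for the hung a.out (on the T3D one would first identify the runaway job's process id with mppmon or ps):

```shell
sleep 600 &                # stand-in for the hung T3D job
pid=$!
kill -9 "$pid"             # CTRL-C alone leaves the T3D side running
wait "$pid" 2>/dev/null
echo "killed $pid"
```

The -9 (SIGKILL) is what matters: the hung job ignores the interrupt that CTRL-C delivers, so an unconditional kill is required.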

A CRI Description of CF90 for the T3D


  Subject: CF90_
  ========================================================================
  PAN #67                                                 January 19, 1995

  CF90 Programming Environment is now available for CRAY T3D systems

  The CF90 Programming Environment is now available for CRAY T3D systems and
  can be ordered from the distribution center. This is Cray Research's first
  general release of its Fortran 90 product for the CRAY T3D.  (A version of
  this compiler was made available earlier in 1994 as the "signal processing
  release" of CF90 for MPP.  The libraries and tools in the CF90 and CF77
  Programming Environments for CRAY T3D are the same.)

  The CF90 compiler for CRAY T3D supports most of the Fortran 90 language,
  including the following new features:
        * full array syntax, including assumed shape arrays, allocatable
          arrays, and numerous new intrinsics
        * Fortran 90 pointers
        * user defined (derived) types
        * case statement
        * new declarative syntax
        * non-advancing and namelist I/O

  The following Fortran 90 features are NOT supported in the CF90 compiler on
  the CRAY T3D:
        * modules
        * internal procedures
        * array constructors
        * declarations containing intrinsic functions

  The CF90 compiler for CRAY T3D supports the full Fortran 77 language as
  well as Cray CF77 language extensions except for BUFFER IN, BUFFER OUT, and
  extensions that conflict with the Fortran 90 language description.  The CF90
  Programming Environment for CRAY T3D supports 32-bit real and integer
  data types.  This offers CRAY T3D users the opportunity for improved use of
  memory and increased bandwidth with no adverse impact on computational
  performance.

  Cray Research introduced the CRAFT parallel programming model for the CF77
  Programming Environment for T3D.  At this time, Cray Research does not have
  adequate experience with CRAFT or other implicit parallel programming
  models to ensure a high performance implementation in its CF90 product on
  the CRAY T3D.  Therefore, we have decided to provide CF90 for use with
  message-passing, and provide full CRAFT only in our CF77 product for the
  CRAY T3D. The CF90 Programming Environment can be used with explicit forms
  of communications, such as PVM message-passing or SHMEM get/put shared
  memory transfers.  The CRAFT implicit data and work distribution styles are
  not implemented in the CF90 compiler for CRAY T3D.

  Performance of codes compiled with the CF90 compiler is comparable to
  performance from the CF77 compiler.  The CF90 compiler uses the same code
  optimizer and code generator as the CF77 compiler.

  Pricing for CF90 Programming Environment for CRAY T3D systems is listed below:
                  CF90 Price   CF90 Upgrade    CF90 Upgrade     CF90 Upgrade
                   Paid-up       Paid-Up         Monthly          Monthly
                   License       License         License          Service

   Cray T3D        $ 80,000      $10,000          $ 275          $ 115
   AC32/64/128

   Cray T3D        $147,500      $17,500          $ 485          $ 200
   MC128/256/512
   SC128/256

   Cray T3D        $193,000      $23,000          $ 640          $ 260
   MC1024

      - The upgrade prices compare to new T3D CF77 Programming Environment
        for MPP prices that are effective since 1/95.
      - Quantity discount: Schedule 1
      - No academic/university discount
      - No trial licenses available
      - Service Upgrade fee is the additional amount customers will have to
        pay to upgrade CF77 support to include both CF77 and CF90.
      - Customers upgrading to CF90 will be allowed to retain CF77.
        Both programming environments are covered by upgrading to the
        CF90 support fee.

  More detailed information about the CF90 Programming Environment for T3D is
  available on hardcopy from the distribution center, Signal-Processing
  (32-bit precision) Support for CRAY T3D Systems, publication SN-2191 1.2.
  An online copy is also available on
  SuperMarket:.../PRODUCT_INFO/SOFTWARE/PROGRAMMING_ENV/
  FORTRAN/CF90/CF90_M.Signal.Proc


  For more information on CF90 Programming Environment for CRAY T3D, contact
  Kathy Nottingham, kln@sdiv, 612-683-7214.

  -------------end of cri description---------------------------------------------

List of Differences Between T3D and Y-MP

The current list of differences between the T3D and the Y-MP is:
  1. Data type sizes are not the same (Newsletter #5)
  2. Uninitialized variables are different (Newsletter #6)
  3. The effect of the -a static compiler switch (Newsletter #7)
  4. There is no GETENV on the T3D (Newsletter #8)
  5. Missing routine SMACH on T3D (Newsletter #9)
  6. Different Arithmetics (Newsletter #9)
  7. Different clock granularities for gettimeofday (Newsletter #11)
  8. Restrictions on record length for direct I/O files (Newsletter #19)
  9. Implied DO loop is not "vectorized" on the T3D (Newsletter #20)
I encourage users to e-mail in differences that they have found, so we all can benefit from each other's experience.
Current Editors:
Ed Kornkven ARSC HPC Specialist ph: 907-450-8669
Kate Hedstrom ARSC Oceanographic Specialist ph: 907-450-8678
Arctic Region Supercomputing Center
University of Alaska Fairbanks
PO Box 756020
Fairbanks AK 99775-6020
Archives:
    Back issues of the ASCII e-mail edition of the ARSC T3D/T3E/HPC Users' Newsletter are available by request. Please contact the editors.