ARSC T3D Users' Newsletter 114, December 6, 1996

The ARSC T3D Users' Newsletter Takes a Break

The ARSC T3D Users' Newsletter is going to take a 2-3 month vacation and return refreshed as, perhaps, the T3E Newsletter.

I'd be interested in feedback from readers. Has the Newsletter been useful? Do you like the format (weekly, short, and friendly)? How would you feel about bi-weekly newsletters? Would you rather receive a table of contents only, with the content available via the web and ftp? Does the content suit your needs? Other suggestions?

Galaxy Formation Research on ARSC's T3D

[ The 100th newsletter profiled the work of several big users of ARSC's T3D. Here's a short report by another, Fabio Governato of the University of Washington, who was unreachable for that newsletter. Thanks to Fabio for this. ]

To study galaxy formation in different environments, Governato and Gardner started by studying what are called "voids": huge regions of space, several tens of Mpc across, where the density is several times lower than average. These regions are commonly found both in the real Universe and in cosmological simulations.

Together with selected regions including clusters and more general large-scale structure, we aim to obtain the merging rate and the growth rate of dark matter halos in these regions, testing different cosmological models. Once the hydrodynamical code is ready, we plan to include the effects of dissipation in the baryonic component, which is expected to cool and collapse at the centers of these halos, giving rise to galaxies as we see them.

Two papers including simulations made at the T3D at ARSC are available on my web page:

http://www-hpcc.astro.washington.edu/faculty/fabio/index.html

    "The Local Group as a test of cosmological models" (submitted to New Astronomy)

    "Prospects for Cosmology with Cluster Mass Profiles" (to appear in ApJL)

The Local Group paper includes a movie obtained from ARSC simulations: http://www-hpcc.astro.washington.edu/faculty/fabio/Movie2.mpeg , which shows a region in a CDM universe containing a binary system similar to our own Milky Way-Andromeda binary. Also, most of the big color pictures were made from data obtained at ARSC.

Quick-Tip Q & A


A: {{ If you expect to create mppcore files, how can you arrange for them
      to be stored on /tmp without ever taking up space in the current
      directory? }}

Thanks to three readers who responded.  Here are their full replies:

REPLY #1

There are several things that can be done, some more elegant than others, and some answering the question more directly than others. Some of these approaches also depend on whether you are running on the MPP side of the T3D or on the PVP side (or on a CRAY other than a T3D).

This question is similar to one in a previous newsletter, which asked how to find and remove all of your core files. The answer was to run:


     find . \( -name core -o -name mppcore \) -exec rm -i {} \;

This command could be put into a .logout file so that your core and mppcore files are removed when you log out. However, during your login session, or if your code creates a core file while you aren't logged in, you will still have these core files and they will count against your quota (if you are running in a directory with a quota). Alternatively, you could run a cron or at job that periodically runs this find command to remove core files. But what you really want is to have the core files created in a directory that is automatically purged and where you have no quota.
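As a concrete sketch of the cron approach (the SWEEPDIR variable and the target directory are illustrative choices, not site policy), the find command can be wrapped in a small script that sweeps a single work area:

```shell
#!/bin/sh
# Sweep one directory tree for core and mppcore files and delete them.
# SWEEPDIR is an illustrative name; point it at your own work area
# before calling this from .logout or a nightly cron job.
SWEEPDIR="${SWEEPDIR:-$HOME/work}"

# rm -f rather than rm -i: cron and .logout have no terminal on which
# to answer an interactive prompt.
find "$SWEEPDIR" \( -name core -o -name mppcore \) -exec rm -f {} \; 2>/dev/null
```

Scoping the sweep to one directory avoids the unpleasant surprise of an unattended job deleting a file named "core" that you actually wanted to keep elsewhere.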

Typically the /tmp and /usr/tmp directories are purged regularly, and on some systems these directories have no quota, so they seem like an ideal place for your core files. Be aware, though, that some machines are configured with quotas on /usr/tmp and /tmp, and the purge policy for these directories varies. To find all the file systems where you have a quota, execute quota -v .

To cause your mppcore files (and this can be extended to core files as well) to be created in, for example, /usr/tmp, make an mppcore file in /usr/tmp (preferably in a subdirectory you own, so you won't interfere with anyone else). Then create a soft link to it from the directory where you will run your code. The commands are:


        touch /usr/tmp/mydir/mppcore            # Create the mppcore file
        ln -s /usr/tmp/mydir/mppcore mppcore    # Link to the mppcore file

If you aren't running on the MPP side of the T3D, and you never want core files at all, you can set the environment variable TRBKCORE to 0 ( setenv TRBKCORE 0 , if you use the C shell), which indicates that core files are not to be created. Another method is to make a core file of size 0 (run touch core ) and remove owner write permission from it (execute chmod 400 core ), so that the system cannot write the core file.
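The unwritable-placeholder trick takes only two commands; a minimal sketch, run from whatever directory your code executes in:

```shell
#!/bin/sh
# Create zero-length placeholders and remove write permission, so a
# real core dump cannot overwrite them.
touch core mppcore
chmod 400 core mppcore

# Verify: both files exist, are empty, and are read-only for the owner.
ls -l core mppcore
```

Note this relies on ordinary permission checking, so it protects against your own processes dumping core, which is exactly the case here.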

Our T3D has a YMP front end, which is configured with the extended core mechanism (selectable when the kernel is built). Because of this, core files created on our YMP are named core.pid, where pid is the process id. Removing these files automatically is more difficult, since the name varies (unless the TRBKCORE environment variable is used). Removing them automatically requires changing the code to call getpid to obtain the process id; you can then create a link named core.pid to a previously created /usr/tmp/mydir/core file.

REPLY #2

The easy way...


        touch core
        touch mppcore
        chmod 000 core
        chmod 000 mppcore

The system will not be able to overwrite a non-writable file.

Another thing, which I am not sure has been documented, is the open-file limit on Unicos 9.0, which specifically affects T3D users.

As of Unicos 9.0, the default number of files a user can have open is (I believe) 64. Two things need to be done to change this:

  1. Change the UDB entries (the maximum is 5192):

         pfdlimit[b]     :5192:
         pfdlimit[i]     :5192:

  2. The user needs to run /usr/bin/limit -p 0 -f <# of files>

How does this affect T3D users?

Say a user has a 32-PE job and opens 1 file on each PE. (Each PE also opens stdin, stdout, and stderr by default.) The user's file count is now:


     32 * 4 = 128 files, which is over the default limit.
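The arithmetic generalizes: an N-PE job where each PE opens F files of its own holds N * (F + 3) descriptors once the three standard streams are counted. A throwaway sketch (the 64 default is the figure reported above, and the variable names are illustrative):

```shell
#!/bin/sh
# Estimate open-file descriptors for an MPP job.
NPES=32          # number of processing elements in the job
FILES_PER_PE=1   # files each PE opens itself

# Each PE also holds stdin, stdout, and stderr, hence the +3.
TOTAL=$(( NPES * (FILES_PER_PE + 3) ))
echo "Descriptors in flight: $TOTAL"

DEFAULT_LIMIT=64
if [ "$TOTAL" -gt "$DEFAULT_LIMIT" ]; then
    echo "Over the default limit of $DEFAULT_LIMIT"
fi
```

Running this with the numbers from the example above prints 128 and flags the overrun.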

REPLY #3

The best way we suggest is to create a symbolic link to /dev/null in the current working directory, or wherever the core file may be written, i.e.:


ln -s /dev/null mppcore
ln -s /dev/null core

These links are easy to remove, and they allow core files to be written when they are required.
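To see why this works, here is a quick sketch you can try in a scratch directory: anything "dumped" through the link is discarded by /dev/null, while the directory entry itself remains a zero-cost symlink.

```shell
#!/bin/sh
# Demonstrate the /dev/null symlink trick in a throwaway directory.
cd "$(mktemp -d)"
ln -s /dev/null core
ln -s /dev/null mppcore

# A pretend core dump written through the link vanishes into /dev/null.
echo "pretend core dump" > core

ls -l core mppcore    # both entries are still symlinks to /dev/null
```

The write succeeds from the program's point of view, so nothing aborts, but no disk space or quota is consumed.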


Q: In 'vi' is there a way to tell which column the cursor is in?  (Say
   you're editing Fortran and hope the line ends before column 72.)

[ Answers, questions, and tips graciously accepted. ]


Current Editors:
Ed Kornkven ARSC HPC Specialist ph: 907-450-8669
Kate Hedstrom ARSC Oceanographic Specialist ph: 907-450-8678
Arctic Region Supercomputing Center
University of Alaska Fairbanks
PO Box 756020
Fairbanks AK 99775-6020
Archives:
    Back issues of the ASCII e-mail edition of the ARSC T3D/T3E/HPC Users' Newsletter are available by request. Please contact the editors.