Saini Subhash
article
Talcott Dale, Yeung Herbert, Myers George, Ciotti Robert
Many large-scale parallel scientific and engineering applications, especially climate modeling, often run for lengthy periods and require periodic data checkpointing to save the state of the computation for a program restart. In addition, such applications need to write data to disks for post-processing, e.g., visualization. Both scenarios involve a write-only pattern using Hierarchical Data Format (HDF) files. In this paper, we study the scalability of CXFS using an HDF-based structured Adaptive Mesh Refinement (AMR) application for three different block sizes. The code used is a block-structured AMR hydrodynamics code that solves compressible, reactive hydrodynamic equations and characterizes the physics and mathematical algorithms used in studying nuclear flashes on neutron stars and white dwarfs. The computational domain is divided into blocks distributed across the processors. Typically, a block contains 8 zones in each coordinate direction (x, y, and z) and a perimeter of guard cells (in this case, 4 zones deep) to hold information from its neighbours. We used three different block sizes: 8 × 8 × 8, 16 × 16 × 16, and 32 × 32 × 32. Results of parallel I/O bandwidths (checkpoint file and two plot files) are presented for all three block sizes on a wide range of processor counts, ranging from 1 to 508 processors of the Columbia system.
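To make the write-only checkpoint pattern described above concrete, the following C sketch shows how each MPI rank might collectively write the interior zones of its AMR blocks into a single shared HDF5 file. This is only a minimal illustration under assumptions of our own (the dataset name "unknowns", the per-rank block count, and the number of variables NVAR are hypothetical); it is not taken from the paper or from the application's source code.

    /* Minimal sketch: collective parallel HDF5 checkpoint write.
     * Assumes 8x8x8 blocks; guard cells are omitted since a checkpoint
     * only needs the restartable interior state. */
    #include <stdlib.h>
    #include <mpi.h>
    #include <hdf5.h>

    #define NXB  8   /* zones per block in each direction (8x8x8 case) */
    #define NVAR 4   /* illustrative number of solution variables */

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank, nprocs;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        int blocks_per_rank = 16;                        /* assumed local block count */
        hsize_t nblocks_total = (hsize_t)blocks_per_rank * nprocs;

        /* Open one shared file with the MPI-IO driver. */
        hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
        H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, MPI_INFO_NULL);
        hid_t file = H5Fcreate("checkpoint_0000.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);

        /* Global dataset holding every block's interior zones. */
        hsize_t dims[5] = { nblocks_total, NVAR, NXB, NXB, NXB };
        hid_t filespace = H5Screate_simple(5, dims, NULL);
        hid_t dset = H5Dcreate2(file, "unknowns", H5T_NATIVE_DOUBLE, filespace,
                                H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);

        /* Each rank selects the hyperslab covering its own blocks. */
        hsize_t start[5] = { (hsize_t)rank * blocks_per_rank, 0, 0, 0, 0 };
        hsize_t count[5] = { (hsize_t)blocks_per_rank, NVAR, NXB, NXB, NXB };
        H5Sselect_hyperslab(filespace, H5S_SELECT_SET, start, NULL, count, NULL);
        hid_t memspace = H5Screate_simple(5, count, NULL);

        size_t nvals = (size_t)blocks_per_rank * NVAR * NXB * NXB * NXB;
        double *buf = malloc(nvals * sizeof(double));
        for (size_t i = 0; i < nvals; i++)
            buf[i] = (double)rank;                       /* placeholder solution data */

        /* Collective write: all ranks participate in one I/O operation. */
        hid_t dxpl = H5Pcreate(H5P_DATASET_XFER);
        H5Pset_dxpl_mpio(dxpl, H5FD_MPIO_COLLECTIVE);
        H5Dwrite(dset, H5T_NATIVE_DOUBLE, memspace, filespace, dxpl, buf);

        free(buf);
        H5Pclose(dxpl); H5Sclose(memspace); H5Sclose(filespace);
        H5Dclose(dset); H5Fclose(file); H5Pclose(fapl);
        MPI_Finalize();
        return 0;
    }

Writing one shared file with collective transfers, rather than one file per processor, is the kind of pattern whose bandwidth the paper measures on CXFS as processor counts grow.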
Poznań
OWN
2006.07.12
application/pdf
oai:lib.psnc.pl:602
eng
Jul 31, 2014
May 28, 2014
179
https://lib.psnc.pl/publication/774
Saini Subhash, Chang Johnny, Hood Robert, Jin Haoqiang