Technical Working Group Meeting, May 2020

Minutes

Date: 20th May, 2020
Attendees:
  • Aidan Heerdegen (AH) CLEX ANU
  • Andrew Kiss (AK) COSIMA ANU, Angus Gibson (AG) ANU
  • Russ Fiedler (RF) CSIRO Hobart
  • Rui Yang (RY), Paul Leopardi (PL) NCI
  • Nic Hannah (NH) Double Precision
  • James Munroe (JM) Sweetwater Consultants

ACCESS-OM2-01 scaling experiments

See PL’s associated scaling doc, scaling spreadsheet and python notebook
PL: Scaled MOM5 and CICE5 by the same amount. Based on 01deg_jra55v13_ryf9091. Ran an initial run to get restart output from February 1900. Restart runs for February (28 days). 540s time step, 4480 steps. No diagnostic output. Left ice as is.
PL: CICE ncpus and ntask scaled proportionally. Scaled MOM from 80×75 (4358) to 160×150 (16548). Scaling looks ok just from the ocean timer and ice timer. Didn't have daily CICE output.
PL: Most efficient at 10K cores based on total wall time. Ocean timer shows perfect scaling. CICE-only timer also shows good scaling.
NH: Keen to try these configs in production.
PL: Not sure how appropriate for production, no IO. NH: Good place to start, turn on output and see how it goes. Looks well balanced. Somewhat surprised.
PL: Now trying to reproduce Marshall's figures from the report, which scaled ocean and ice separately. Yet to get reproduction runs going. Working through namelist differences. Sometimes get a silent hang. Worth scaling ocean and ice at the same time.
AH: Why do both models scale so well individually but not so well when combined? RY: CICE is waiting for MOM. Maybe some more optimal setting for CPU numbers? AH: Seems odd MOM is scaling better than CICE, but CICE is waiting for MOM.
NH: CICE is waiting less for the ocean as cpus are scaled. oasis_recv is constant, which means MOM is not waiting on CICE. Definitely don't want MOM waiting for CICE. RY: If we increase MOM and reduce CICE would we get better performance? PL: Not sure. Might be useful to know how I got those numbers: from the log file, and the figures are total time divided by number of steps. RF: Output from access-om2.out is just a summary, won't show load balance with MOM. PL: Any guidance would be useful. RF: Look in access-om2.out.
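A minimal sketch (not PL's actual analysis) of the arithmetic described above: per-step time is a timer total divided by the number of steps, and parallel efficiency follows from the change in core count. The timer totals below are placeholders, not figures from these runs; only the step count and MOM core counts come from the minutes.

```python
# Sketch of per-step timing and scaling efficiency; timer values are placeholders.
NSTEPS = 4480  # 28 days at a 540 s timestep

runs = {
    # MOM cores: total ocean timer in seconds (illustrative values only)
    4358: 10000.0,
    16548: 3000.0,
}

base_cores = min(runs)
base_time = runs[base_cores]

for cores, total in sorted(runs.items()):
    per_step = total / NSTEPS
    speedup = base_time / total
    efficiency = speedup * base_cores / cores
    print(f"{cores:6d} cores: {per_step:.3f} s/step, "
          f"speedup {speedup:.2f}, efficiency {efficiency:.2f}")
```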
AH: Look for MOM timers. Might be some information about the range of values; there could be some very slow PEs masked by the average. RF: The check mask is out of date. Has 12-16 processors which are purely land. The land mask was changed but the processor mask wasn't updated. Some processors only have values in the halo boundaries. Crashes otherwise.
PL: Regenerated new mask files. Numbers should agree with what was done. Any more advice would be welcome. Send email or talk on slack. RF: I'll look at CICE layouts and balance, and masks. CICE is also seasonally dependent.
PL: Moving to a more conventional experiment layout. Will move to a shared location. AH: Could put in /g/data/v45.
AK: CICE scaling was with serial IO. Nic has almost finished PIO. It will stop scaling without PIO. Runs much faster with parallel IO even with monthly outputs. AH: Seems to be scaling ok. AK: Any output written? AH: Running for a month, so should be some output.
AH: Ran from initial conditions? PL: Yes. Ran for 1 month with timestep of 300s. Then ran from those restarts with timestep of 540s. AH: There is an ice climatology? RF: If run for a month, should have generated ice. AK: Ice generated from surface temperature in initial conditions.
RY and PL left meeting.
AH: Maybe a bit more to look at in PL's runs. NH: May have misunderstood where those numbers came from. RF: Looked like it was scaling nice and linear. AH: Yes for each model, but together the scaling died going to 20K. RF: Not sure these results are that useful once IO is turned on. There are code paths not currently being exercised without IO: putting stuff on density levels, and a whole lot of globals/collectives that aren't being done. AH: Encouraging though. NH: In principle it can scale up.

PIO compilation in ACCESS-OM2

NH: Got a reply from NCI. Resistance to having PIO in a module. Best to be self sufficient. If it turns out to be an issue can address later. Will make it a submodule. Clean up the build process. Changes to CICE repo. One CICE namelist change, to tell it not to explicitly use netCDF for certain things. Bit odd.
NH: Experiment repos will require updates. Maybe AK will report some more realistic performance numbers.
AH: PIO with MOM? NH: Not sure. CICE isn't doing a great deal in the configuration I am using. Seems to all work inside parallel netCDF as it is doing output from all processors. Can use IO nodes and comms, but that doesn't show a performance improvement, and looks worse in many cases. We could configure it the same way without using PIO. RF: Don't have much control over where we put processors. CICE is at the end. Probably sharing with MOM. Playing with layout might be tricky. NH: At some stage put CICE on its own nodes. RF: Once YATM is on the first node, it ends up messing things up. NH: Why are we doing that? RF: Something to do with OASIS in the old days. Now we have YATM and the root PE of MOM on the same node. Would make sure all root PEs are on their own node. No contention. YATM and MOM are also on the same NUMA partition. NH: We should change that, easy fix. YATM doesn't do much on 0.1 as the rest of the model takes so long. RF: Two IO processes on the same node: the MOM root PE, used for diagnostics, and the YATM process. NH: If each model is on its own nodes, could make sure each node has a single IO processor. With PIO, if you want 1 node per 16 processors, don't know if it is talking across nodes.
JM: In terms of PIO are multiple nodes writing to the same file? NH: For CICE every single process is writing to the same file at the same time. Works well. Haven't looked into it deeply, probably the optimum is something in between. Still a big improvement over serial output. AH: Kaizen (改善): small incremental improvements all the time. Compressed netCDF output? NH: No. PIO GitHub talked about supporting compression. AH: Same as what RY and Marshall did? NH: Yes. Have to wait for a parallel netCDF implementation which supports it. Confusing because there is also p-netCDF. PIO is a wrapper. AH: Yes, wraps p-netCDF and MPI netCDF. p-netCDF is only netCDF3, not based on HDF5. AK: Will need a post-processing compression step. NH: Task not done until compression done. AK: Very sparse data, shame not to compress.
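A hedged sketch of the kind of post-processing compression step AK mentions, using xarray to rewrite an uncompressed file with deflation. The file names and compression level are placeholders, not an agreed workflow.

```python
# Sketch of a post-processing compression pass over an uncompressed output file.
import xarray as xr

src = "iceh.2000-01.nc"               # hypothetical uncompressed daily output
dst = "iceh.2000-01.compressed.nc"

ds = xr.open_dataset(src)
# apply zlib deflation to every data variable when rewriting
encoding = {v: {"zlib": True, "complevel": 4} for v in ds.data_vars}
ds.to_netcdf(dst, encoding=encoding)
```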
AH: xarray is supporting sparse data now. FYI. Can mean a lot less memory use for some data.

Compiling with/without WOMBAT

AH: Any speed/memory use implications to always have it compiled in? RF: Should be separate. Overhead is basically nothing. Will only allocate BGC arrays if they're in the field table. Should be kept separate like all other BGC packages. I put in some lines in the compile scripts. Also if you want to compile without ACCESS.
AK: If want to maintain harmony with CM2 want a non-BGC compilation? RF: Yes. AK: From the point of view of OM2 users would be nice to be able to switch BGC on and off just through namelists. RF: Switched on via field_table. Strange design choices years ago. Also need changes in some of the restart files, o2i.nc and i2o.nc. AK: Not something that can be switched on and off? RF: No.

MOM Pull Requests

AH: Guidance for checking? RF: Two main changes in the code are probably fine. Maybe the ACCESS compilation scripts. Unless we want to change that so it gets compiled in all the time for ACCESS-OM. AH: Decided not to, I think. RF: Made changes to install.sh to specify the type of model. AH: Separate model designation with WOMBAT? RF: ACCESS-OM-BGC is a new model type. Ran tests, all ok. AH: Do we need any tests to check it hasn't changed non-BGC results? RF: Shouldn't be anything that affects a normal run. Code compiled ok on Travis. Put in some heat diagnostics, the fluxes from CICE, might be the only thing. AH: Are Jenkins tests working? NH: ACCESS-OM2 tests haven't worked since moving to the new machine. RF: Run a 1 degree model and see how it goes. AH: I'll do that.
AK: Managing ACCESS-OM2, should the distinction between BGC and non-BGC be in the control directories? So the build script builds both and you choose which in the config, or compile once, supporting both. AH: I don't think BGC is a supported configuration yet. Needs testing. How it is implemented, shared or separate exes, is just a choice of whatever you decide is best.
AH: Turns out that the GEOS PR was a mistake. Asked about it, and they closed it.

Bad bathymetry

AH: Any comments? Does it need fixing? RF: Bad bathymetry needs to be fixed, or copy bathymetry from somewhere else. Bad around Australia. Same for CM2. Mentioned it 3-4 years ago, still not updated. Some pits in Gulf of Carpentaria down to 120m in 0.25. 1 degree goes down to 80m. Should be no deeper than 60m. OCCAM created some bad bathymetry in Bass Strait, off coast of China. Russian and Alaskan issues, and White Sea. Remapping indices got mucked up. AH: Wasn’t 0.25 fixed north of Bering Strait? RF: Doesn’t look like it.
JM: Bathymetry files are wrong in certain regions? RF: Came from the Southampton OCCAM model. They ran it with a normal Mercator grid and a transverse Mercator across the top. Remapping onto a spherical grid the indices got mucked up and gave some strange bathymetry. GFDL inherited it and based a bunch of models on it. Leaked through to the ACCESS models. Was in the US forecast model and they noticed all the stuff around Alaska.
AH: Should be relatively straightforward as this is only ocean bottom cells, and doesn't touch coasts? RF: Yes. AK: Base it on a coarsened tenth grid? RF: Not a big job, just a few slabs that need smoothing/removing. AH: Does this need to be fixed for the next release of OM2? RF: Yes. AK: No. RF: Get a student to look at it. AK: Also land mask inconsistencies, would be good to have all three models consistent. There are big curvy bits of coastline keeping ocean away from the tripoles. AH: The 1 degree is very much a model that isn't that realistic. Tenth starts to look much more like real life.
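An illustrative sketch (not the actual fix) of the kind of targeted bottom-cell edit RF describes, capping anomalously deep cells in a limited region such as the Gulf of Carpentaria pits. Variable names, coordinate names, the region box and the 60 m cap as a hard limit are all placeholders/assumptions.

```python
# Cap depths in a small region without touching coasts; names are placeholders.
import xarray as xr

topog = xr.open_dataset("topog.nc")
depth = topog["depth"]

# crude lon/lat box around the Gulf of Carpentaria (approximate, illustrative)
in_region = (
    (topog["geolon_t"] > 135) & (topog["geolon_t"] < 142)
    & (topog["geolat_t"] > -18) & (topog["geolat_t"] < -10)
)

# inside the box cap depth at 60 m; everywhere else keep the original value
topog["depth"] = depth.where(~in_region, depth.clip(max=60.0))
topog.to_netcdf("topog_fixed.nc")
```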

Zarr file format

AH: Wanted to engage JM about zarr. RF: Interested as this is being used in the decadal prediction project. JM: Exactly. Talked today about parallelising output from the model into netCDF, and then post-analysis requires transforming to zarr. Zarr is a distributed file format that stores files in directories; each chunk is a separate file, and parallelisation is handled by the filesystem. Should we write directly into a zarr-like file format? There are file formats like it. netCDF may get a zarr-like back-end. RF: There is some discussion on the netCDF GitHub about zarr, looks like just one person. JM: Unidata is willing to move away from HDF5. Parallelisation of HDF5 has never worked the way it was supposed to. Instead of using parallel IO, just write directly to the format people want to use. AH: Got the impression the netCDF people never got the buy-in from HDF5 that they thought they would get. HDF5 just do their own thing. JM: Still have people using netCDF3. AH: A strength of netCDF, they could hop back-end again and keep the same interface. JM: Same data model. AH: What is the physical format of a zarr blob? JM: It is a binary blob that supports different filters/compression schemes. AH: Does it do machine independent storage? Bad old days of swapping endianness on binary files. AG: In zarr there are raw data blobs, and associated metadata files that describe the filter/endianness etc.
JM: Inodes are not a problem. Chunks are still relatively large, on the order of the Lustre striping scheme. Can wrap the whole thing inside an uncompressed zip file. Parallelises for reading just fine. Works like a tar: there is an index of where to read, and it supports multiple reads on the same file. AH: Would want to do this when archiving.
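A minimal sketch of writing model output to zarr and reading it back with xarray, illustrating the chunk-per-file layout discussed above. The source file and chunk size are placeholders.

```python
# Convert a netCDF file to a zarr store and read it back; paths are illustrative.
import xarray as xr

ds = xr.open_dataset("ocean_daily.nc", chunks={"time": 1})
ds.to_zarr("ocean_daily.zarr", mode="w")   # each chunk becomes a separate file on disk

# reading back; chunked access parallelises through dask and the filesystem
ds2 = xr.open_zarr("ocean_daily.zarr")
```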
NH: Another one is TileDB, which is a file format. JM: There are other backends, n5/z5. Distributed storage for large data sets.
AH: At one stage we did wonder if collation was even necessary with tools like xarray, but never looked into it. NH: Things have changed a lot. xarray is relatively new. 3-5 years ago it might segfault on tenth model data. So much better now, so many more possibilities.
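A hedged sketch of reading uncollated per-tile output directly with xarray, as speculated above. It assumes the tiles carry enough coordinate information for combine="by_coords" to stitch them together, which would need checking for FMS tile files; the file pattern and variable name are placeholders.

```python
# Open many uncollated tile files as one logical dataset.
import xarray as xr

ds = xr.open_mfdataset(
    "ocean_daily.nc.*",       # hypothetical uncollated tile files
    combine="by_coords",
    parallel=True,
)
print(ds["temp"].sizes)       # appears as a single combined dataset
```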

 

Technical Working Group Meeting, April 2020

Minutes

Date: 29th April, 2020
Attendees:
  • Aidan Heerdegen (AH) CLEX ANU
  • Andrew Kiss (AK) COSIMA ANU, Angus Gibson (AG) ANU
  • Russ Fiedler (RF), Matt Chamberlain (MC) CSIRO Hobart
  • Rui Yang (RY), Paul Leopardi (PL) NCI
  • Nic Hannah (NH) Double Precision
  • Marshall Ward (MW) GFDL

Apologies from Peter Dobrohotoff.

JRA55-do v1.4 support

AK: Staged rollout. NH tagged some branches, so existing master is tagged 1.3.0, using old JRA55-do v1.3.1 with NH's new exes which also support 1.4.
AK: Also working on a new feature branch for 1.4. Same exes configured to use JRA 1.4 version. Seems to run ok. Not looked at output. Will look at that today. Once satisfied that is ok will move into master, tag 1.4.
AK: Also looking at ak-dev branch with a wide variety of changes. Once this is ok will tag with a new ACCESS-OM2 version. Will be new standard for new experiments. Good to make an equivalent point across repos.
AH: COSIMA cookbook hackathon showed value of project boards. Might be a good idea next time something like this attempted. AK: Tried, but it didn’t go anywhere.
NH: Two freshwater fields come from the forcing, liquid and solid. Both go into the ice model which accepts one new forcing field. They get added together, solid magically becomes liquid without heat changes, and is passed straight to the ocean. The ocean and ice models have also been changed to accept the liquid part of land ice melt and the heat part of land ice melt. These exist but just pass zeroes. Extra engineering not being used as yet. A harmonisation step which takes us closer to CM2 as the coupled model uses these fields.
RF: With my WOMBAT updates incorporating this new code, could get rid of ACCESS-CM preprocessor directives.
NH: In the future can put work into calculating those fields correctly in the ice model. Not a huge amount of work. Will then have river runoff, land ice runoff and land ice melt heat.
NH: New executables have another change, support different numbers of coupling fields. Land/ice coupling fields are optional. At runtime figures out what coupling fields used. Dependent on namcouple being consistent. Coded internally as a maximum set of coupling fields. You can take coupling fields out but not add new ones. Possibly useful for others. Not a fully flexible coupling framework.
NH: Working on ak-dev branch. Harmonising namcouple files. Have a lot of configuration fields, but a lot are ignored. Could use the same namcouple in all configs, but in practice might leave them looking a little different. They include the timestep in them, but it is ignored. Could set it to zero? AH: Or a flag value that is obviously ignored?
NH: Only three variables are used in namcouple. The rest are ignored, but must parse properly. Needs cruft to make it parse. Never liked namcouple. Completely inflexible, values must be changed in multiple places.
AK: We're on OASIS3-MCT v2; have they improved it in the new version?
NH: Can now bunch fields together, pass a single 3d field instead of many 2d fields. Should improve performance. RF: Not through namcouple at all. Just a function call.
MW: What does OASIS do now? NH: Just doing routing. Which is done by MCT anyway. Remapping done by ESMF. Coupler meant to do 3 things, config, remap and routing. Made libaccessom2 do as much as possible automatically. So OASIS does very little. Still using API, so would require effort to remove.
MW: Know about NUOPC? NCAR is using it. NH: Coupling API. If all models use the same API then you can go plug and play. MW: MOM6 has a NUOPC driver. NH: In the future would like to look at OASIS4, but probably just chuck OASIS, use MCT to do the routing and ESMF to do remapping. MW: NCAR dropped MCT. NH: MCT is a small team. AK: Something that would suit ACCESS-CM. Any critical things that rely on OASIS? MW: At the mercy of the UM. Probably still use OASIS due to Europe. NH: Not using ESMF, so using OASIS a lot more than we are. Might never change because of that. AK: Even moving to v4 would require coordination with CM2. NH: Nicer and cleaner, but no clear benefit.

Updated ACCESS-OM2 model configs

AK: 3 different tags: 1.3.1, 1.4 in the works, and ak-dev as a new tag. 1.4 is intended to be minimal other than the change in JRA55-do version. ak-dev is making more extensive changes. Using mppnccombine-fast for tenth. Output compressed data and use fast collation. Not worthwhile for 1 deg. With 0.25 output uncompressed and use mppnccombine to do compression. Hopefully output will be a reasonable size.
AK: If outputting uncompressed restarts they might get large. Might want to collate restarts. Wanted to verify which run is collated: the one just finished, or the previous run? AH: It is the restarts which are not used in the next run.
AH: Because quarter degree is not compressed it won't get the inconsistent chunk sizes between different sized tiles. Ryan had the problem when he had an io_layout with very small chunk sizes which made his performance very bad. mppnccombine-fast might be faster, and will definitely use less memory. Still got compression overhead but memory use is much reduced. AK: Not such a big issue as tenth. AH: Paul Spence had some issues with the time to collate his outputs. Maybe because they were compressed. Would recommend using it. AK: Fast version will always be faster? AH: Yes, at least no slower, but definitely uses less memory and will be much faster with compressed output.
MW: No appetite for FMS with parallel IO? AH: Compression? Without it, probably won't bother. RY: Did some tests on parallel IO compression. Can't recall results. Interested to try again. Requires a bit more memory. gadi has Optane, as storage or as memory. Interesting to test. Probably can use that for parallel compression or even just serial compression. Thinking about it, but haven't started. AH: Please keep us updated.
NH: Anyone have thoughts on CICE? Planning on parallel IO in CICE. Are we going to need a compression step? RF: With daily output would like compression. A post-processing step to do compression on a smaller number of PEs would be fine. Improving IO is critical for Paul Sandery and Pavel. NH: Might need a post-processing step similar to MOM. RF: Yes. Getting parallel IO is the most important. Worry about compression later. NH: Did a run yesterday with parallel IO. Completed successfully. Output was garbage. Was expecting to do heaps of work and get segfaults. Surprised at that. RF: Misaligned or complete garbage? NH: Default assumption is as bad as can be. Just used the parallel IO output driver in CICE. AK and RF realised daily CICE output was a bottleneck for 0.1 performance. As the model code existed, decided to get it working. RY: Parallel IO needs the mapping set up correctly between compute and IO domains. NH: Should be part of the current implementation. Mapping is a tricky part of CICE. AK: Values out of range, so maybe not just a mapping issue? NH: Completely broken, but not segfaulting. Just getting it building was one hurdle. Also had to call the right initialisation stuff within CICE. Had to rewrite some of it that was depending on another library from one of the NCAR models (CESM). CICE is used with CESM and they had a dependency on another utility library. Changed some code to remove the dependence. Relatively positive. Library under active development and well supported. AH: Did they develop it just for their use case, and maybe it doesn't support round-robin? NH: Not sure. We do know it has never been used in any model other than CESM.
MW: Ed Hartnett (PIO) eager to get into FMS. Also lead maintainer of netCDF4.

Status of WOMBAT in ACCESS-OM2

RF: Compiled. Next is testing. Up to date with current ACCESS-OM2 code changes. Had issues with submodules. AK: Previously libaccessom2 dependencies were brought in through CMake, now moved to submodules. If you have an existing repo you will have to initialise the submodules to pull in the latest from GitHub.
RF: Made some changes to installation procedures. Can go between the BGC version and standard ACCESS-OM. Want it to be different for the BGC version. Changes to install scripts and hashexe etc. AH: Good that it is up to date, could have been a messy merge otherwise. RF: Will run tests today or tomorrow.

MOM5 PR from GEOS-ESM

AH: Seen this PR? Seemed a bit odd to me. First idea was to ask them to split the PR into science changes and config changes. RF: Looked like a lot of it was config changes. MW: Adding the GEOS5 stuff, which they shouldn't. Code changes are challenging. Introduced a generic tracer, not sure what they're doing with it. AH: Strategy? Ask them to wrap the science stuff in preprocessor flags? MW: First step is to get the config stuff out. Asked GFDL about it. GEOS are switching from MOM5 to MOM6. This must be associated with that effort to validate their runs. Maybe just giving back what it took to get it to work. Maybe it just makes their build process easier. AH: They have a specific requirement to use the same FMS library. Seems odd, as MOM5 and MOM6 are not likely to share FMS versions in the future. MW: Thorny topic, as it is not clear how FMS compatible MOM6 will be in the future. AH: Using FMS for less and less. MW: The PR needs to be cleaned up. AH: Also put in a CMake build system. MW: They need to explain more.
AK: Has conflicts, so can't be merged at the moment. AH: Only going to get more conflicted, which is why I was thinking they could split it up. I have a CMake build system in another branch, but never finished it. If we can use theirs, cool. I'll engage with them.

Miscellaneous

AH: Been experimenting with graceful error recovery with payu. Can specify a script which can decide if the error is something you can just resubmit after. Mostly of interest to the production guys.
PL: Scalability testing with land masks, manifests, and payu setup. Supposed to be simpler but taking some time to get used to it. AH: Manifests are relatively new so some of the use cases have not been as well tested. MW: Are not all runs using manifests? AH: They are, but they can be used in different ways. Tracking always works, but there are options to reproduce inputs and runs. Suggested PL could use reproduce to start a run. It was confounded by some restarts being missing, so not quite sure if it works as we would like. This is a very desirable feature, as it makes it very simple to fork off new runs from existing ones as well as making sure the files are consistent. PL: Working now. Next step is to change core counts and look for scalability numbers. AH: When I was doing scalability stuff for MOM-SIS I used input directory categories to isolate processor changes. Not quite doing that same thing anymore, but you can do something similar, though you won't want to use the reproduce flag if you are changing any of the input files.
AK: Just MOM scaling or CICE as well? PL: Just looking at MOM to begin with to see dependency and wait times. AK: CICE run time is critically dependent on daily outputs. Relevance of scaling data to production output. MW: Make sure your clock can tell them apart. In principle can distinguish compute from IO. AH: Daily output always part of production? AK: Ice modellers want very high temporal output. Ice is very dynamic. Even daily output is not enough to resolve some features. Maybe wait for PIO for CICE scaling tests? AH: I thought scaling tests always turned off IO? Can't properly test scaling with daily output, as it dominates runtime.
NH: Would be nice to look at performance with and without PIO. PL: Will also look at CICE. Start with the ocean model. AK: Were you (MW) running the models coupled for the paper scaling numbers? MW: Coupled. Not sure what IO was set to. Subtracted it and don't recall it was large. Don't recall a bottleneck, so might have had it turned off. RF: Wouldn't be running with daily IO. Monthly IO doesn't show up. MW: Sounds likely.
AK: For IAF had a lot of daily CICE output. Not complete set of fields.
MW: Starting to run performance tests at GFDL and want to use payu. Has it changed much? Manifest stuff hasn’t made a big difference? Will have to get slurm working. Filesystem will be a nightmare. You moved PBS stuff into a component? AH: No, you did that. Not huge differences. Will be great to have slurm support.

Technical Working Group Meeting, March 2020

Minutes

Date: 18th March, 2020
Attendees:
  • Aidan Heerdegen (AH) CLEX ANU
  • Matt Chamberlain (MC) CSIRO Hobart
  • Rui Yang (RY), Paul Leopardi (PL) NCI
  • Nic Hannah (NH) Double Precision
  • Marshall Ward (MW) GFDL

Scalability of ACCESS-OM2 on gadi

(Paul’s report is attached at the end)

PL: Looking at scaling. Started with ACCESS-OM2, but went to testing MOM5 directly via MOM-SIS. Using POM25, a global 0.25 model with NYF forcing. The model MW developed for testing scaling prior to ACCESS-OM2. Had to add min_thickness in ocean_topog_nml.

PL: Tested the scaling of 960/1920/3840/7680/15360, with no masking. Scales well up to some point between 7680 and 15360.

PL: Tested the effect of vectorising options (AVX2/AVX512/AVX512-REPRO). Found no difference in runtime with 15360 cores. MW: Probably communication bound at that CPU count. Repro did not change time.

MW: Never seen significant speed up from vectorisation. Typically only a few percent improvement. Code is RAM bound, so cannot provide enough data to make use of vectorisation. Still worth working toward a point where we can take advantage of vectorisation.

PL: Had one "slow" run outlier out of 20 runs. Ran 20% slower. Ran on different nodes to other jobs, not sure if that is significant. MW: IO can cause that. AH: Andy Hogg also had some slow jobs due to a bad node. AK: One job was 20x slower. Also RYF runs became consistently slower a few weeks ago. MW: OpenMPI can prepend timestamps in front of output, which can help to identify issues.

PL: Getting some segfaults in ompi_request_wait_completion, caused by pmpi_wait and pmpi_bcast. Both called from the coupler. NH: Could be a bad bit of memory in the buffer, and if it tries to copy it can segfault. PL: Thinking to run again using valgrind, but that would require compiling my own version of the valgrind wrapper for OpenMPI 4.0.2. Would be easier with Intel MPI, but no-one else has used this. Saw some similar cases when searching which were associated with UCX, but sufficiently different to not be sure. These issues are with the highest core count. MW: Often see a lot of problems at high core counts. NH: Finding bugs can be a never ending task. Use time wisely to fix bugs that affect people. MW: Quarter degree at 15K cores would have very small tile sizes. Could be the source of the issue. AH: This is not a configuration that we would use, so it is not worth spending time chasing bugs.

PL: Next testing target is 0.1 degree, but not sure which configuration and forcing data to use. Will not use MOM5-SIS, but will use ACCESS-OM2 for direct comparison purposes. AK: Configurations used in the model description paper have not been ported to gadi. Moving on to a new iteration. Andy Hogg is running a configuration that is quite similar, but moving to new configurations with updated software and forcing. Those are not quite ready.

PL: Need a starting configuration for testing. Want to confine it to scalability testing and compiler flags. NH: ACCESS-OM2 is set up to be well balanced for particular configurations. Can't just double CPUs on all models as load imbalance between submodels will dominate any other performance changes. Makes it a problematic config for clean comparisons of things like compiler flags. MW: A useful approach was to check scalability of sub-model components independently. Required careful definition of timers to strategically ignore coupling time. MOM was easy, CICE was more difficult, but work with Nic's timers helped a lot. Try to time the bits of code that are doing computation and separate them from code that waits on other parts. The coupled model is a real challenge to test. Figure out what timers we used and trust those. Can reverse engineer from my old scripts.

PL: Should do MOM-SIS scalability work? MW: Easier task, and some lessons can be learned, but runtime will not match between MOM-SIS and ACCESS-OM2. Would be more of a practice run. PL: Maybe getting out of scope. Would need 0.1 MOM-SIS config. RY: Yes we have that one. If PL wanted to run ACCESS-OM2-01 is there a configuration available? AK: Andy Hogg’s currently running configuration would work. PL: Next quarter need to free up time to do other things.

MW: Might be valuable to get some score-p or similar numbers on the current production model. Useful to have a record of those timings to share. A scaling test might be too much, but a profile/timing test is more tractable. RY: Any issues with score-p? Overhead? MW: Typical, 10-20%, so it skews numbers but you get an in-depth view. Can do it one sub-model at a time. Had to hack a lot of scripts, and get NH to rewrite some code to get it to work. score-p is always done at compile time. Doesn't affect payu. Try building MOM-SIS with score-p, then try MOM within ACCESS-OM2. Then move on to CICE and maybe libaccessom2. PL: Build script does include some score-p hooks. MW: Even without score-p MOM has very good internal timers. Not getting per-rank times. score-p is great for measuring load imbalance. AH: payu has a repeat option, which repeats the same time, which removes variability due to forcing. Need to think about what time you want to repeat as far as season. AK: CICE has idealised initial ice, which evolves rapidly. MW: My earlier profile runs had no ice, which affects performance. Not sure it is huge, maybe 10-20%, but not huge.

MW: Overall surprised at lack of any speed up with vectorisation, and lack of slow-down with repro. PL: Will verify those numbers with 960 core config.

AH: Surprised how well it scaled. Did it scale that well on raijin? MW: The performance scaling elbow did show up lower. AH: 3x more processors per node has an effect? MW: Yes, big part of it. AH: 0.1 scaled well on raijin, so should scale better on gadi. 1/30th should scale well. Only bottleneck will be if the library can handle that many ranks.

NH: If repro flags don't change performance that is interesting. Seem to regularly have a "what trade-off do repro flags have?" discussion; would be good to avoid. MW: Probably best to have an automated pipeline calculating these numbers. NH: People have an issue with the fp0 flag. MW: Shouldn't affect performance. NH: Make sure fp0 is in there. MW: Agree 100%.

ACCESS-OM2 update

AH: Do we have a gadi compatible master branch on gadi? AK: No, not currently. NH: At a previous TWG meeting I self-assigned getting master gadi compatible. Merged all gadi-transition branches and tested, seemed to be working ok. Subsequent meeting AK said there were other changes required, so stopped at that point. gadi-transition branches still exist, but much has already been merged and tested on a couple of configurations. Have since moved to working on other things.

NH: Close, if AK has all the things he wants in the gadi-transition branch. The previous merge didn't include all the things AK wanted in there. Happy to spend more time on that after finishing the JRA55 v1.4 stuff.

JRA55-do v1.4 update

NH: Made code changes in all the models, but have not checked existing experiments are unchanged with modified code.

NH: v1.4 has a new coupling field, ice calving. Passing this through to CICE as a separate field. In CICE split into two fields, liquid water flux and a heat flux. MOM in ACCESS-CM2 already handles both these fields. Just had to change preprocessor flags to make it work for ACCESS-OM2 as well.

NH: Lots of options. Possible to join liquid and solid ice at the atmosphere and it becomes the same as we have now. Can join in CICE and have a water flux but not a heat flux.

Strange MOM6 error

AH: A quick update on Navid's error. Made a little mpi4py script to run before payu to check the status of nodes, and all but the root node had a stale version of the work directory. Like it hadn't been archived. The link to the executable was gone, but everything else was there. Reported to NCI, Ben Menadue does not know why this is happening. Also tried a delay option between runs and this helped somewhat, but also had some strange comms errors trying to connect to exec nodes. Will next try turning off all input/output I can find in case it is a file lock error. Have been told Lustre cannot be in this state.
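A rough reconstruction (not the actual script) of the kind of pre-run mpi4py check described above: each rank reports whether it sees the expected work directory contents, and the root rank prints any discrepancies. The directory and executable names are placeholders.

```python
# Pre-run sanity check: does every rank see the same work directory state?
import os
import socket
from mpi4py import MPI

comm = MPI.COMM_WORLD
work = "work"                                   # hypothetical payu work directory

exe_ok = os.path.islink(os.path.join(work, "model.exe"))   # placeholder exe name
report = (comm.Get_rank(), socket.gethostname(), exe_ok, sorted(os.listdir(work)))

reports = comm.gather(report, root=0)
if comm.Get_rank() == 0:
    for rank, host, ok, listing in reports:
        if not ok:
            print(f"rank {rank} on {host}: executable link missing; sees {listing}")
```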

MW: In old driver do a lot of moving directories from work to archive, and then relabelling. Is it still moving directories around to archive them? Maybe replace with hard copy of directory to archive. MOM6 driver is the MOM5 driver, so maybe all old drivers are doing this. Definitely worth understanding, but a quick fix to copy rather than move.

NH: Filesystem and symbolic links might be an issue. MW: Maybe symbolic links are an issue on these mounted filesystems. AH: There was a suggestion it might be because it was running on home which is NFS mounted, but that wasn't the problem. MW: Often with raijin you just got the same nodes back when you resubmitted, so maybe some sort of smart caching.

 

Scalability of ACCESS-OM2 on Gadi – Paul Leopardi 18 March 2020


Technical Working Group Meeting, February 2020

Minutes

Date: 27th February, 2020
Attendees:
  • Aidan Heerdegen (AH) CLEX ANU, Angus Gibson (AG) ANU
  • Russ Fiedler (RF), Matt Chamberlain (MC) CSIRO Hobart
  • Rui Yang (RY), Paul Leopardi (PL) NCI
  • Nic Hannah (NH) Double Precision
  • Marshall Ward (MW) GFDL

New installed payu version

Version 1.0.7 is now installed in conda/analysis3-20.01 (analysis3-unstable)

AH: payu is now 100% gadi compatible. Default cpus/node is now 48 and memory 192GB/node. The Python interpreter and short path are determined automatically, and the model config and manifests are scanned to automatically determine storage flags. Using qsub_flags to manually specify storage flags no longer works, as the automatically determined storage flag option is appended and overrides the manually specified one.

RF: Paul Sandery having issues getting 0.1 deg model working. [AH: turns out it was a typo in config.yaml]

AH: No need for the number of cpus in a payu job to be divisible by the number of CPUs in a node. Request however many the job uses, and payu will pad the request to make sure the PBS submission is requesting an integer number of nodes if ncpus is greater than the number in a single node. PL: Rounds up for each model? AH: No, just the total. MW: Will spread models across nodes, so a node can have different models on it.
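A sketch of the node-padding logic described above (not payu's actual code): if the total CPU request exceeds one node, it is rounded up to whole 48-core nodes.

```python
# Round a CPU request up to whole gadi nodes once it exceeds a single node.
import math

CORES_PER_NODE = 48

def pbs_ncpus(ncpus):
    if ncpus <= CORES_PER_NODE:
        return ncpus
    return math.ceil(ncpus / CORES_PER_NODE) * CORES_PER_NODE

print(pbs_ncpus(30))    # 30  (fits on one node, no padding)
print(pbs_ncpus(250))   # 288 (padded to 6 full nodes)
```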

AH: Andy Hogg ran 80-odd submits with the tenth model. Occasional hang, resubmit ok. Might be more stable than raijin.

AH: Navid has MOM6 model that cannot run more than a couple of submits without it crashing with an error that it cannot find the executable. Weird error, let me know if you see anything similar.

NH: Caution with disks and where to put things. Reading input files can be very slow sometimes, or not, and sometimes files are not there and turn up later. If the executable is missing, is it running off a disk that is not good? MW: Filesystems are very complicated on gadi? NH: Less certainty of performance with such a different system, with data filesystems being mounted separately. I'd look at this.
PD: A good place to look is whether the disk has got caught up doing too many tasks. gdata just hangs, saving a text file takes a while. Due to being on the login node? Get similar delays with an interactive job on an execute node.
AH: People reporting issues with login delays. Probably a disk issue? Navid's job is not being run from gdata, but from scratch. Inclined to blame the new system of mounting. Could we use jobfs? MW: Like in the old days when we ran on the node? Good luck! AH: Could just do some tests. NH: Concerning if scratch is slow.
AH: Not sure if filesystems are mounted with NFS. MW: That is what we do on gaia, and have tons of problems with mount on demand. Biggest frustration with using GFDL machine. It’s a nightmare. At least NCI have lustre know-how. AH: Used to have a lot of problems with NFS cache errors in the past, files disappearing and reappearing. Does sound similar to Navid’s problem.
MW: Raijin's filesystem was quite good. Why the change? AH: Security. Commercial in confidence stuff. I think it is overblown. Can't see anyone else's jobs on the queue. Can't even check if other people are running on the project. They are moving to 2-factor auth also.

What is required to get gadi transition into master for ACCESS-OM2

AH: Andrew Kiss is on personal leave but sent around an email:
re. gadi-transition, we could proceed like so:
– we’ve also been transitioning libaccessom2 to use submodules for its dependencies instead of cmake https://github.com/COSIMA/libaccessom2/issues/29 which would require this commit https://github.com/COSIMA/libaccessom2/tree/53a86efcd01672c655c93f2d68e9f187668159de (not currently in gadi-transition branch)
– get the libaccessom2 tests working https://github.com/COSIMA/libaccessom2/issues/36
– there’s a gadi-transition branch libaccessom2, cice and mom that could be merged into master. They use openMPI4.0.2
– there’s also a gadi-transition branch for all the primary (ie JRA, non-minimal) configurations but the exe paths would need to be updated before merging to master
– the access-om2 gadi-transition branch would then need to be updated to use the correct submodules for model components and configurations. We also want to remove the core and minimal config submodules https://github.com/COSIMA/access-om2/issues/183
also fyi the current gadi build instructions are here
AH: Feels urgent that people can use on gadi. Any comments on Andrew’s email?
PL: Transition to submodules finished? AH: That is on a separate branch. NH: I did that work. Put it in a dev branch. Not intended to be part of the gadi transition, to keep the number of additions to a minimum. AH: Agree if that is the easiest. Master is broken for gadi, so anything that works is an improvement. If there is no feedback we can do this offline. Could make a project to be explicit about what is required. NH: Given that gadi-transition does work, and Andrew and Andy use it, it wouldn't hurt to put it in now. Work that PL has done to make sure it does reproduce ticks that box. So ready to go. Able to reproduce if we need to. I'll merge it and do some interactive testing. Then people can use it and I can do automatic testing.
PL: What branch will it be merged into? A lot of branches in a lot of repos.
NH: Isolate gadi-transition branches and merge into master straight away. Not bother with other development branches at this stage. Want to get something in master that people can use. In future bring everything into dev as discussed, with master staying stable, just bug fixes, until decide to update from dev. I’ll go through the branches and just bring in the gadi transition stuff. PL: So dev will have submodule changes and master will not? NH: For the time being. With previous discussion we’ll be slower moving on master, to make sure it is working. Having dev will allow us to move that more rapidly. People can run off dev at their own risk. AH: Submodules will remain a named feature branch and pulled into dev at some future time. Should discourage having personal development branches on the main repo. If you want to experiment do it on your own fork. Branches on the main repo should be master, dev or named feature to keep it clean and everyone can understand what they mean.

Stack array errors and heap-array option

AH: Apologies minutes from last TWG meeting are not on the COSIMA website. There is an IT issue with the server. We wanted to follow up with stack array errors.
AH: Did we ever test on raijin with the same compiler? Is there any way we can do a comparative test? Use a raijin image? Any more from Dale about this stack stuff? PL: Haven't heard anything. AH: Last meeting there was some mention of there being a limit on UM stacksize. RY: Already fixed Ilia's issue. Fixed by making stacksize unlimited. RF: Always run with unlimited stack size. When I had the problem it was only fixed by setting heap-arrays small or zero. When I went into the code and changed array allocation from automatic to allocatable the error went away.
MW: If I have an automatic array I get three different heap allocations for three different compilers. RF: This option forces all arrays on to the heap.
AH: This was fixed a while ago Rui? RY: Not clear this is the same problem. Ilia’s issue was the end of 2019 when gadi first on line. Not sure it is the same issue.

BGC Update

AH: Russ forwarded an update to Andy Hogg.
RF: Work was completed on raijin in 2019. BGC code is in to MOM and CICE. Required changes in CICE: moving arrays around to different modules due to scope issues, which allows optional fields to be sent. The main one is to send 10m winds to the ocean, not just the wind stress. Holding off on issuing a PR until the gadi transition is done so it can go in cleanly.
NH: Will be useful for JRA1.4 work.
RF: Hakase will be using it for BGC. Passing algae between ice and ocean components. To add new field, need to add field to code, but don’t have to be passed. Just picked up from namcouple using the flags in OASIS to see if it’s registered.
AH: Can this be the next cab off the rank after gadi-transition, before AK's science tweaks? Not relying on any changes in Andrew's branches? RF: Would like to get gadi transition out of the way and then test these changes. Not tested on gadi yet.
From RF's emailed update: How to proceed? Testing?
I’ve held off issuing a pull request until the dust settles wrt the gadi transition. There’s a bit of code rearrangement in order to allow optional fields (10m wind speed but this can be extended) to be passed from CICE.
The flags ACCESS-OM-BGC (tested) and ACCESS-ESM (untested) enable compilation of the BGC code. The 10m winds need to be added to the namcouple files and the MOM coupling fields namelist.

JRA55-do counter-rotating cyclones

RF: Fortunately Paul Sandery's run starts in 1988. The last reverse cyclone is in 1987. CAFE60 uses a whole-month window, so it is washed out in the average.
One of the RYF runs (83-84) has a reverse cyclone. Tell Kial.

Scaling

PL: Thanks to Marshall for getting me up to speed on scaling tests and sharing scripts. Can reproduce diagrams so can compare between raijin and gadi.
AH: Any more performance numbers? PL: Now in a position to answer questions, just need to know what questions to ask.
AH: ACCESS-OM2-01 currently running around 5K cores, would love to be able to scale to 10K, 20K even better. MW: MOM scaled to 50K. AH: CICE doesn't scale as well. MW: Any work on CICE distributions? RF: Nope. Would need to be done again at higher core counts. MW: Current one working really well. AH: On NH's to-do list was to experiment with layouts and load balancing. MW: Alistair is very interested in load balancing sea ice models. Particularly icebergs. Has some quasi-Lagrangian code in SIS2 to load balance icebergs. Maybe some ideas will translate or vice versa.
PL: For the moment will just look at MOM and see how it scales at 0.1? AH: Maybe just try doubling everything and see if it scales ok? MW: Used to make those processor heat maps to get the load imbalance of CICE. Would be good to keep an eye on that while working with scaling. Tony Craig (CICE developer) is very interested.

Atmosphere/coupled models

 PD: Still using code frozen for CMIP runs. Extending number of runs in ensemble.
AH: People in CLEX are keen to run CM2. PD: Not aware, maybe through someone else, maybe Simon or Martin? CM2 and ESM-1.5 runs have been published under s38 project.
AH: Scott Wales is doing an ultra high resolution atmosphere run over Australia, under the STRESS2020 project. PD: Atmosphere only, do you know what resolution? I've also done some high res atmosphere-only runs. On a project to improve the turbulent kinetic energy spectrum in the UM. Working on code to put stochastic backscatter into the low res N96 (CMIP6) atmosphere. Got some good results injecting turbulent kinetic energy into small scales to improve the artificial dissipation associated with the semi-Lagrangian timestep in the UM. The test is to see how improved N96 results compare to N512 runs using STRESS2020 resources. Working with Jorgen Frederiksen. Should talk to Scott.
AH: At the moment Scott is targeting 400m over Australia. PL: Convection resolving? AH: Planning a 2 day run to simulate Cyclone Debbie. Nested 400m run for Australia, inside BARRA at 2.2km. 10500×13000 points. PD: We're going global. MW: How many levels? Same as global? PD: 85. AH: Major problem is running out of memory. MW: More cores should mean less memory. Maybe the Helmholtz solver imposes some memory limit on the ranks. AH: Currently waiting for large memory nodes to come online.

New FMS

MW: New FMS version coming. Targeting autotools and getting rid of mkmf. If you're on MOM5 you can use your frozen version. Completely rewritten IO in FMS. Now a thin wrapper to netCDF. No more magic functions like save_restart, write_restart. They have been replaced by lower level ops to allow model developers to have more control. Not sure of the significance for MOM5. AH: API compatible? MW: They'll keep it compatible with the old API as long as they can. Could dump it in and slowly integrate. Only raising it in case you want to do more innovative stuff with IO. PL: Affects MOM6 mainly? MW: MOM6 is one of the main targets. PL: Parallel IO support? MW: Part of the reason. They want parallel IO in the atmosphere model, which NCAR now uses. Now an important model. This implements the hooks for that work. RY: Is MPI-IO still there or will it be replaced by PIO? MW: It is still there. RY: Simpler to do one? MW: They've sent a patch to get MOM6 working with that now. Doesn't work currently. Not sure about the progress, but know you were interested in PIO. RF: We're interested from the ice point of view. New version of BRAN will need daily inputs in CICE. Performance is terrible as IO is collected onto one processor. MW: FMS will not help CICE, but it is a test case of whether PIO is a valid solution.

Technical Working Group Meeting, November 2019

Minutes

Date: 27th November, 2019
Attendees:
  • Aidan Heerdegen (AH) CLEX ANU, Angus Gibson (AG) ANU,  Andrew Kiss (AK)  COSIMA ANU
  • Russ Fiedler (RF), Matt Chamberlain (MC) CSIRO Hobart
  • Rui Yang (RY), Paul Leopardi (PL) NCI
  • Nic Hannah (NH) Double Precision
  • Marshall Ward (MW) GFDL

ACCESS-OM2 on gadi

PL: Submodules not updated (#176). Reported a bug from CICE5 but it is not being built. AK: Not sure how to release this. Sometimes model components are updated but not tested. AH: gadi transition branch? AK: Yes. PL: Science bug.
PL: To test had to copy files around. Needed to update config.yaml and atmosphere.json. Made fork of 1deg_JRA55_RYF for testing. Had to move to non-public places as don’t have access to public places. Will send details in an email.
PL: conda/analysis3-unstable needs to be updated, payu not working on gadi. AH: Did update, still not working. The update was only tested in an interactive job. A PBS job strips out the environment. Wanted to consult with Marshall about why payu works as it does currently. Difficult to debug as payu-run does not have the same environment as "payu run". PL: Work-around is to add the -V option to qsub_flags in config.yaml. AH: This is what I am considering changing payu to do by default. Not sure. Currently looking into this.
PL: nccmp module not on gadi. Been using for reproducibility testing. In backlog. RY: Can install personally, don’t have to wait for system install.
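While nccmp is unavailable, a hedged alternative sketch for reproducibility checks is to compare two output files with xarray; this is not a drop-in replacement for nccmp's options, and the file paths are placeholders.

```python
# Compare two runs' output files for reproducibility testing.
import xarray as xr

a = xr.open_dataset("run_a/ocean.nc")
b = xr.open_dataset("run_b/ocean.nc")

print("identical values and metadata:", a.identical(b))
print("same values (ignoring attributes):", a.equals(b))
```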
PL: Running on gadi. Got 1 deg RYF55 finished. Did not have mppnccombine compiled. Will have to do this to get this working correctly. Got something for baseline for comparison. Report by the end of the week.
RY: gadi has 48 cores per node. Default is based on Broadwell (28 cores). Do you have an up to date config? Paul currently changes the core count in his config, but is it done in the official config?
AH: I was in the process of making an official configuration for gadi. Copied all inputs that were in /short/public to the ik11 project. Once directory structure finalised will make a config that runs, update on GitHub, and look at making the same changes for other configs. Make an exemplar config with those changes. RY: Should work on same configs.
RY: Anyone else running on gadi? AH: No.
AH: What are the impediments to others updating ACCESS-OM2 on GitHub? People not sure if they can? How they should go about it? AK: Put my hand up to do this. Other model components also need updating. AH: Maybe a dev branch that everyone pulls from. Easier to make changes without worrying about breaking things. So everyone is working from the same version and doesn't have to re-fix known bugs.
AH: Environment stuff? MW: Something about the python exec command. Nuance? Wholesale copy everything? Wanted to create idealised processes, rather than depend on what users have set up. payu run submits the job to PBS with a whole new environment, explicitly given environment variables.
AH: Drawback is payu-run does not use the same environment as payu run. MW: Not launching a process. payu run submits to PBS and starts a posix process with a defined environment, except when explicitly given environment variables. AH: One work-around is to make a list of environment variables we want to keep. Losing MODULEPATH variables. PL: The module env being used by payu requires modules 3. Modules 4 works differently. Python code from modules 4 may work better.
MW: Fixed? AH: Thought I had, but was fooled because using payu-run. MW: If you set MODULEPATH locally, it won’t be exported to payu run process.
PL: What is the fix? MW: On raijin there was a bootstrap script in the init dir, which sets everything. I duplicated those commands and put them in the payu module to do the equivalent bootstrap. If moving to gadi and it is different, none of that bootstrap script works. PL: The bootstrap script is there, but completely different. MW: Was an old version, and never actually used the bootstrap script. Maybe exec the bootstrap script they provide? AH: Or pass through environment variables that are set already. MW: Do whatever you think is best. Did try and make it so the 'payu run' job was clean and always looked the same regardless of who submits. If we take the entire ENV and submit to run, every run will be different. One variable is a controlled solution. It should be possible for the submitted job to set up the environment on its own. Should get it going and not be held up by my purist notions. AH: Try/except blocks can be used to support multiple approaches. MW: Definitely need to bootstrap the modules. PL: Sent through an email with details.

OpenMPI/4.0.1 on gadi

AH: Angus reported openmpi/4.0.1 seems broken. Has this been fixed?
AG: Any wrapped commands (mpicc, mpifort) will print whitespace before output. In most cases ok, but can break configure scripts. Ben M knows about it, but not why.
PL: Divide by zero error in MPI_Init. MW: Remember that one: UCX back-end, FP exception. Evaluates a log function when working out the binary communication tree. Ben M told them about it, but got nothing back. We use FP exception checking, but can't ignore it for just MPI. PL: Work-around like turning off UCX? MW: Could turn off FP exceptions. A race condition, so not every job sees it. RY: Can turn off UCX. Can use ob1 instead of UCX. Also try that. PL: Wasn't sure it would work on gadi.
AH: Maybe 4.0.1 not a good candidate for testing? Get intermittent crashes.

Russ update on model performance on gadi

RF: Been testing OFAM (Bluelink), compiled as MOM-SIS without doing ice. Performance was fantastic. 2x faster than Sandy Bridge. Don't get hammered with extra cost on the new CPUs. Initialisation was very fast. A lot of files, so might be a low load issue. Dropped from 100s to 8s. Doing data assimilation runs, run 3 days at a time. 25% of the run time was init. Now pretty much zero. MOM5 performance was really good.
RF: Did notice some variation on start up of CM4. Still a lot faster. Reads in a lot more files and a lot more data. Still considerably faster than on raijin. MW: MOM has IO timers, do you have those on? FMS timers. Rui used them a lot. RF: No, didn’t turn them on.
RF: Running CM4 was about 15% faster than Broadwell. Improved but will cost a lot more for decadal prediction. RY: 15% is normal. Martin reported the UM is 30% quicker. RF: SIS2 load balance is bad. Probably a bunch of things being covered up. Needs more testing.
MW: Bob has never talked about SIS2 load imbalance. Presumably oblivious to it. RF: Would have to be. Regular layout would lead to many redundant processors. MW: Alistair has done some iceberg code load balance improvements. RF: Doesn't take much time. Had to turn off iceberg stuff on raijin. netCDF stuff broke it. Might turn it back on. Time spent in iceberg code is minimal.

Stack array errors and heap array option

RF: When compiling need to set the heap-arrays option in the compiler, otherwise get segfaults with stack, even when the stack is set to unlimited. Wasn't an issue on raijin. Happened for both MOM5 and CM4. PL: Dale mentioned the stack size being limited to 8MB. RF: I set the stack size to unlimited, so that shouldn't have been an issue. Got all sorts of issues with unmapped addresses. The first one I saw was an automatic array so I tried moving it to allocatable, which moved the error. Then tried different heap-arrays size options, which moved the error again. MOM5 dropped to heap-arrays 5KB. Same for CM4 but set to zero for SIS2 and it got through. Different models, seems ubiquitous. MW: Intel Fortran?
MW: When compiled and run on Cray machines stack vars use malloc, so they are heap variables, not stack. Same model, same compiler on my laptop (gcc), and the same variables are stack variables. Is it possible that moving from raijin to gadi something is different about malloc? RY: CentOS 7 vs 8 makes some difference. MW: Is the kernel making some decisions on malloc? RY: Had similar issues with the UM. Stacksize unlimited seemed to fix it for the UM. But Dale talked about this in an ACCESS meeting, the kernel changed something that caused this problem.
NH: The Intel compiler has an option to always put arrays on the heap. Useful in some cases. Models can have array bounds overruns, and they are easier to track when they trash the heap compared to the stack. RY: Slower? NH: Depends. Doesn't do it for everything, just the larger arrays. RF: If you just set heap-arrays, all arrays go on the heap. Can control it. MW: In MOM6 there are explicit places we declare variables we know we won't use, contingent on the assumption they are stack vars. Can't make those assumptions any longer.
NH: Surprised to hear it is the linux kernel. Would think it was the Fortran runtime or compiler. MW: Runtime or libc. Couldn't figure out why different results with the same compiler on different platforms. NH: Calculating variable addresses, the compiler computes stack offsets. Looking at the executable there are static offsets. Needs to be done at compile time. MW: Shouldn't be running models that need to use the heap. Should be resilient to either choice. No? NH: Comes down to the algorithms used to manage memory. Heap has an algorithm to minimise fragmentation. Don't have an answer, will need to think about it.
MW: Can you send a bug report for SIS2? RF: Could be everywhere that has run out of stack space. Just the first one I tried to fix this.
AH: What OS are you running on your laptop? MW: Archlinux. Comparing them to the travis VMs. AH: At some point the compiler has to query the system to see what resources are available? MW: The fact that you're typing stacksize unlimited shows you are accessing the kernel. AH: Seems strange, the system has plenty of memory. MW: I'm interested in this problem. AH: The problem should be reported to the relevant NCI people (Dale/Ben?). Potentially affecting a lot of codes. Not tenable that everyone who has this issue has to debug it themselves. MW: Bad memory explicit in the stack, buried in the heap? NH: Can make a huge difference. Layout of memory is different. More likely something on the heap won't affect other variables. More fragmented on the stack. Heap memory is more tightly packed. MW: Fixed a couple of dozen memory access bugs in MOM6 and they take it seriously. RF: Old versions I'm using with the CM4 release. Happens with MOM5. Only FMS is common. MW: Wondering if this is a bug that is hidden by moving from stack to heap.
MW: Using GCC 9.0 to find these. A few flags to find stuff. Initialise with NaNs. malloc-perturb is an environment variable you can turn on and that helps. Turns on signalling NaNs. Any FP op generates an error now. Finds a lot of zeroes in bad memory accesses that didn't trigger errors. Trying to not use valgrind, but that would work also.
RF: Switch in GCC that does something similar to valgrind. Puts in guards around arrays. MW: Don’t know the explicit option, using -Wall, turns it on for me. GCC9.0 is very aggressive at finding issues in a way that 5/6/7 were not.
AH: Try the same compiler on raijin and gadi, to see if it is a gadi-only issue. RF: Not sure if it was the same version of 2019 I was using. AG: There is one overlapping compiler, 2019.3. RF: Recently recompiled the MOM-SIS build. Will look and see if it is the same. AH: Useful data point to see whether the issue is gadi specific.

Update on BGC

AH: Andy Hogg has asked for an update. People at Melbourne would like to use it. RF: On my desk with Hakase. Been promising. Will prioritise. Has been almost there for a while. Been distracted with gadi. On the to-do list.
MC: Do we know who in Melbourne wants to use it? AH: A student, not sure who.

New projects to support COSIMA and ACCESS-OM2 on gadi

AH: /g/data/ik11 is where inputs that were on /short/public will now live. Not sure exactly how this will be organised. Will most likely have input and output directories. Might be some pre-published COSIMA datasets there, as part of a publishing pipeline. AK: Moving data from scratch to this as a holding area? AH: People were using datasets from hh5 that had no status; not sure how to reference them.
AK: Control directories are separate, and not well connected to the data on hh5. Nice to have ways to link things more firmly. AH: To-do for payu is have experiment tracking IDs. Generate UUIDs as unique identifiers for experiments. Will go in metadata file. Not linked to git hash. If they don’t exist, make new ones. AK: Have data on hh5 and the control directories have been moved or deleted. Lose the git history of the runs that were used to generate the output. AH: Nothing to stop that all being in the same directory. Nic has advocated this for some time. Could change the way we do things. AK: Not sure on solution, but flagging as an issue.
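A minimal sketch of the experiment-ID idea could look like the following; the metadata.yaml location and the experiment_uuid field name are illustrative assumptions, not payu's actual implementation:

    # Hypothetical sketch: give each experiment a persistent UUID in its metadata file.
    import uuid
    from pathlib import Path

    import yaml  # PyYAML

    def ensure_experiment_uuid(control_dir="."):
        """Add a persistent experiment UUID to metadata.yaml if one is missing."""
        meta_path = Path(control_dir) / "metadata.yaml"
        # Read existing metadata if present, otherwise start from an empty dict.
        metadata = yaml.safe_load(meta_path.read_text()) if meta_path.exists() else None
        metadata = metadata or {}
        if "experiment_uuid" not in metadata:
            metadata["experiment_uuid"] = str(uuid.uuid4())
            meta_path.write_text(yaml.safe_dump(metadata, default_flow_style=False))
        return metadata["experiment_uuid"]

Because the UUID is written once and then re-read, it survives even if the control directory is later moved or the git history is lost, which is the linkage problem being discussed.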
AH: The published dataset from the COSIMA paper is almost ready. The new location for COSIMA published data will be cj50. To do this publishing I have created a python/xarray tool to create a published dataset from raw model data. It splits data into separate files for each variable, a year per file in most cases. Needs a specific naming convention for THREDDS publishing. Using xarray it doesn’t matter what the temporal range of each model output file is. Uses pandas-style resampling to generate outputs. In theory simple; in practice there are many, many exceptions and specific tweaks to be standards compliant. The same tool can handle MOM and CICE outputs, which are different models with radically different file metadata and layout. If you have something you might find it useful for, it is called splitvar. Also made a tool called addmeta for adding metadata. Do the metadata modification as a separate step as it is always fiddly. Uses yaml formatted files to define metadata. The metadata for the COSIMA data publishing is available.
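A rough sketch of the splitting step as described, using xarray; this is not the actual splitvar code, and the output naming below is only a placeholder for the real THREDDS-compliant convention:

    # Illustrative sketch only: one variable per file, one year per file.
    import xarray as xr

    def split_variable(paths, varname, outdir="."):
        """Split one variable out of raw model output into one netCDF file per year."""
        ds = xr.open_mfdataset(paths, combine="by_coords", use_cftime=True)
        for year, da_year in ds[varname].groupby("time.year"):
            # Placeholder file name; the published data uses a specific convention.
            da_year.to_netcdf(f"{outdir}/{varname}_{int(year)}.nc")

Because open_mfdataset concatenates along time regardless of how the raw files were chunked in time, the temporal range of each model output file does not matter, which is the point made above.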
PL: The published data is netCDF format with all the correct metadata? AH: MOM doesn’t put much metadata in the files. One way to make a better connection between runs and outputs is to insert the experiment tracking id mentioned above into the files. Would be nice to put that into a namelist so that MOM could put it in the file. That would be the best option, and if anyone knows how, I’d like to know. Another option is a post-processing step, on all the tiled outputs. MOM isn’t the only model we run, and not all of them output netCDF. Would be nice if there was a consistent way for payu to do this. COSIMA published data should be up before the end of the year.
PL: Will ik11 replace hh5 and v45? AH: hh5 is storage space that is part of an ARC LIEF grant from the Australian climate community. The COE CMS team was tasked with managing this, and people could ask for temporary storage allocations. In practice it is harder to get people to remove their data. COSIMA was one of the first to ask for an allocation, but it has somewhat outgrown the original intent of hh5, as it has been there for a long time and grown quite large. hh5 might still be used for some model outputs. Not sure. ik11 started because we needed somewhere to put common model inputs/exes, because /short/public went away and /scratch/public is ephemeral. /scratch space is difficult to utilise because of its ephemeral nature. NH: Have some experience with /scratch space at Pawsey. Once you lose data you make sure you have a better system to ensure your data is backed up. Possibly a good thing. AH: Doesn’t suit the workflow people currently use, where they come back and run some more of a model after a break. Suits workflows that create large amounts of data, then do a massive reduction and only save the reduced dataset. Maybe suits the ensemble guys. With our models, everything we create we want to keep. NH: Doesn’t all the model output go to scratch? AH: Yes, but model output doesn’t get reduced, so we end up having to mirror the data.

Technical Working Group Meeting, September 2019

Minutes

Date: 11th September, 2019
Attendees:

  • Aidan Heerdegen (AH) CLEX ANU,  Andrew Kiss (AK)  COSIMA ANU
  • Russ Fiedler (RF) CSIRO Hobart
  • Rui Yang (RY) NCI
  • Nic Hannah (NH), Double Precision

libaccessom2

AK: JRA55 v1.4 splits runoff into liquid and solid. Most elegant way to support? Have a flag in accessom2 namelist to enable combining these runoffs. NH: Is it a problem in terms of physics? Have to melt it? AK: Had previously ignored this anyway, so ok to continue. NH: Backward compatibility!
AK: Some interest in multiplicative scaling and additive perturbations to allow for model perturbation runs. NH: Look at the existing code. Might not be too hard. AK: Test framework for libaccessom2? NH: When I did the scaling it took longer to write the test than to make the code change. It’s all there, could use it as an example. Worth running the tests, don’t want to get it wrong. AK: Not familiar with pytest. NH: In this case just copy the scaling test, modify it, and get pytest to run just that test. Once you’ve got just that test running and passing you’re done.
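For what it's worth, a scaling test in this style could look roughly like the sketch below; the file names, variable name and scale factor are made up, and the real libaccessom2 test suite will differ:

    # Hypothetical pytest-style check that a scaled forcing run equals scale * reference.
    import netCDF4
    import numpy as np

    def test_runoff_scaling():
        """Compare a forcing field from a scaled run against the unscaled reference."""
        scale = 2.0
        with netCDF4.Dataset("test_data/runoff_reference.nc") as ref, \
             netCDF4.Dataset("test_data/runoff_scaled.nc") as scaled:
            np.testing.assert_allclose(scaled["friver"][:],
                                       scale * ref["friver"][:], rtol=1e-12)

Running a single test as NH describes would then just be "pytest -k runoff_scaling".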
AK: New JRA55 now in Input4MIPS. Used JRA v1.3 from that directory and didn’t reproduce. AH: Correct. Didn’t work out why it wasn’t reproducing. AK: Ingesting the wrong files? Should be identical. AH: Never figured out what was wrong. Didn’t match checksums from historical runs. Next step was to regenerate those checksums to make sure the historical ones were correct. Could have been ok, but didn’t get that far.
AH: JRA55-do is now on the automatic download list, should be kept up to date by NCI. If it isn’t let us know.
NH: Liquid and frozen runoff are backwards compatible, but what about the future? AK: Some desire to perturb solid and liquid separately, and/or distribute the solid runoff. NH: Can we just put it somewhere and allow the model to deal with it? AK: In terms of distributing it, not sure. Some people are waiting on this for the CMIP6 OMIP run. Leave it open for the future. NH: MOM5 doesn’t have icebergs? AK: No. Depoorter et al. have published a paper on meltwater distribution. Maybe use a map to distribute it. RF: That’s what they use for ACCESS-CM2. Read in from a file.
AK: Naming convention for JRA55 v1.4 has year+1 fields. Put in a PR some time ago. AH: Problem with operator in token? NH: Should be fine as long as within quotes. AK: Just a string search shouldn’t make a difference.
AK: Can’t get libaccessom2 to compile and link to the correct netCDF library. Ben Menadue tried and it worked ok for him. Problem with the FindNetCDF module for CMake. Not properly supported on NCI. Edited the CMake file to remove this, which could then find netCDF, but used different versions for includes than for linking. Should move to a newer version of netCDF; v4.7.1 has just been released. Have requested this be installed on NCI. NH: Does supported include the CMake infrastructure around the library? If getting FindNetCDF working was NCI’s responsibility that would be great. Difficult getting system library stuff working properly with CMake. CMake isn’t well supported in HPC environments. AK: Ben suggested adding logic to check and not use it on NCI. NH: Definitely upgrade, to 4.7 if they install it.
AH: Didn’t Ben Menadue log in as AK and it ran ok? AK: No, he didn’t do that as far as I know. AH: Definitely check there is nothing in .bashrc. Also worth checking if there is a csh login file that is sourced by the csh build scripts.

OpenMPI testing

RY: Tested OpenMPI 2, 3 and 4 with Intel 2019. Consistent results across all OpenMPI versions, at 1, 0.25 and 0.1 degree. Some differences with Intel 2017, not from the MPI library. Not sure if the difference is acceptable or not? Would like some help to check the differences.
Just looking at access-om2.out differences. Maybe need to look at output file like ocean.nc? RF: Need to compile with strict floating point precision to get repro results. MOM is pretty good. Don’t know about CICE. Can’t use standard compilation options. fp-precise at a minimum.
RY: If this difference is not acceptable need to use flags to check difference between 2017 and 2019? RF: Once get a bit change, chaos and get divergence. RY: Intel 2017 still on new system. AH: So not only newest versions of modules on gadi? RY: 2017 will be there, but no system software built with it. AH: Done a lot of testing. Should be possible to just use 1 degree as a test to get 2017 and 2019 to agree. There are repro build targets in some of those build files. Could try and find them. RY: Yes please.
AK: Any difference in performance? RY: No big difference. NH: New machine? RY: No, old machine, with broadwell.
RY: NCI recently sent out gadi update and blog and webpage. 48 cores/node. NH: Did we think it was 64 cores/node? AH: Still 150K cores in gadi, with 30K of broadwell+skylake. Maybe have to change some decompositions. RY: Not the same as any existing processors.
AH: Two week overlap with gadi, then short will be read only on gadi. RF: There was panic in ACCESS due to an email that said short would disappear in mid October. AH: Easy to misread those dates.

accessom2 release strategy

AK: Harmonising accessom2 configurations. The release strategy is somewhat haphazard, and not tested. Maybe have a master branch that is known good, and a dev branch people can try if they want? Any thoughts?
NH: The good way is really time consuming and labour intensive. It would mean testing every new configuration. Not sure if we can do that. Tried to keep it so that master of the parent repo only references master of all the control experiments. Not sure if that is necessary or desirable? Maybe it makes more sense to develop freely on your own experiment and keep everything in control stable? Not sure. If all control experiments are stable and working, they can be a bit slow to update. Just update your experiment.
AK: Some people are cloning directly from the experiment repos, some cloning all of access-om2. Would reduce confusion if the control directories under accessom2 are kept up to date with the latest known good version. NH: Does make sense I guess. Shame for people to clone something that is broken which has already been fixed. There is some python code in the utils directory which can update everything. Builds everything at all resolutions, copies to public space, updates all exes in config.yaml and does something with input directories. AK: I ended up writing something like that myself.
AH: Should split out control dirs from access-om2 repo. Is a support burden to keep them synched. Not all users need entire repository, as using precompiled binaries. Tends to confuse people. NH: Did need a way for config to reference source code and vice versa. AH: Required to “publish” code? Maybe worth looking into. NH: Ideally from the experiment directories need to know what code you’re using. Probably got that covered. In config.yaml do reference the code and it’s in the executable as well. When run executable it prints out the hash from the source code. Enough to link them?
AH: I recall NH wanted to flip it around and have the source code part of the experiment. NH: Probably too confusing for users. AH: True, but a useful idea to help refine a goal and best way to achieve it.
AH: A dev branch is a good idea. Then you have the idea that this is the version that will replace the current master. Can then possibly entrain others into the testing. Users who want updates can test stuff, you can make a PR and detail testing that has been done.
NH: Good idea. Some documentation that says experiments have stable and dev. When people are aware and have a problem, wonder if they can go to dev, see if it fixes. AK: Bug fixes should go into master ASAP. Feature development is not so urgent. A bit gray, as sometimes people need a feature but they can work off dev. AH: Now have some process for this: hot fixes that go straight in. Other branches are dev/feature branches. Maybe always accumulate changes into dev. Any organisation helps.
NH: Re: Removing experiment repositories: namelists depend on source code. AK: Covered by executables defined in config.yaml. NH: Yes ok.

FAFMIP PR

RF: Did it work? It’s got a lot of merges. RF: Just two lines. Did a merge and pushed it to my branches on GitHub. AH: I’ll merge it in. Just wanted to check. AH: Can always make a new master branch that tracks the origin, check that out and pull in code from other branches. RF: Have a lot of other branches. AH: Can get very confusing.

payu restart issue

AH: Issue has resurfaced. I commented on #193, but didn’t look into the source of the problem. Should look into it rather than talk about it here.

FMS subrepo

AH: Still not done the testing on this. Been sick. Will try and get back to it.

Tenth update

AK: Andy has done 50 years with RYF 90/91. Running stably. AH: What timestep? RF: Think he was using 600s. AK: 3 months/submit. Should ask for a longer wall time limit. RF: Depends on how the queues will be on the new machine, what limits and what performance. AH: Talking about high temporal res output. AK: Putting out 3D daily prognostic fields. Want it for particle tracking, including vertical velocity. Slowed it down a little bit. RF: More slowdown through ice. AK: No daily outputs from CICE.

CICE PIO

NH: Still in progress. AK: Also requires a newer version of netCDF? NH: Requires a specific version of netCDF. Needs a parallel build, and there isn’t a parallel build for every version. AK: There is a parallel build for 4.6.1. RF: There is a bug in the HDF5 library it is linked to. Documented in PIO. Probably a bug we’re not going to trip: doing a collective write where some of the processors take no part/write no data. Fixed in the next version of HDF5, 1.10.4? AH: So not a netCDF version issue so much as the HDF library it links to. RF: Yes. AH: So we should make sure we ask for a version of netCDF that doesn’t have this bug? AK: Will add it to the request.
RY: If you want a parallel version, use OpenMPI 3 or 4? AH: Good question! RY: All dependencies will be available and very easy to use. AH: Is this using spack? RY: Built on top of spack and other stuff. Automatic builds with all possible combinations. AH: Using it for your builds? RY: We were asked to test it and are now using it. Difficult to create new versions currently; the transition is difficult, but in the new system it should be fixed quite easily. AH: Should fix the problem of various versions of OpenMPI with different compilers. RY: Yes. AH: Will there be a compiler/OpenMPI toolchain? RY: It will automatically use the correct MPI and compiler. AH: Any documentation? RY: Some preliminary, but not released. When gadi is up all this should be available.
AK: Should I ask for a specific version of MPI? RY: If you don’t specify, it will be built with 3 or 4. Do you have a preference? AK: No, just want the version with the performance and stability we need. Do we need to use the same MPI version across all components? RY: Not necessarily. Good time to try OpenMPI 3. No performance benefit as the system hardware is still old hardware.

Technical Working Group Meeting, August 2019

Minutes

Date: 14th August, 2019
Attendees:

  • Aidan Heerdegen (AH) CLEX ANU, Angus Gibson (AG) RSES ANU, Andrew Kiss (AK)  COSIMA ANU
  • Russ Fiedler (RF), Matt Chamberlain (MC) CSIRO Hobart
  • Rui Yang (RY) NCI
  • Marshall Ward (MW) GFDL
  • Nic Hannah (NH), Double Precision
  • James Munroe (JM), COSIMA

PIO work with CICE

NH: PIO code in CICE not as complete or thorough as netCDF code. Nothing to suggest it won’t work. Relies on NCAR PIO library, and a CESM utility library. Dependencies which are not part of CICE. Built PIO dependency on raijin, ran into CESM dependency. Can either remove dependency or remove code.
NH: Initially thought to use the MOM approach: tile and collate. Russ’s comments encouraged me to try PIO. It will be supported in future and will be supported in CICE6. Nothing working yet, but will soon test with 1 degree.
RF: Real bottleneck with high freq output. Worth a go. Attempt to put this into FMS by Hartnett. AH: Different to parallel netCDF? NH: PIO is wrapper around parallel netcdf. Written by NCAR to simplify parallel netcdf. Another layer. On GitHub, continuing to be maintained. RY: Wrapper that does work to match computing to IO domain. Not so useful for MOM5 as it has io_layout already.
MW: Hartnett is motivated by FE3 (forecast model) rather than ocean. Not sure what project he’s even involved in.
NH: Big test is handling interesting CICE layout, difference between cartesian grid and PE layout. MW: PIO will support explicit decomposition and other approaches.
NH: Parallel netCDF version on raijin only links with OpenMPI3.0. RY: New machine launched soon. OpenMPI 1.* will be dropped. No new software depending on 1. MW: OpenMPI 2 is not good. Should use 3.
NH: Probably have to test this with OpenMPI 3.0. RY: 3.1.3. Switch everything to that. Good test for the new machine. AH: Working now? RY: My fault. Used a mismatched OpenMPI library. Everything looks fine. OpenMPI 2/3/4 with Intel 19. All working. 1 deg & 0.25 deg working. Tenth not working. MW: I was able to run tenth with 3.1.2/3.1.3.
MW: One of the intel compilers broke MOM. A compiler bug with types in types.
AH: Should start an issue for testing. RY: Will email MW directly. RY: Not a MOM bug.
MW: Tried MOM-SIS tenth? Good test. RY: From earlier this year do have this working. This is testing for new machine, so ACCESS-OM2.

OMIP date restart protocol

RF: Talked to Griffies. GFDL take ensemble approach. Run for N years using true dates. At finish reset back to start date with correct calendar. Storing new stuff in different directory. End up with 5 sequences of 55 years. All dates are correct. No issues with leap years going wrong. Think this is the best way to go.
AK: Came to the conclusion that this was the right way to go, mostly due to the leap year issue. The problem is whether we can get the model to do that; Maurice and Ryan had issues. Issue with CICE getting the correct date. CICE has a flag “use_restart_dates”. Suggested setting this to false and setting the dates in access_restart.nml, but CICE is not picking up the dates. Looks like libaccessom2 is not passing them on to CICE. Some confusion about exactly what they have done. There are some instructions on the wiki for restarting (restarting an IAF run from RYF at tenth), but they don’t work for other people. NH: I’ll look at it. AK: Will send the issue. NH: Didn’t realise it was happening. CICE date handling is not great.
AH: Downside with the ensemble approach: difficult to get metrics across the whole time series. RF: Need extra metadata added in. Maybe which cycle you’re in. An extra variable which gives the actual number of days since the start of the run. Done with post-processing. Might be able to concatenate files using the extra metadata. AH: Always have issues with missing leap years if it spans a century. But only daily is an issue. AK: The Cookbook could do something. MC: Pretend it is noleap? JM: Is the data being looked at as a time series? AH: Extra metadata, say an offset day, is a good idea. RF: Add a buffer in the netCDF file so you don’t need copies. mppnccombine can add padding. Usually done with nccreate, making sure the header has some space. hbuf?
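A sketch of the extra metadata RF suggests, added in post-processing; the variable and attribute names are illustrative, and it assumes the file's time axis is already in days:

    # Illustrative post-processing step: record days since the start of the full
    # multi-cycle run, plus which forcing cycle a file belongs to.
    import netCDF4

    def add_run_offset(path, cycle_number, cycle_length_days):
        """Annotate one cycle's output file with its offset in the whole run."""
        with netCDF4.Dataset(path, "a") as ds:
            if "days_since_run_start" not in ds.variables:
                v = ds.createVariable("days_since_run_start", "f8", ("time",))
                v.long_name = "days since the start of the first forcing cycle"
            ds.variables["days_since_run_start"][:] = (
                ds.variables["time"][:] + cycle_number * cycle_length_days)
            ds.setncattr("forcing_cycle", cycle_number)

With that variable present, files from the five 55-year sequences can be concatenated or analysed as one continuous record without touching the "true date" calendar.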

Strategy for CICE updates for flexibly adding fields

RF: Way CICE drivers work, variables you want are either hard coded, or muck around with pre-processing to compile them in and out. Wondering if anyone looked at doing it on the fly. Using error codes coming back when setting up variables, so have flexible number of variables passed in and out. Would like this to pass total wind speed, to harmonise code. Also Hakase wants it for some BGC stuff. Phytoplankton through to the ice. So specify the variables, work out if they’re there or not.
NH: Would want the exe to handle configuration with different sets of coupling fields. Sometimes include total wind speed, sometimes not. RF: would know complete set, if not there skip it. Currently have to be hard wired in, or make another driver. NH: Way to do it, start with superset in namcouple, and code would exclude certain variables. RF: Maybe if variable not in namcouple, return an error code, but ignore error. NH: Shouldn’t be too hard to do. NH: OASIS does return error codes that could be used. Either abort or return error code. If aborting could change that. AH: Restart fields? NH: Should do behind the scenes.

Paths for JRA55-do forcing files. Some changes to support v1.4

AH: JRA55-do is now part of Input4MIPs, under CMIP6. Have to use the copy that is in CMIP6. It encodes all the metadata in the filename, and consequently doesn’t currently work with YATM. Circumvented this by creating symbolic links that worked with YATM. When I did this it couldn’t reproduce. Not sure if this is actually an issue with the fields being different or not.
AH: Tried to use testing framework NH developed for this using jenkins. The historical test that tests against known checksums doesn’t seem to actually compare them. Not sure if that is intentional. Would like to use framework, as NH has done a great job with it.
MW: MOM6 has diag_mediator, which supports a CMOR name alongside the internal model name. Porting it to MOM5 is a big task, but the idea is good and saved them a lot of work. Could create a thin wrapper to translate to the CMOR name if that helps. AK: How does it integrate with YATM? MW: Don’t know. It’s at the FMS level, so only helps with one model (MOM). AK: YATM accesses the JRA files, so it would be a libaccessom2 change. AH: Looked at the YATM code. It generates the filename from the date. Input4MIPs files span the current year and the next year, so it would require code changes. Might just be easier to create a file with a date->filename mapping? That is possible to do; would need to add a token for year+1. Probably best to do it that way.
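As a toy example of the date-to-filename mapping idea; the naming pattern below is illustrative only, not the actual Input4MIPs convention:

    # Illustrative sketch: map each calendar year to a forcing file whose name
    # contains both the current year and year+1.
    def build_forcing_map(field, first_year, last_year):
        """Return {year: filename} for files spanning year..year+1."""
        return {year: f"{field}_{year}0101-{year + 1}0101.nc"
                for year in range(first_year, last_year + 1)}

    # e.g. build_forcing_map("friver", 1958, 2017)[1990]
    # -> 'friver_19900101-19910101.nc'

Either a pre-generated mapping file like this or a year+1 token in the YATM filename template would avoid hard-coding the new naming scheme.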
AK: Also need code changes for v1.4. Solid and liquid runoff are separate. What to do with the solid runoff? Griffies says they either use an iceberg model, or melt it and add it to the runoff. Take into account the latent heat of fusion? Assuming the solid runoff is at zero degrees, which could be a problem. Put in a request to download v1.4. The scripts they have should automatically download it, but it hasn’t appeared. MW: Think GFDL only has v1.3.
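A back-of-the-envelope sketch of the "melt it and add it to the runoff" option, including the latent heat AK raises; the sign convention and the assumption that solid runoff arrives at 0°C are illustrative, not a vetted implementation:

    # Illustrative only: combine solid runoff into liquid runoff and return the
    # heat flux the ocean must supply to melt it (negative = ocean cooling).
    LATENT_HEAT_FUSION = 3.34e5  # J/kg, latent heat of fusion of ice

    def combine_runoff(liquid_runoff, solid_runoff):
        """Inputs in kg/m^2/s; returns total runoff and melt heat flux in W/m^2."""
        total_runoff = liquid_runoff + solid_runoff
        melt_heat_flux = -LATENT_HEAT_FUSION * solid_runoff
        return total_runoff, melt_heat_flux

If the solid runoff is actually below 0°C, an extra sensible heat term would be needed, which is the problem flagged above.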
MW: Fields go to end of 2017, is 2018 downloaded? Looking in wrong place? Looking in ua8. AK: Should look in qv56. AK: qv56 up to feb 2018. AH: If not automatically downloading, we should ask. What does the OMIP protocol say about end date? AK: JRA55 can find out about 2018. RF: It is specified, but would like latest for ongoing runs.

Testing FMS merge

AH: Putting FMS in as a sub-repo. Just needs testing. If it reproduces checksums for a month we’re sure it is ok? Is that sufficient?
NH: When Marshall upgraded FMS, went through every MOM test. Including 0.25. Can’t recall how strict we were. AH: Testing framework still there? NH: It is there. Because it never gets used, might be rotted a bit. Can give Jenkins URL of PR and it would do it. We should work together to get that working.

New NCI HPC hardware announcement

RY: System by the end of the year. Two phases: first install the new machine with Cascade Lake nodes, with a short period where gadi and raijin run simultaneously. After that skylake and broadwell will be merged into the new machine and the SandyBridge nodes removed. 100 GPUs installed; 16 skylake K80 nodes. PBS Pro again. Storage and network are InfiniBand, 200GB/s transfer speed. OS is CentOS 8. AH: Trying to figure out the total core count for the new machine. Do you know what the core count will be? RY: Not clear on the exact number. Can check with the system guys if they know. If 32 cores/node, 150+K processors. AH: Will runtimes be extended for the new machine? Find 5 hours too low for high core count jobs; it reduces flexibility. RY: Queue time limits are per project. Quite flexible. Contact NCI help. AH: Have asked for time limit changes in the past, but they are usually time limited. RY: Have been asked by other users, not sure about the policy. Good time to ask and get a better policy for the new machine.

Technical Working Group Meeting, July 2019

Minutes

Date: 1tth July, 2019
Attendees:

  • Aidan Heerdegen (AH) CLEX ANU, Angus Gibson (AG) RSES ANU, Andrew Kiss (AK)  COSIMA ANU
  • Russ Fiedler (RF) CSIRO Hobart
  • Rui Yang (RY) NCI
  • Peter Dobrohotoff (PD), CSIRO Aspendale
  • Marshall Ward (MW) GFDL
  • Nic Hannah (NH), Double Precision

Config checking

AH: Made a payu configuration checker. Includes safety checks for the syncing scripts in the BASH scripts. Interested in checking for bad namelist options. Russ, any specific bad ones?
RF: KPP kbl standard method should be false; the Red Sea fix is used in ACCESS-CM. For a CM model maybe warn; for an OM model it’s just not allowed. AK: nprocs and ncpus driver issue?
NH: diag_step checks? Too frequent is ok for low res, bad for high. RF: For production runs you don’t want it? But it’s how you diagnose problems. Best way to find how things are going wrong. Maurice’s issue was trivial to spot. AH: Definitely say we don’t want debug_this_module turned on. RF: diag_table debug turned on should be turned off; it creates huge numbers of messages. AK: Setting up new updated configs. Ten configs. Making them more homogenised. Fixing all these things as they go. AH: These things will get changed by mistake. Don’t have enough people to keep checking things; it doesn’t scale. This will allow new users to submit a config that at least passes these checks, and also gives others confidence to change a config knowing they have produced something that meets some minimum standard, appropriate for public facing production.
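A sketch of what such a check could look like with f90nml; the namelist group and variable names below are placeholders, and the real checker would encode the options Russ lists:

    # Illustrative config check: flag banned namelist settings in a control directory.
    import f90nml

    # (namelist group, variable): value that should be rejected for OM production runs.
    # Group/variable names here are examples only.
    BANNED = {
        ("ocean_vert_kpp_nml", "debug_this_module"): True,
    }

    def check_namelist(path):
        """Return a list of banned namelist settings found in the given file."""
        nml = f90nml.read(path)
        problems = []
        for (group, var), bad_value in BANNED.items():
            if group in nml and nml[group].get(var) == bad_value:
                problems.append(f"{group}%{var} = {bad_value}")
        return problems

Run against each namelist listed in config.yaml, this gives the kind of minimum-standard gate described above without anyone having to eyeball every change.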

Tenth Run

AK: Andy Hogg is running ACCESS-OM2-01 with JRA55-do RYF90/91, which seems to have smaller biases than the previous repeat year (84/85). Currently at 13 years. When it does die it is a CICE CFL problem. Sometimes on the same date; in subsequent years it didn’t occur. Checked dates. A storm goes near the tripole. Not currently messing with the forcing winds. Did this with 84/85 but this case doesn’t seem as bad, so haven’t done it so far. One drawback: it doesn’t run at dt=600s, and takes 2.55 hours to do 3 months. With dt=600s it could do 6 month submits, which would mean less queue wait. Should be straightforward to fix the winds to enable this. AH: Not a priority considering the extra SU cost. AK: About 10%. Cost of losing a 6 month submit halfway through > 10%. AH: Shame it is the tail wagging the dog. AK: Could ask for a 6 hour limit from NCI? AH: Worth trying. Done it before, and have seen others with increased limits. Prefer to do it, but it’s time limited, and just for one project. AK: Hopefully limits will change with the new machine. Currently 65KSU/3mths. RF: MOM or CICE bound? AK: Fraction of time MOM is waiting is 2-3%. RF: Not greatly MOM bound. Throw a few more processors at MOM to get it to run in less than 2.5 hours?
AK: 40 years. IAF not split, just start from climatology. AH: When will IAF start? AK: No plans, not simultaneously run RYF and IAF.

NCI update

AH: Attended an NCI scheme manager meeting. Mostly about new storage scheme for short term storage. Push came from CSIRO to change to scratch model, but some others in CSIRO not happy. PD: Wasn’t aware that was being driven from this end. Maybe further up the food chain.
AH: Change to time-limited scratch, or a tidal model deleting oldest data first. Maybe a split scheme with old style short on one disk, time limited on another, but not a lot of appetite for that.
RY: First stage in November. Our group is looking at HPC applications for the new machine. Already have ACCESS-OM from Andy Hogg to look into the software state. On the new machine some old libraries will not be maintained.

OpenMPI3 and ACCESS-OM2

RY: Recently used ACCESS-OM2 with OpenMPI 3.0. Seems to hang? Do we know about this issue? Or should we avoid 3.0? Some work required to run on the new machine; will spend some time on this.
AH: Marshall, any ideas? MW: Have tried 3.0.0, 3.0.1, maybe 3.1.1. Earlier ones didn’t work, then got fixed. The newest 3.x should work. RY: Tried 3.1.3, MOM keeps hanging until the end of the job. Should finish at 40 min. Keeps hanging. 1.10.2 works, 3.1.3 hangs. MW: Sure I got it running. Will make sure the configs are in the repo. RY: Catch up with you personally? MW: Where it hung should tell you something. RY: Talk later offline.
AH: Definitely need it working on the new machine. MW: No work needed to be done, it just worked. AH: What changes would you have made? MW: Just versions, environment file and flags. Maybe using some of the alltoallw changes, but I don’t think that was a deal-breaker.
AH: What is the minimum OpenMPI version supported on the new machine? RY: Under discussion. System guys will decide. Have to prepare for any. Not sure OpenMPI 1.10 will still be supported. Don’t know. AH: Likely to be OpenMPI 3.x+? RY: New machine with new architecture. Performance enhancements with the new architecture. MW: What arch? RY: Now have Skylake; this is newer than Skylake. MW: Intel architecture, not Ryzen. AVX512 we can’t benefit from; FMA we already have. AH: AVX512 because we can’t vectorise enough? MW: Currently vectorising, but bandwidth limited. Ryzen has better bandwidth. RY: Not announced. No idea. AH: At the scheme managers meeting it was an Intel chip. Told it was November when they commission the new nodes and take equivalent raijin nodes offline. Iron out the bugs, and early next year they will turn off the rest of raijin and turn on the rest of the new machine; at that point it will be larger than raijin is now, but not a huge increase in compute. AH: Thanks for bringing that up Rui, as we definitely need to keep an eye on this for the new machine.
MW: Apparently used 3.0.3. Maybe a reference point to start with. RY: Start with 3.0.3? MW: Whole space is volatile, some 3.0.* series work some don’t. But start with 3.0.3 and Intel19.
AH: Would be nice to have a spack-like build tool so we can say for certain what was run. MW: payu build! AH: spack is written by a smart guy from TACC, and lots of people use it, and they still have a lot of issues. Not an easy problem to solve. MW: Dale was keen on it. AH: When we met with Dale he was thinking of having spack as a tool preconfigured with compiler toolchains that we can build our tools from. RY: Dale is very busy getting ready for the new machine.

Splitting off FMS

AH: Been working on Cmake to compile FMS separately from MOM. Been using the FMS fork in mom-ocean repo with your alltoallw changes. MW: Also a branch on the GFDL repo with those changes.
AH: How to organise the FMS fork? Have a branch that tracks GFDL and master contains our local changes? Could have a branch called gfdlmaster, could have our master branch exactly track the GFDL FMS. Any opinions on how to organise this? MW: Don’t want to use GFDL FMS? AH: I want an easy way to update FMS without touching MOM source tree. MW: Want to get FMS out of MOM? AH: Yes. MW: And want to know how to refer to FMS you want to use? AH: FMS we want to use is a fork on mom-ocean. Gives flexibility to add changes when we need to.
MW: Best to have your own FMS fork. GFDL don’t want to support anything except GFDL’s own needs, which excludes MOM5. They don’t really want to get involved in supporting other projects, but will be receptive. No harm in using the FMS repo straight, but if doing anything with FMS you’re better off maintaining your own version and updating as you see fit. They don’t see compatibility with older models as a priority. Planning a big IO rewrite. Wouldn’t be surprised if it starts breaking and isn’t salvageable.
AH: alltoallw we definitely want on our architecture as we’ve had issues in the past? MW: A lot of work, return not what I’d hope. Latest MPI version bigger impact. Are cases with speed up, but such an infrequent operation not such a big deal. AH: Stopped initialisation hangs? MW: Yes, some rare scenarios where they did alltoall with point to points that broke a lot. In OpenMPI 2.0/3.0 and later they changed something, scenario no longer happened. Segfaulted before, now properly checking. Only necessary for 1.10. It is better, as collectives are generally more responsible. May become necessary, assuming 3.0.3 works.
AH: If want alltoallw, would keep a branch with those changes and rebase on to gfdl master. This would be a well documented branch, or branches, and a well documented way of applying those changes when an update is required.
MW: Can CMake build FMS as libfms and link it to MOM when you build it? No submodules, rely on CMake. Does that work? AH: FMS is not suitable to be a loadable module. Get OpenMPI conflicts; best to build at the same time with the same compiler toolchain. There is a new CMake feature called FetchContent that can grab a repository and it behaves like it is physically in the source tree. Works well, but not great versioning. MW: Isn’t Nic already doing something like this for ACCESS-OM2 to pull in specific versions of json-fortran? AH: Yes, you can specify a library git hash. The only thing stopping it from working is relocating the versioning string stuff Nic did, as it is currently sitting in the FMS directory, and that is going to disappear. Needs its own directory, maybe ocean_shared? RF: ocean_shared is used for other tracers. MW: Should not use that name. AH: Ok, will make a new directory called version. Can recreate the sed script functionality that is currently in the build script in CMake using template files. Quite a clean solution. I have a cmake branch on the MOM5 repo and an FMS fork on mom-ocean, will get them compiling properly and working properly together. There is a way forward.
MW: Alistair is pretty interested, might be a template for MOM6. AH: Angus already did this for MOM6? MW: Angus, is what you did still viable? AG: Haven’t tried recently, don’t know why it wouldn’t work. Replicating the mkmf process in CMake. MW: Automake is not good and I won’t touch it. AH: Surprised there was no way to build FMS from the FMS repo. Relies on being imported into another project that knows how to build it. Not sure it is great that a project can’t build itself. MW: CMake support not widespread enough? Not available everywhere? AG: Updates frequently, can have features that break old versions. Used in a lot of projects. Surprised if it went away. AH: CMake can be brilliant, but also terrible, but better than mkmf. MW: mkmf is doing two jobs, importing stuff and working out dependencies. It does work well for the latter job. Set a high bar. AH: Haven’t done proper comparisons, but CMake seems to be better for dependencies. Can do parallel builds with CMake that you can’t with mkmf. MW: mkmf just generates a makefile, which is already parallel. AH: So does CMake. AG: Doesn’t seem like a good makefile, don’t know if the dependency tree is deficient. Rebuilds too much even after touching a single file. MW: If CMake intelligently supports mod files then it is fantastic. AG: Has native Fortran support. AH: From a speed point of view, CMake is better. Generated correct dependencies so that parallel compilation worked. Couldn’t do that with mkmf. Also had compilation cascade issues. MW: I build 5 exes at once, so it always looks fast to me. AG: (to MW) It’s the same makefile generation as mkmf. MW: More readable makefile than automake? AG: Yes. More readable than automake. AH: When the magic works CMake is great, when it doesn’t it is a pain, but the magic is worth it. Also supports multiple architectures.

Codebase

RF: Aidan, can you approve the change to FAFMIP? Starting to get conflicts. Ryan’s changes put it all in conflict. Riccardo has disappeared, but with Fabio’s changes it is all the same bit for bit. AH: Current conflict is in ocean_frazil. RF: Because you put Ryan’s changes in. AH: Sorry. Could rebase on Ryan’s changes. Maybe pull in Ryan’s changes. AG: Could check out the branch, make changes and push to the branch. AH: I’ll try doing it directly on GitHub, get back to you about it. RF: Get that done and I can finish up some of the WOMBAT stuff. With the ESM model I also have to make some changes to CICE. A couple of design things with the number of fields that are passed; hard wired at the moment. A couple of issues there. Have a chat at a later stage. Rather than hard wiring fields, want flexibility: test error codes, make it compatible with namcouple, so it can be done on the fly. This also feeds into the BGC Hakase is putting into CICE. Need to pass BGC fields between the two modules. Rather than having a plethora of drivers, or CPP directives, there are better ways to do it.
AH: Made that change on GitHub and merged it. Once checks are finished will accept the PR.
MW: Been working on a test with MOM6 where we turn on every diagnostic, fantastic for finding bugs. Found nearly 2 dozen bugs. We don’t actually register the diagnostics with FMS, just spoof the whole thing at the diag_mediator level, which is a wrapper around the diag manager. Interesting if this could be translated to MOM5. Don’t know a natural way to do it, but might be worth some thought at some point. RF: Code you’re putting into MOM6, not the diagnostic manager? MW: Yes. FMS moves too slowly, very conservative; they don’t have a robust test framework so are worried about putting in changes. There are some hints that maybe this code could be shared with MOM5. Lots more in there than just this. Just raising it as food for thought. AK: Put it in as an issue? MW: Opposed to those sorts of issues, but you can if you want.
AK: Want to set up new vanilla reference versions of the 1 and 0.25 deg ACCESS-OM2 models. The forcing on those uses 2nd order conservative interpolation. There are overshoots for some fields which have to be positive definite. Would like 1st order conservative for some fields. Do they exist? NH: They should be there, we were using 1st order for a long time, and they should be in the input directory. Not sure how well they are named. It should say in the filename; have a look and if you can’t find them we can recreate them.

Technical Working Group Meeting, June 2019

Minutes

Date: 19th June, 2019
Attendees:

  • Aidan Heerdegen (AH) CLEX, Andrew Kiss (AK)  COSIMA, ANU
  • Russ Fiedler (RF), Matt Chamberlain(MC) CSIRO Hobart
  • Rui Yang (RY) NCI
  • Peter Dobrohotoff (PD), CSIRO Aspendale
  • James Munroe (JM) Memorial University

FAFMIP

RF: FAFMIP is going into MOM. Riccardo will do his tests. Don’t expect issues. AH: Did Fabio notice problems? RF: It started with the ice formation used by ACCESS not being coded up. Did that and then noticed the way things were being done didn’t match what was in the literature. Mismatch between what Griffies did and Riccardo wrote. Now at a stage where that is consistent. Talking with Trevor McDougall about the equation of state. What is coded in MOM is not totally consistent with what the protocol says should be done. All groups do it a little differently. How badly can we violate the freezing condition and still get reasonable results? If you do this incorrectly it can fall below freezing and not form frazil. Behaves ok down to -3 degrees. Hopefully won’t get that far. There are other approaches, have to have a think about that. Will stick with what is done currently. AH: Modifications? RF: Look at other mods to see if we can do it more consistently. AK: More consistently without an additional tracer? RF: Still need an additional tracer, but it is more consistent: temp and redistributed heat tracers see the same values of frazil. The way Griffies et al constructed it you get slightly different values. Not completely clean. Can’t get runaway with one of the tracers. Safe but not the right way. Other ways: fix the problem with implicit diffusion. The code as it stands is at least consistent with what has been written up. MC: None of these full TEOS-10? RF: Yes, TEOS-10. Also had to fix the conversions to potential temperature. MC: Dealing with salinity etc? RF: Simplified version. Need these changes to do FAFMIP correctly.
AH: Any other ramifications? RF: None. All changes only take place in this style of experiment. Everything separate from other experiments. Only issue was prognostic versus pot temp.
AH: Merged independent of the WOMBAT stuff? RF: No. WOMBAT stuff relies on changes on ocean_sbc. Have to rebase. Get FAFMIP in first.

WOMBAT

RF: Haven’t had a chance to sit with Matear and test it properly. Just a few changes needed from the current code. Hopefully can pin down Matear. AH: Hakase with WOMBAT in tenth? RF: Yes. Hakase will test. Currently inputting winds via a file rather than through the coupler. MC: Richard Matear is working directly with Hakase.
RF: Few lines in the coupler that I have to add and a namelist item. In namcouple file need to pass 10m winds. It is in CM2 code, but not in OM2. AH: Can Hakase work with ice BGC stuff in his current setup? Is this slowing him down? RF: No idea.
AH: Few weeks? RF: Have to rebase WOMBAT stuff.

CICE Mushy ice

RF: Code suddenly got changed and altered and no-one knew why? AH: Nick been keeping our codebase up to CICE6. RF: He made other changes that caused problems. That code also moved to CICE5 svn repository. AH: Backporting to CICE5? A lot of assumed logic in those code changes. RF: Have to familiar with POP code makes salinity changes. Doesn’t go through the surface like MOM. The clause where

ktherm=2

“this is done elsewhere”, not true for all models. Nowhere in the code those salt fluxes are being calculated. AK: Proof in runs, results show drift. RF: Looking at it, needs that if clause removed for coupling to MOM. AH: We’re not part of any CICE6 test suite so they can’t spot errors. AK: Elizabeth Hunke said consortium was open, anyone can join. Have a comprehensive testing regimen. Get more involved so they test our use cases? AH: Definitely need more oversight on code changes into CICE. JM: Any testing when code changes added to CICE5? AH: Not currently no. Nic has some scheduled Jenkins tests but not sure on the status of those.

AK: Hit problem as using mushy ice. Wouldn’t see it otherwise. Using to overcome bug in other scheme, but don’t really want to use it. Slow, don’t need. AH: Can we fix it? AK: Iterative solver fails in high res case. Happens in fresh water regions with low ice concentration. Had intended to dig down more. AH: Would struggle to find this bug anyway as we wouldn’t routinely test tenth.
AH: It’s fixed now. AK: Not sure about any other problems with changing the parameter setting. Took a lot of digging. AH: Don’t want science changes without reason.

Ob runoff

AK: Not sure how important this is. Shows how the runoff code can fail. We cut away a lot of the Ob estuary due to small grid cells causing instabilities. Runoff is done on the fly: find all runoff that is on land, move it to the nearest coastline, then check for high runoff and spread it out if over a threshold. Some runoff goes to an embayment to the west; the changes to the Ob mean that is now the nearest bit of ocean. See the GitHub issue.
Not sure how important it is. Similar issue with the spreading out. Uses a kdtree to find neighbouring points. Doesn’t account for whether there is land between those points. JM: It can tunnel. AK: What could be done to make land impassable?
JM: Resolution on that discussion?
AK: Not sure it is high enough priority to spend time on. AH: Use connectivity? Like what is used to find isolated water bodies. Move land runoff to the nearest connected wet cell. AK: Depends on the runoff being in the ocean in the first place? AH: Yes. RF: If it can get to the right place, just smear it out and use neighbouring ocean points. AH: Is all JRA55 runoff currently on a wet cell on the JRA55 grid? AK: Don’t know if it is a wet cell, it is on the coast. AH: Need to look into that.
AH: How important? AK: Not paying close attention to the Arctic. The correct volume of fresh water, just in a slightly wrong location, and severe liberties have already been taken at that location. Points to a failure mode of this method: it can cross land.
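An illustrative sketch of the nearest-coastal-cell step being discussed: a straight nearest-neighbour search in lon/lat, which has exactly the land-crossing (and metric) problems noted above. The 2D mask and coordinate arrays are assumed, and longitude wraparound is ignored:

    # Illustrative only: move land runoff to the nearest coastal ocean cell by
    # straight-line distance in (lon, lat). Land between source and target is ignored.
    import numpy as np
    from scipy.spatial import cKDTree

    def remap_land_runoff(runoff, lon, lat, is_ocean, is_coastal):
        """Move runoff on land cells to the nearest coastal ocean cell."""
        tree = cKDTree(np.column_stack([lon[is_coastal], lat[is_coastal]]))
        coast_flat = np.flatnonzero(is_coastal.ravel())
        out = np.where(is_ocean, runoff, 0.0)
        for j, i in np.argwhere(~is_ocean & (runoff > 0)):
            _, k = tree.query([lon[j, i], lat[j, i]])
            out[np.unravel_index(coast_flat[k], runoff.shape)] += runoff[j, i]
        return out

Making land impassable would mean replacing the k-d tree query with a search over ocean-connected neighbours (the connectivity idea AH raises), at some extra cost.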

Splitting FMS and other components

AH: You want to talk about other components as well Russ?
RF: If we start doing things like that to the MOM repo, will that affect anyone else who already has stuff from there? Will it cause problems if they want to update after we move to a different setup?
AK: The proposal is to put the FMS codebase into a different repo? AH: Yes, you then can’t compile without pulling from another repo. RF: Not sure how it would all work. Use submodules? JM: Is it in a submodule right now? RF: Not for MOM5.
AH: I proposed to use CMake to create an alternate way of compiling to pull in those libraries from external repos. Could keep the FMS directory in the repo, but at some point the MOM5 code may use features in an updated FMS that are incompatible. However, they can always pull from a previous commit. Could tag a commit as the last one that had FMS included. Marshall did update FMS in the past. Desirable to go this way, to have a tighter coupling with changes in FMS, put in pull requests to main repo for features we want.
AH: Got CMake working for half the builds. Super simple to swap out external library, already compile it separately. Will finish this so people can test as proof of concept.

Langmuir KPP

AK: Progress with ACCESS-CM2. Turned on the Langmuir parameterisation for KPP and it improved Antarctic Intermediate Water. Should we turn it on for OM2? RF: Our coupled runs got an improvement in the Southern Ocean. Were getting shallow summer mixed layers; it helped deepen them a little bit. Different types of simulations, but it works in the right direction.
RF: Not sure if shallow mixed layers in the Southern Ocean over summer are an issue here? If they are shallow, it could be good. AH: Turn on/off or a parameter? RF: Just turn on/off. Pretty sure I changed ACCESS-OM2 to get the wind coming through. Might need a change in namcouple. AK: Need wind velocity as well as stress through the coupler? RF: Two ways. Both have been enabled. Standard is to pass 10m winds as well as stresses. The other way, if you don’t pass winds, is a flag in the KPP scheme that can derive 10m winds. MOM6 does it that way: pass through stress and calculate 10m winds. AH: Would it still work without passing winds? RF: If forcing the model with stresses and you don’t have winds, this is an alternate way. Not being used currently as most models can pass wind.
AK: Might be a good time to compare OM2 and CM2. Perhaps there are beneficial changes from one or the other? Might just be model specific changes?  AH: How would this happen? AK: Maybe a meeting. Sent an email to Dave and Peter. Look at the namelists and input files.

Other updates

PD: Not up to much. Interested in getting the models aligned and the best outcomes for both. Maybe have a small VC and discuss. A fairly complicated set of outputs, suites etc. Can be difficult navigating this structure. Definitely encourage talking about it.
AH: What is the status of your runs? PD: PI control is up to year 950. A lot of that is spinup. Historical forked around year 900, plus a 4x historical. This is CM2. No carbon cycle. Two submissions: ACCESS-ESM-1.5 has the old atmosphere and CICE with updated MOM; ACCESS-CM2 has a much newer atmosphere and full aerosol scheme, is 5-6x slower, but no carbon cycle. ESM is a lot further along. CM2 is not as advanced. Took some time to reach equilibrium. AH: Happy with the results? PD: Yeah, seems pretty good. Climate sensitivity seems about right. Sensitivity is a lot higher for CMIP6 than CMIP5.
JM: Will attend meetings going forward. To complement some stuff Angus is doing on the cookbook.

Technical Working Group Meeting, May 2019

Minutes

Date: 15th May, 2019
Attendees:

  • Aidan Heerdegen (AH) CLEX, Andrew Kiss (AK)  COSIMA, ANU
  • Marshall Ward (MW) GFDL
  • Russ Fiedler (RF), Matt Chamberlain(MC) CSIRO Hobart
  • Nic Hannah  (NH) Double Precision
  • Rui Yang (RY) NCI

Agenda

– Follow up on migrating FMS to an external library
– WOMBAT in harmonised MOM update and testing
– Tenth load balancing
– CICE IO bound in high core counts

CICE IO bound in high core counts

AK: Runs with the new CICE executables NH compiled a while ago. Performance slowdown with compression level 5. Tested with level 1: files only a few % larger, and IO time went from 2500s to 1800s. 1300s without compression. Compresses well even at a low level because there is a lot of missing data with ice.
NH: Went from netCDF3 to netCDF4. Might be worth trying no compression. AK: Have a run with compression level zero. RF: It does impact walltime. MOM is waiting. Usually have CICE waiting on MOM, but when outputting it is the other way. MW: Compressing MOM before, now both? NH: Compression and daily output are an issue. AH: What is the chunking? RF: Uses the default. AH: Some libraries choose weird values for the time dimension? RF: No funny business, all sensible. RF: All these point-to-point gathers, maybe not efficient. MW: Do you know where the time is taken? RF: Slowdown, but not sure of the split between gather and write. NH: Breaking new ground: daily output, running at scale, and an unusual tile distribution. Increases the comms to gather. So many different new things. MW: On sect-robin still? AH: 10% of total runtime.
NH: With MOM we do all this with post-processing to keep the performance of the model as good as possible. Anything that slows the model as a whole should be post-processed. Didn’t think about that option when I put the change in. If it is slowing things down as a whole, back out the change and work out a post-processing step. AK: Half the data in the daily files is static. Totally unnecessary. Made an issue to maybe output static data to a file once. RF: Aggregate daily files to monthly? AK: Slows down output from the model. Less compressible? RF: Highly correlated, will compress easily. AH: How much extra wait time? RF: The whole write time. AK: 25 or 18% of MOM runtime. AH: Monthly output issue disappears? RF: Yes. RY: CICE writes to a single file? RF: Yes, through one processor. RY: Can we do it like MOM, each processor writes data to its own file? NH: Yes, good idea, but more complicated than MOM. CICE tiles are not located close to each other in space. RF: Could use the PIO interface. Not compatible with the centrally installed netCDF libraries. Bugs in the version of HDF. Need OpenMPI > 1.10.4 and netCDF > 4.6.1. MW: PIO is a good candidate, RY can help. Are the CICE developers looking into this? Stayed in touch with them? NH: Look at the CICE6 GitHub. RF: Looked, but no active development on IO in any fundamental way.
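A sketch of the post-processing route NH suggests: write uncompressed from the model, then recompress offline. The compression level here is just a starting point, not a tuned value:

    # Illustrative offline recompression of a CICE history file.
    import xarray as xr

    def recompress(infile, outfile, complevel=1):
        """Rewrite a netCDF file with zlib compression applied to all data variables."""
        ds = xr.open_dataset(infile)
        encoding = {name: {"zlib": True, "complevel": complevel}
                    for name in ds.data_vars}
        ds.to_netcdf(outfile, encoding=encoding)

This keeps the serial gather-and-write inside the model as cheap as possible and pays the compression cost outside the coupled run.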
NH: If we did decide to go that way, good opportunity to feed that back to CICE community.
MW: NCAR as a developer of PIO, keen to get it into other models. If CICE is on their radar might get some feedback there. RY: MOM has IO layer a bit like PIO. MW: Not a good idea to use PIO in MOM6.
RY: Tried PIO in MOM and found it was not a good candidate. MW: Yeah, MOM6 was already doing something like that.
RY: Parallel compression will be supported in future in netCDF.
RY: Been experimenting with my own version of library and got some positive results.
End result: take compression out, take out the static fields, do it in post-processing. Is anyone using daily fields? RF: We’re interested in daily ice fields, for data assimilation. MW: Shorter runs though? RF: 20 years.
NH: Instead of writing individual daily files, should write to a single file; static fields won’t be replicated, and we might benefit from some netCDF buffering. AH: Big code change? NH: Not sure. AK: It has a file naming convention for different frequencies; the frequency is part of the filename. NH: Are you saying it could already output daily data into monthly files? AK: No, the filename encodes time and frequency. It doesn’t seem to write repeatedly to any of its output files. AH: Define an unlimited dimension.
NH: Make a GitHub issue. If high priority could get some time. MW: Make the issue in the CICE repo, inform them what we’re doing. They mentioned an NCAR community board.
AH: Make a namelist option and recompile? Compression level as option?

Tenth load balancing

AK: RF suggested a smaller core count of 799. Doesn’t change wall time, which is a win. How low can we go? RF: Worked out a few more configs. With a slight change of tile size, 720 would be ok: 36×36 or 40×30. Running some quick tests with a tool under /short/v45/masking. It runs and outputs the masks and where tiles get located, and also the number of processors/blocks you need. AH: Put the code on the COSIMA GitHub? RF: Just a quick little thing. AH: Yes, but useful.
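An illustrative sketch of the kind of calculation such a tool performs: split the land/sea mask into CICE blocks and count how many contain any ocean, i.e. how many blocks actually need a processor. The file and variable names are assumptions:

    # Illustrative only: count CICE blocks containing at least one ocean cell.
    import netCDF4
    import numpy as np

    def count_active_blocks(kmt_file, block_x=36, block_y=36):
        """Count blocks of size block_y x block_x that contain ocean."""
        with netCDF4.Dataset(kmt_file) as ds:
            kmt = ds["kmt"][:]   # assumed: 0 over land, >0 over ocean
        ny, nx = kmt.shape
        return sum(np.any(kmt[j:j + block_y, i:i + block_x] > 0)
                   for j in range(0, ny, block_y)
                   for i in range(0, nx, block_x))

Trying a few block sizes with something like this is a quick way to see how a 720- or 799-rank layout falls out before running the model.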
AH: Down from 1380. Big win. Total core count? AK: Not sure. RF: Total just over 5000. AH: Still running on normalbw? AK: Yes. AH: Wait times on normal are crazy. RF: Look at skylake? Usually empty. RY: Yes, new nodes, but not a large total core count. AK: Get 6 months/submit without daily outputs. With daily ice output it goes over by 30-45 mins. dt=600s.
NH: If there’s no-one else to fix it, assign NH to the issue.

WOMBAT

RF: Got Matear up to speed. Ran a few tests. One or two bugs yet to be fixed. A couple of fields weren’t coming through from OASIS properly; it was the ice field that wasn’t coming through correctly. Got it going with external fields forcing it. Figured out the changes to get it running properly in full ACCESS mode. Running some test cases after the bugs are fixed. MC: Now running with calculated gas exchange coefficients. RF: That is the way it was originally written, the way fields were ingested into MOM. MC: Using the same wind field in BGC and wind mixing? RF: Yes, all together. MC: Level of the wind? In ACCESS-ESM it was getting the lowest atmospheric wind. MC: CICE will send a 10m wind through OASIS? RF: Not the FMS coupler, this is just the OASIS 10m wind. MC: In the ACCESS-ESM case?
AH: Hakase could be used as a guinea pig. Any of these changes affect ACCESS-CM2? RF: Shouldn’t. AH: Do we need to do any bit repro tests? RF: Shouldn’t change anything.

migrating FMS to an external library

AH: I put my hand up to do the change and test.
MW: FMS updated to Xanadu a couple of weeks ago. AH: So a good time to try it out. MW: Already tried it, put some MOM patches in to fix some issues. AH: On the GFDL FMS repo? MW: They have opted not to take the parallel netCDF using MPI IO patch RY and I worked on. Have set up a branch with parallel IO, and Xanadu has been merged into that branch. May want to use branch with parallel netCDF extensions. Ongoing conversation with this. They may merge it in. Can use what you want. Your call as to what to use.
RF: Any whitespace issues? MW: FMS and MOM6 live on different planets. They don’t interact much. Don’t collaborate with FMS guys.
MW: Alistair is getting miffed at the red buttons on the Jenkins server. He/I will look at some GFDL-independent solution. Happy for NH to be involved as much or as little as he wants. NH: They should be more blue than red. MW: Happened in March due to checksumming? NH: Bitrot, Jenkins is fragile. Scott often fixes it. Good idea, happy to help in any way. May be easier to set up on raijin. It does one qsub and runs them all under one submission. MW: slurm is sort of designed to do that. NH: slurm is awesome. MW: slurm is better. NH: I like it a lot more. MW: Good for running multiple jobs per submission. Blurs the line between MPI and the scheduler. Some sort of meta-scheduling: place jobs on ranks within the request. AH: More flexibility.

Actions

  • Update MOM build to use external FMS library (CMake) – AH
  • Finish WOMBAT integration – RF
  • Make CICE compression issues – AK