Technical Working Group Meeting, April 2020

Minutes

Date: 29th April, 2020
Attendees:
  • Aidan Heerdegen (AH) CLEX ANU
  • Andrew Kiss (AK) COSIMA ANU, Angus Gibson (AG) ANU
  • Russ Fiedler (RF), Matt Chamberlain (MC) CSIRO Hobart
  • Rui Yang (RY), Paul Leopardi (PL) NCI
  • Nic Hannah (NH) Double Precision
  • Marshall Ward (MW) GFDL

Apologies from Peter Dobrohotoff.

JRA55-do v1.4 support

AK: Staged rollout. NH tagged some branches: existing master tagged 1.3.0, using old JRA55-do; 1.3.1 uses NH's new exes, which also support v1.4.
AK: Also working on a new feature branch for 1.4. Same exes configured to use JRA 1.4 version. Seems to run ok. Not looked at output. Will look at that today. Once satisfied that is ok will move into master, tag 1.4.
AK: Also looking at ak-dev branch with a wide variety of changes. Once this is ok will tag with a new ACCESS-OM2 version. Will be new standard for new experiments. Good to make an equivalent point across repos.
AH: COSIMA cookbook hackathon showed the value of project boards. Might be a good idea next time something like this is attempted. AK: Tried, but it didn't go anywhere.
NH: Two freshwater fields coming from forcing, liquid and solid. Both go into the ice model, which accepts one new forcing field. They get added together: solid magically becomes liquid without heat changes, and is passed straight to the ocean. Ocean and ice models have also been changed to accept the liquid part of land ice melt and the heat part of land ice melt. These exist but just pass zeroes; extra engineering not being used as yet. A harmonisation step which takes us closer to CM2, as the coupled model uses these fields.
RF: With my WOMBAT updates incorporating this new code, could get rid of ACCESS-CM preprocessor directives.
NH: In the future can put work into calculating those fields correctly in the ice model. Not a huge amount of work. Will then have river runoff, land ice runoff and land ice melt heat.
NH: New executables have another change, support different numbers of coupling fields. Land/ice coupling fields are optional. At runtime figures out what coupling fields used. Dependent on namcouple being consistent. Coded internally as a maximum set of coupling fields. You can take coupling fields out but not add new ones. Possibly useful for others. Not a fully flexible coupling framework.
NH: Working on ak-dev branch. Harmonising namcouple files. They have a lot of configuration fields, but a lot are ignored. Could use the same namcouple in all configs, but in practice might leave them looking a little different. They include the timestep, but it is ignored. Could set to zero? AH: Or a flag value that is obviously ignored?
NH: Only three variables used in namcouple. Rest ignored, but must parse properly. Needs cruft to make it parse. Never liked namcouple. Completely inflexible: values must be changed in multiple places.
AK: We are on OASIS3-MCT v2; have they improved it in the new version?
NH: Can now bunch fields together, pass a single 3d field instead of many 2d fields. Should improve performance. RF: Not through namcouple at all. Just a function call.
MW: What does OASIS do now? NH: Just doing routing. Which is done by MCT anyway. Remapping done by ESMF. Coupler meant to do 3 things, config, remap and routing. Made libaccessom2 do as much as possible automatically. So OASIS does very little. Still using API, so would require effort to remove.
MW: Know about NUOPC? NCAR is using it. NH: Coupling API. If all use the same API then can go plug and play. MW: MOM6 has a NUOPC driver. NH: In the future would like to look at OASIS4, but probably just chuck OASIS, use MCT to do the routing and ESMF to do remapping. MW: NCAR dropped MCT. NH: MCT is a small team. AK: Something that would suit ACCESS-CM. Any critical things that rely on OASIS? MW: At the mercy of the UM. Probably still use OASIS due to Europe. NH: Not using ESMF, so using OASIS a lot more than we are. Might never change because of that. AK: Even moving to v4 would require coordination with CM2. NH: Nicer and cleaner, but no clear benefit.

Updated ACCESS-OM2 model configs

AK: 3 different tags: 1.3.1, 1.4 in the works, and a new ak-dev tag. 1.4 intended to be minimal other than the change in JRA55-do version; ak-dev making more extensive changes. Using mppnccombine-fast for tenth: output compressed data and use fast collation. Not worthwhile for 1 deg. With 0.25, output uncompressed and use mppnccombine to do compression. Hopefully output will be a reasonable size.
AK: If outputting uncompressed, restarts might get large. Might want to collate restarts. Wanted to verify which run's restarts are collated: the run just finished, or the previous one? AH: It is the previous run's restarts, which are not used in the next run.
AH: Because quarter degree is not compressed, won't get the inconsistent chunk sizes between different sized tiles. Ryan had the problem when he had an io_layout with very small chunk sizes which made his performance very bad. mppnccombine-fast might be faster, and will definitely use less memory. Still got compression overhead but memory use much reduced. AK: Not such a big issue as tenth. AH: Paul Spence had some issues with the time to collate his outputs. Maybe because they were compressed. Would recommend using it. AK: Fast version will always be faster? AH: Yes, at least no slower, but definitely uses less memory and will be much faster with compressed output.
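For reference, a minimal sketch of this collation setup as a payu config.yaml fragment, assuming the collate option names in payu at the time (the executable path is a placeholder):

    collate:
        enable: true
        mpi: true                          # mppnccombine-fast runs under MPI
        exe: /path/to/mppnccombine-fast    # placeholder path
        restart: true                      # also collate the superseded restarts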
MW: No appetite for FMS with parallel IO? AH: Compression? Without it, probably won't bother. RY: Did some tests on parallel IO compression. Can't recall the results. Interested to try again. Requires a bit more memory. gadi has Optane as storage or as memory. Interesting to test. Probably can use that for parallel compression or even just serial compression. Thinking about it, but haven't started. AH: Please keep us updated.
NH: Anyone have thoughts on CICE? Planning on parallel IO in CICE. Are we going to need a compression step? RF: With daily output would like compression. Post-processing to do compression on a smaller number of PEs would be fine. Improving IO is critical for Paul Sandery and Pavel. NH: Might need a post-processing step similar to MOM. RF: Yes. Getting parallel IO is the most important. Worry about compression later. NH: Did a run yesterday with parallel IO. Completed successfully. Output was garbage. Was expecting to do heaps of work and hit segfaults. Surprised at that. RF: Misaligned or complete garbage? NH: Default assumption is as bad as can be. Just used the parallel IO output driver in CICE. AK and RF realised daily CICE output was a bottleneck on 0.1 performance. As the model code existed, decided to get it working. RY: Parallel IO needs the mapping set up correctly between compute and IO domains. NH: Should be part of the current implementation. Mapping is a tricky part of CICE. AK: Values out of range, so maybe not just a mapping issue? NH: Completely broken, but not segfaulting. Just getting it building was one hurdle. Also had to call the right initialisation stuff within CICE. Had to rewrite some of it that was depending on another library from one of the NCAR models (CESM). CICE is used with CESM and they had a dependency on another utility library. Changed some code to remove the dependence. Relatively positive. Library under active development and well supported. AH: Did they develop it just for their use case, and maybe it doesn't support round-robin? NH: Not sure. We do know it has never been used in any model other than CESM.
MW: Ed Hartnett (PIO) eager to get into FMS. Also lead maintainer of netCDF4.

Status of WOMBAT in ACCESS-OM2

RF: Compiled. Next is testing. Up to current ACCESS-OM2 code changes. Had issues with submodules. AK: Previously libaccessom2 dependencies were brought in through CMake, now moved to submodules. If you have an existing repo you will have to initialise submodules to pull in the latest from GitHub.
RF: Made some changes to installation procedures. Can go between the BGC version or standard ACCESS-OM. Want it to be different for the BGC version. Changes to install scripts and hashexe etc. AH: Good that it is up to date, could have been a messy merge otherwise. RF: Will run tests today or tomorrow.

MOM5 PR from GEOS-ESM

AH: Seen this PR? Seemed a bit odd to me. First idea was to ask them to split the PR into science changes and config changes. RF: Looked like a lot of it was config changes. MW: Adding the GEOS5 stuff, which they shouldn't. Code changes are challenging. Introduced a generic tracer, not sure what they're doing with it. AH: Strategy? Ask them to wrap science stuff in preprocessor flags? MW: First step is to get the config stuff out. Asked GFDL about it. GEOS are switching from MOM5 to MOM6. This must be associated with that effort, to validate their runs. Maybe just giving back what it took to get it to work. Maybe it just makes their build process easier. AH: They have a specific requirement to use the same FMS library. Seems odd, as MOM5 and MOM6 are not likely to share FMS versions in the future. MW: Thorny topic, as it is not clear how FMS compatible MOM6 will be in the future. AH: Using FMS for less and less. MW: The PR needs to be cleaned up. AH: Also put in a CMake build system. MW: They need to explain more.
AK: Has conflicts, so can't be merged at the moment. AH: Only going to get more conflicted, which is why I was thinking they could split it up. I have a CMake build system in another branch, but never finished it. If we can use theirs, cool. I'll engage with them.

Miscellaneous

AH: Been experimenting with graceful error recovery with payu. Can specify a script which can decide if the error is something you can just resubmit after. Mostly of interest to the production guys.
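A sketch of how such a hook might be configured, assuming payu's userscripts mechanism with an error entry (the script name is a placeholder):

    userscripts:
        error: ./check_resubmit.sh   # decides whether the failure is safe to resubmit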
PL: Scalability testing with land masks, manifests, and payu setup. Supposed to be simpler but taking some time to get used to it. AH: Manifests are relatively new so some of the use cases have not been as well tested. MW: Are not all using manifests? AH: They are, but they can be used in different ways. Tracking always works, but there are options to reproduce inputs and runs. Suggested PL could use reproduce to start a run. It was confounded by some restarts being missing, so not quite sure if it works as we would like. This is a very desirable feature, as it makes it very simple to fork off new runs from existing ones as well as making sure the files are consistent. PL: Working now. Next step is to change core counts and look for scalability numbers. AH: When I was doing scalability stuff for MOM-SIS I used input directory categories to isolate processor changes. Not quite doing that same thing anymore, but you can do something similar; you just won't want to use the reproduce flag if you are changing any of the input files.
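A hedged sketch of the reproduce settings being discussed, using payu manifest option names believed current at the time:

    manifest:
        reproduce:
            exe: true       # refuse to run if the executable hash changes
            input: true     # refuse to run if any input file changes
            restart: false  # relax when forking from another run's restarts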
AK: Just MOM scaling or CICE as well? PL: Just looking at MOM to begin with, to see dependency and wait times. AK: CICE run time is critically dependent on daily outputs. Relevance of scaling data depends on production output. MW: Make sure your clock can tell them apart. In principle can distinguish compute from IO. AH: Daily output always part of production? AK: Ice modellers want very high temporal output. Ice is very dynamic. Even daily output is not enough to resolve some features. Maybe wait for PIO for CICE scaling tests? AH: I thought scaling tests always turned off IO? Can't properly test scaling with daily output, as it dominates runtime.
NH: Would be nice to look at performance with and without PIO. PL: Will also look at CICE. Start with ocean model. AK: Were you (MW) running models coupled for paper scaling numbers? MW: Coupled. Not sure what IO was set to. Subtracted it and don’t recall it was large. Don’t recall a bottle neck, so might have had it turned off. RF: Wouldn’t be running with daily IO. Monthly IO doesn’t show up. MW: sounds likely.
AK: For IAF had a lot of daily CICE output. Not complete set of fields.
MW: Starting to run performance tests at GFDL and want to use payu. Has it changed much? Manifest stuff hasn’t made a big difference? Will have to get slurm working. Filesystem will be a nightmare. You moved PBS stuff into a component? AH: No, you did that. Not huge differences. Will be great to have slurm support.

Technical Working Group Meeting, March 2020

Minutes

Date: 18th March, 2020
Attendees:
  • Aidan Heerdegen (AH) CLEX ANU
  • Matt Chamberlain (MC) CSIRO Hobart
  • Rui Yang (RY), Paul Leopardi (PL) NCI
  • Nic Hannah (NH) Double Precision
  • Marshall Ward (MW) GFDL

Scalability of ACCESS-OM2 on gadi

(Paul’s report is attached at the end)

PL: Looking at scaling. Started with ACCESS-OM2, but went to testing MOM5 directly with MOM-SIS. Using POM25, a global 0.25 model with NYF forcing; the model MW developed for testing scaling prior to ACCESS-OM2. Had to specify min_thickness in ocean_topog_nml.

PL: Tested the scaling of 960/1920/3840/7680/15360, with no masking. Scales well up to some point between 7680 and 15360.

PL: Tested the effect of vectorising options (AVX2/AVX512/AVX512-REPRO). Found no difference in runtime with 15360 cores. MW: Probably communication bound at that CPU count. Repro did not change time.

MW: Never seen significant speed up from vectorisation. Typically only a few percent improvement. Code is RAM bound, so cannot provide enough data to make use of vectorisation. Still worth working toward a point where we can take advantage of vectorisation.

PL: Had one “slow” run outlier out of 20 runs. Ran 20% slower. Ran on different nodes to other jobs, not sure if that is significant. MW: IO can cause that. AH: Andy Hogg also had some slow jobs due to a bad node. AK: Job was 20x slower. Also RYF runs became consistently slower a few weeks ago. MW: OpenMPI can prepend timestamps in front of output, which can help to identify issues.
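A hedged example of the OpenMPI output timestamping MW mentions (the executable name is a placeholder):

    mpirun --timestamp-output -np 960 ./model.exe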

PL: Getting some segfaults in ompi_request_wait_completion, caused by pmpi_wait and pmpi_bcast, both called from the coupler. NH: Could be a bad bit of memory in the buffer, and if it tries to copy it, it can segfault. PL: Thinking to run again using valgrind, but that would require compiling own version of the valgrind wrapper for OpenMPI 4.0.2. Would be easier to use Intel MPI, but no-one else has used this. Saw some similar cases when searching, which were associated with UCX, but sufficiently different to not be sure. These issues are with the highest core count. MW: Often see a lot of problems at high core counts. NH: Finding bugs can be a never-ending job. Use time wisely to fix bugs that affect people. MW: Quarter degree at 15K cores would have very small tile sizes. Could be the source of the issue. AH: This is not a configuration that we would use, so it is not worth spending time chasing bugs.

PL: Next testing target is 0.1 degree, but not sure which configuration and forcing data to use. Will not use MOM5-SIS, but will use ACCESS-OM2 for direct comparison purposes. AK: Configurations used in the model description paper have not been ported to gadi. Moving on to a new iteration. Andy Hogg is running a configuration that is quite similar, but moving to new configurations with updated software and forcing. Those are not quite ready.

PL: Need a starting configuration for testing. Want to confine it to scalability testing and compiler flags. NH: ACCESS-OM2 is set up to be well balanced for particular configurations. Can't just double CPUs on all models, as load imbalance between submodels will dominate any other performance changes. Makes it a problematic config for clean comparisons of things like compiler flags. MW: A useful approach was to check scalability of sub-model components independently. Required careful definition of timers to strategically ignore coupling time. MOM was easy; CICE was more difficult, but work with Nic's timers helped a lot. Try to time the bits of code that are doing computation and separate from code that waits on other parts. The coupled model is a real challenge to test. Figure out what timers we used and trust those. Can reverse engineer from my old scripts.

PL: Should do MOM-SIS scalability work? MW: Easier task, and some lessons can be learned, but runtime will not match between MOM-SIS and ACCESS-OM2. Would be more of a practice run. PL: Maybe getting out of scope. Would need 0.1 MOM-SIS config. RY: Yes we have that one. If PL wanted to run ACCESS-OM2-01 is there a configuration available? AK: Andy Hogg’s currently running configuration would work. PL: Next quarter need to free up time to do other things.

MW: Might be valuable to get some score-p or similar numbers on the current production model. Useful to have a record of those timings to share. A scaling test might be too much, but a profile/timing test is more tractable. RY: Any issues with score-p? Overhead? MW: Typical, 10-20%, so it skews numbers but you get an in-depth view. Can do it one sub-model at a time. Had to hack a lot of scripts, and get NH to rewrite some code to get it to work. score-p is always done at compile time. Doesn't affect payu. Try building MOM-SIS with score-p, then try MOM within ACCESS-OM2. Then move on to CICE and maybe libaccessom2. PL: Build script does include some score-p hooks. MW: Even without score-p MOM has very good internal timers, but not per-rank times. score-p is great for measuring load imbalance. AH: payu has a repeat option, which repeats the same forcing period, removing variability due to forcing. Need to think about what time period you want to repeat as far as season goes. AK: CICE has idealised initial ice, evolves rapidly. MW: My earlier profile runs had no ice, which affects performance. Not sure it is huge, maybe 10-20%, but not huge.
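score-p instruments at compile time by prefixing the compiler commands with its wrapper; a hedged sketch of what that looks like:

    # wrap the usual compilers with the score-p wrapper at build time
    FC="scorep mpif90" CC="scorep mpicc" make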

MW: Overall surprised at lack of any speed up with vectorisation, and lack of slow-down with repro. PL: Will verify those numbers with 960 core config.

AH: Surprised how well it scaled. Did it scale that well on raijin? MW: The performance scaling elbow did show up lower. AH: 3x more processors per node has an effect? MW: Yes, big part of it. AH: 0.1 scaled well on raijin, so should scale better on gadi. 1/30th should scale well. Only bottleneck will be if the library can handle that many ranks.

NH: If repro flags don't change performance that is interesting. We regularly have “what trade-off do repro flags have?” conversations; would be good to avoid them. MW: Probably best to have an automated pipeline calculating these numbers. NH: People have an issue with the fp0 flag. MW: Shouldn't affect performance. NH: Make sure fp0 is in there. MW: Agree 100%.

ACCESS-OM2 update

AH: Do we have a gadi compatible master branch on gadi? AK: No, not currently. NH: At a previous TWG meeting I self-assigned getting master gadi compatible. Merged all gadi-transition branches and tested, seemed to be working ok. Subsequent meeting AK said there were other changes required, so stopped at that point. gadi-transition branches still exist, but much has already been merged and tested on a couple of configurations. Have since moved to working on other things.

NH: Close if AK has all the things he wants into gadi-transition branch. Previous merge didn’t include all the things AK wanted in there. Happy to spend more time on that after finishing JRA55 v1.4 stuff.

JRA55-do v1.4 update

NH: Made code changes in all the models, but have not checked existing experiments are unchanged with modified code.

NH: v1.4 has a new coupling field, ice calving. Passing this through to CICE as a separate field. In CICE split into two fields, liquid water flux and a heat flux. MOM in ACCESS-CM2 already handles both these fields. Just had to change preprocessor flags to make it work for ACCESS-OM2 as well.

NH: Lots of options. Possible to join liquid and solid ice at the atmosphere, which becomes the same as we have now. Or can join in CICE and have a water flux but not a heat flux.

Strange MOM6 error

AH: A quick update on Navid's error. Made a little mpi4py script to run before payu to check the status of nodes, and all but the root node had a stale version of the work directory, like it hadn't been archived. The link to the executable was gone, but everything else was there. Reported to NCI; Ben Menadue does not know why this is happening. Also tried a delay option between runs and this helped somewhat, but also had some strange comms errors trying to connect to exec nodes. Will next try turning off all the input/output I can find, in case it is a file lock error. Have been told Lustre cannot be in this state.
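A hedged sketch of the kind of pre-run node check described (the executable link name and paths are placeholders):

    # run under mpirun before payu: each rank reports whether its view of the
    # payu work directory is stale (executable symlink missing)
    import os
    import socket
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    work = 'work'  # payu work directory, relative to the control directory
    state = (socket.gethostname(),
             os.path.isdir(work),
             os.path.islink(os.path.join(work, 'model.exe')))  # placeholder name

    gathered = comm.gather(state, root=0)
    if comm.rank == 0:
        for host, has_work, has_exe in gathered:
            if has_work and not has_exe:
                print(host, 'stale work directory: executable link missing')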

MW: The old driver does a lot of moving directories from work to archive, and then relabelling. Is it still moving directories around to archive them? Maybe replace with a hard copy of the directory to archive. The MOM6 driver is the MOM5 driver, so maybe all the old drivers are doing this. Definitely worth understanding, but a quick fix is to copy rather than move.

NH: Filesystem and symbolic links might be an issue. MW: Maybe symbolic links are an issue on these mounted filesystems. AH: There was a suggestion it might be because it was running on home, which is NFS mounted, but that wasn't the problem. MW: Often with raijin you just got the same nodes back when you resubmit, so maybe some sort of smart caching.

Scalability of ACCESS-OM2 on Gadi – Paul Leopardi 18 March 2020

[Report attached in the original; figures not reproduced here.]

Technical Working Group Meeting, November 2019

Minutes

Date: 27th November, 2019
Attendees:
  • Aidan Heerdegen (AH) CLEX ANU, Angus Gibson (AG) ANU, Andrew Kiss (AK) COSIMA ANU
  • Russ Fiedler (RF), Matt Chamberlain (MC) CSIRO Hobart
  • Rui Yang (RY), Paul Leopardi (PL) NCI
  • Nic Hannah (NH) Double Precision
  • Marshall Ward (MW) GFDL

ACCESS-OM2 on gadi

PL: Submodules not updated (#176). A reported bug is fixed in CICE5 but the fix is not being built. AK: Not sure how to release this. Sometimes model components are updated but not tested. AH: gadi transition branch? AK: Yes. PL: Science bug.
PL: To test had to copy files around. Needed to update config.yaml and atmosphere.json. Made fork of 1deg_JRA55_RYF for testing. Had to move to non-public places as don’t have access to public places. Will send details in an email.
PL: conda/analysis3-unstable needs to be updated; payu not working on gadi. AH: Did update, still not working. Update only tested in an interactive job. PBS job strips out the environment. Wanted to consult with Marshall about why payu works as it does currently. Difficult to debug, as payu-run does not have the same environment as “payu run”. PL: Work-around is to add the -V option to qsub_flags in config.yaml. AH: This is what I am considering changing payu to do by default. Not sure. Currently looking into this.
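That work-around as a config.yaml fragment:

    qsub_flags: -V   # pass the submitting shell's environment through to the PBS job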
PL: nccmp module not on gadi. Been using for reproducibility testing. In backlog. RY: Can install personally, don’t have to wait for system install.
PL: Running on gadi. Got 1 deg RYF55 finished. Did not have mppnccombine compiled. Will have to do this to get this working correctly. Got something for baseline for comparison. Report by the end of the week.
RY: gadi 48 cores. Default based on broadwell (28 cores). Do you have an up to date config? Paul currently changes core count in his config, but is it done in official config?
AH: I was in the process of making an official configuration for gadi. Copied all inputs that were in /short/public to the ik11 project. Once directory structure finalised will make a config that runs, update on GitHub, and look at making the same changes for other configs. Make an exemplar config with those changes. RY: Should work on same configs.
RY: Anyone else running on gadi? AH: No.
AH: What are the impediments to others updating ACCESS-OM2 on GitHub? People not sure if they can? How they should go about it? AK: Put my hand up to do this. Other model components also need updating. AH: Maybe a dev branch that everyone pulls from. Easier to make changes without worrying about breaking things. Then everyone is working from the same version and doesn't have to re-fix known bugs.
AH: Environment stuff? MW: Something about the python exec command. Nuance? Wholesale copy everything? Wanted to create idealised processes, rather than depend on what users have set up. payu run submits the job to PBS with a whole new environment. Explicitly give environment variables.
AH: Drawback is payu-run does not use the same environment as payu run. MW: Not launching a process. payu run submits to PBS and starts a posix process with a defined environment, except when explicitly given environment variables. AH: One work-around is to make a list of environment variables we want to keep. Losing MODULEPATH variables. PL: The module env being used by payu required modules 3. Modules 4 works differently. Python code from modules 4 may work better.
MW: Fixed? AH: Thought I had, but was fooled because using payu-run. MW: If you set MODULEPATH locally, it won’t be exported to payu run process.
PL: What is the fix? MW: On raijin there was a bootstrap script in the init dir, which sets everything. I duplicated those commands and put them in the payu module, which did the equivalent bootstrap. If moving to gadi and it is different, none of that bootstrap script works. PL: Bootstrap script is there, but completely different. MW: Was an old version, and never actually used the bootstrap script. Maybe exec the bootstrap script they provide? AH: Or pass through environment variables that are set already. MW: Do whatever you think is best. Did try to make it so the ‘payu run’ job was clean and always looked the same regardless of who submits. If we take the entire ENV and submit to run, every run will be different. One variable is a controlled solution. It should be possible for the job on the submit node to set itself up on its own. Should get it going and not be held up by my purist notions. AH: Try/except blocks can be used to support multiple approaches. MW: Definitely need to bootstrap the modules. PL: Sent through email with details.

OpenMPI/4.0.1 on gadi

AH: Angus reported openmpi/4.0.1 seems broken. Has this been fixed?
AG: Any wrapped commands (mpicc, mpifort) will print whitespace before output. In most cases ok, but can break configure scripts. Ben M knows about it, but not why.
PL: Divide by zero error in MPI_Init. MW: Remember that one: UCX back-end, FP exception. Evaluates a log function in a binary tree when working out communication. Ben M told them about it, but got nothing back. We use FP exception checking, but can't ignore it for just MPI. PL: Work-around, like turning off UCX? MW: Could turn off FP exceptions. A race condition, so not every job sees it. RY: Can turn off UCX. Can use ob1 instead of UCX. Also try that. PL: Wasn't sure it would work on gadi.
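A hedged example of selecting ob1 instead of UCX, using OpenMPI's MCA environment variable convention:

    export OMPI_MCA_pml=ob1   # use the ob1 point-to-point layer rather than UCX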
AH: Maybe 4.0.1 not a good candidate for testing? Get intermittent crashes.

Russ update on model performance on gadi

RF: Been testing OFAM bluelink, compiled as MOM-SIS without doing ice. Performance was fantastic. 2x faster than Sandy Bridge. Don’t get hammered with extra cost on new CPUs. Initialisation was very fast. A lot of files, so might be a low load issue. Dropped from 100s to 8s. Doing data assimilation runs, run 3 days at a time. 25% of the run time was init. Now pretty much zero. MOM5 performance was really good.
RF: Did notice some variation on start up of CM4. Still a lot faster. Reads in a lot more files and a lot more data. Still considerably faster than on raijin. MW: MOM has IO timers, do you have those on? FMS timers. Rui used them a lot. RF: No, didn’t turn them on.
RF: Running CM4 was about 15% faster than Broadwell. Improved, but will cost a lot more for decadal prediction. RY: 15% is normal. Martin reported the UM is 30% quicker. RF: SIS2 load balance is bad. Probably a bunch of things being covered up. Needs more testing.
MW: Bob has never talked about SIS2 load imbalance. Presumably oblivious to them. RF: Would have to be. Regular layout would lead to many redundant processors. MW: Alistair has done some iceberg code load balance improvements. RF: Doesn’t take much time. Had to turn off iceberg stuff on raijin. netcdf stuff broke it. Might turn back on. Time spent in iceberg code minimal.

Stack array errors and heap array option

RF: When compiling need to set the heap-arrays option in the compiler, otherwise get segfaults from the stack, even when stack size is set to unlimited. Wasn't an issue on raijin. Happened for both MOM5 and CM4. PL: Dale mentioned stack size being limited to 8 MB. RF: I set stack size to unlimited, so that shouldn't have been an issue. Got all sorts of issues with unmapped addresses. The first one I saw was an automatic array, so tried moving to allocatable, which moved the error. Then tried different heap-arrays size options, which moved the error again. MOM5 dropped to heap-arrays 5KB. Same for CM4, but set to zero for SIS2 and it got through. Different models, seems ubiquitous. MW: Intel fortran?
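The Intel flag in question, for reference (the file name is a placeholder):

    ifort -heap-arrays 5 -c ocean_model.F90   # temporaries larger than 5 KB go on the heap
    ifort -heap-arrays -c ocean_model.F90     # no size: all such arrays go on the heap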
MW: When compiled and run on CRAY machines stack vars use malloc, so they are heap variables, not stack. Same model, same compiler on laptop (gcc), same variables are stack variables. Is it possible that moving from raijin to gadi changed something about malloc? RY: CentOS 7 vs 8 makes some difference. MW: Is the kernel making some decisions on malloc? RY: Had similar issues with UM. Stacksize unlimited seemed to fix it for UM. But Dale talked about this in an ACCESS meeting; the kernel changed something that caused this problem.
NH: Intel compiler has an option to always put arrays on the heap. Useful in some cases. Models can have array bounds overruns, which are easier to track when they trash the heap compared to the stack. RY: Slower? NH: Depends. Doesn't do it for everything, just the larger arrays. RF: If you just set heap-arrays, all go on the heap. Can control it. MW: In MOM6 there are explicit places where we declare variables we know we won't use, contingent on the assumption they are stack vars. Can't make those assumptions any longer.
NH: Surprised to hear it's the linux kernel. Would have thought it was the Fortran runtime or compiler. MW: Runtime or libc. Couldn't figure out why different results with the same compiler on different platforms. NH: When calculating variable addresses, the compiler computes stack offsets. Looking at the executable there are static offsets. Needs to be done at compile time. MW: Shouldn't be running models that need to use the heap. Should be resilient to either choice. No? NH: Comes down to the algorithms used to manage memory. The heap has an algorithm to minimise fragmentation. Don't have an answer, will need to think about it.
MW: Can you send a bug report for SIS2? RF: Could be everywhere that has run out of stack space. Just the first one I tried to fix this.
AH: What OS are you running on your laptop? MW: Archlinux. Comparing them to the travis VMs. AH: At some point the compiler has to query the system to see what resources are available? MW: The fact that you're typing stacksize unlimited shows you are accessing the kernel. AH: Seems strange, the system has plenty of memory. MW: I'm interested in this problem. AH: Problem should be reported to the relevant NCI people (Dale/Ben?). Potentially affecting a lot of codes. Not tenable that everyone who has this issue has to debug it themselves. MW: Bad memory explicit in stack, buried in the heap? NH: Can make a huge difference. Layout of memory is different. More likely something on the heap won't affect other variables. More fragmented on the stack; heap memory more tightly packed. MW: Fixed a couple of dozen memory access bugs in MOM6 and they take it seriously. RF: Old versions I'm using are from the CM4 release. Happens with MOM5. Only FMS is common. MW: Wondering if this is a bug that is hidden by moving from stack to heap.
MW: Using GCC 9.0 to find these. A few flags to find stuff. Initialise with NaNs. malloc-perturb is an environment variable you can turn on and that helps. Turn on signalling NaNs: any FP op generates an error now. Finds a lot of zeroes in bad memory accesses that didn't trigger errors. Trying not to use valgrind, but that would work also.
RF: There is a switch in GCC that does something similar to valgrind. Puts in guards around arrays. MW: Don't know the explicit option; using -Wall turns it on for me. GCC 9.0 is very aggressive at finding issues in a way that 5/6/7 were not.
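Hedged examples of the kind of GCC/glibc options being described (not necessarily the exact set MW used):

    # gfortran: signalling-NaN initialisation, FP exception traps, array guards
    gfortran -g -Wall -fcheck=all -finit-real=snan -ffpe-trap=invalid,zero,overflow -c mod.F90
    # glibc: fill allocated/freed memory with a known byte pattern
    export MALLOC_PERTURB_=42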
AH: Same compiler on raijin and gadi, see if gadi only issue. RF: Not sure if it was the same version of 2019 I was using. AG: One overlapping compiler 2019.3. RF: Recently recompiled MOM-SIS build. Will look and see if it is the same. AH: Useful data point if same issue is gadi specific.

Update on BGC

AH: Andy Hogg has asked for an update. People at Melbourne would like to use it. RF: On my desk with Hakase. Been promising. Will prioritise. Almost there for a while. Been distracted with gadi. On the to-do list.
MC: Do we know who in Melbourne wants to use it? AH: A student, not sure who.

New projects to support COSIMA and ACCESS-OM2 on gadi

AH: /g/data/ik11 is where inputs that were on /short/public will now live. Not sure exactly how this will be organised. Will most likely have input and output directories. Might be some pre-published COSIMA datasets there. Part of a publishing pipeline. AK: Moving data from scratch to this as a holding area? AH: People were using datasets from hh5 that had no status, not sure how to reference them.
AK: Control directories are separate, and not well connected to the data on hh5. Nice to have ways to link things more firmly. AH: A to-do for payu is experiment tracking IDs. Generate UUIDs as unique identifiers for experiments. Will go in the metadata file. Not linked to the git hash. If they don't exist, make new ones. AK: Have data on hh5 where the control directories have been moved or deleted. Lose the git history of the runs that were used to generate the output. AH: Nothing to stop that all being in the same directory. Nic has advocated this for some time. Could change the way we do things. AK: Not sure on the solution, but flagging it as an issue.
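A minimal sketch of the tracking-ID idea, assuming a metadata.yaml file and a hypothetical experiment_uuid field name:

    # generate a persistent experiment UUID in metadata.yaml (field name is an assumption)
    import uuid
    import yaml  # PyYAML

    META = 'metadata.yaml'
    try:
        with open(META) as f:
            meta = yaml.safe_load(f) or {}
    except FileNotFoundError:
        meta = {}

    # only create a new ID if one does not already exist
    meta.setdefault('experiment_uuid', str(uuid.uuid4()))

    with open(META, 'w') as f:
        yaml.safe_dump(meta, f)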
AH: The published dataset from the COSIMA paper is almost ready. The new location for COSIMA published data will be cj50. To do this publishing have created a python/xarray tool to create a published dataset from raw model data. Splits data into separate files for each variable, a year per file in most cases. Needs a specific naming convention for THREDDS publishing. Using xarray it doesn't matter what the temporal range of each model output file is. Uses pandas style resampling to generate outputs. In theory simple; in practice there are many many exceptions and specific tweaks to be standards compliant. The same tool can handle MOM and CICE outputs, which are different models with radically different file metadata and layout. If you have something that you might find it useful for, it is called splitvar. Also made a tool called addmeta for adding metadata. Do the metadata modification as a separate step as it is always fiddly. Uses yaml formatted files to define metadata. The metadata for the COSIMA data publishing is available.
PL: Published data is netCDF format with all the correct metadata? AH: MOM doesn't put much metadata in the files. One way to make a better connection between runs and outputs is to insert the experiment tracking ID mentioned above into the files. Would be nice to put that into a namelist so that MOM could put it in the file. Best option, and if anyone knows how, I would like to know. Another option is a post-processing step on all the tiled outputs. MOM isn't the only model we run. Not all output netCDF. Would be nice if there was a consistent way for payu to do this. COSIMA published data should be up before the end of the year.
PL: Will ik11 replace hh5 and v45? AH: hh5 is storage space that is part of an ARC LIEF grant from the Australian climate community. The CoE CMS team was tasked with managing this, and people could ask for temporary storage allocations. In practice it is harder to get people to remove their data. COSIMA was one of the first to ask for an allocation, but it has somewhat outgrown the original intent of hh5, as it has been there for a long time and grown quite large. hh5 might still be used for some model outputs. Not sure. ik11 started because we needed somewhere to put common model inputs/exes because /short/public went away and /scratch/public is ephemeral. /scratch space is difficult to utilise because of its ephemeral nature. NH: Have some experience with /scratch space at Pawsey. Once you lose data you make sure you have a better system so that your data is backed up. Possibly a good thing. AH: Doesn't suit the workflow people currently use, where they come back and run some more of a model after a break. Suits workflows that create large amounts of data and then do a massive reduction and only save the reduced dataset. Maybe suits the ensemble guys. With our models everything we create we want to keep. NH: Doesn't all the model output go to scratch? AH: Yes, but model output doesn't get reduced, so we end up having to mirror the data.

Technical Working Group Meeting, September 2019

Minutes

Date: 11th September, 2019
Attendees:

  • Aidan Heerdegen (AH) CLEX ANU, Andrew Kiss (AK) COSIMA ANU
  • Russ Fiedler (RF) CSIRO Hobart
  • Rui Yang (RY) NCI
  • Nic Hannah (NH), Double Precision

libaccessom2

AK: JRA55 v1.4 splits runoff into liquid and solid. Most elegant way to support? Have a flag in accessom2 namelist to enable combining these runoffs. NH: Is it a problem in terms of physics? Have to melt it? AK: Had previously ignored this anyway, so ok to continue. NH: Backward compatibility!
AK: Some interest in multiplicative scaling and additive perturbations to allow for model perturbation runs. NH: Look at the existing code. Might not be too hard. AK: Test framework for libaccessom2? NH: When I did the scaling it took longer to write the test than to make the code change. All there, could use it as an example. Worth running the tests, don't want to get it wrong. AK: Not familiar with pytest. NH: In this case just copy the scaling test, modify it, and get pytest to run just that test. Once you've got just that test running and passing you're done.
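For reference, pytest can be restricted to a single test by file and name pattern (the file and test names here are hypothetical):

    pytest test_forcing.py -k scaling   # run only tests whose names match 'scaling'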
AK: New JRA55 now in Input4MIPS. Used JRA v1.3 from that directory and didn’t reproduce. AH: Correct. Didn’t work out why it wasn’t reproducing. AK: Ingesting the wrong files? Should be identical. AH: Never figured out what was wrong. Didn’t match checksums from historical runs. Next step was to regenerate those checksums to make sure the historical ones were correct. Could have been ok, but didn’t get that far.
AH: JRA55-do is now on the automatic download list, should be kept up to date by NCI. If it isn’t let us know.
NH: Liquid and frozen runoff backwards compat, but what about future? AK: Some desire to perturb solid and liquid separately, and/or distribute solid runoff. NH: Can we just put it somewhere and allow model to deal with it. AK: In terms of distributing it, not sure. Some people are waiting on this for CMIP6 OMIP run. Leave open for the future. NH: MOM5 doesn’t have icebergs? AK: No. Depoorter et al. has written a paper for meltwater distribution. Maybe use a map to distribute. RF: What they use for ACCESS-CM2. Read in from a file.
AK: Naming convention for JRA55 v1.4 has year+1 fields. Put in a PR some time ago. AH: Problem with operator in token? NH: Should be fine as long as within quotes. AK: Just a string search shouldn’t make a difference.
AK: Can't get libaccessom2 to compile and link to the correct netCDF library. Ben Menadue tried and it worked OK for him. Problem with the FindNetCDF plugin for CMake. Not properly supported on NCI. Edited the CMake file to remove this; could then find netCDF, but it used different versions for include than linking. Should move to a newer version of netCDF. v4.7.1 has just been released. Have requested this be installed on NCI. NH: Does supported include the CMake infrastructure around the library? If getting FindNetCDF working was NCI's responsibility that would be great. Difficult getting system library stuff working properly with CMake. CMake isn't well supported in HPC environments. AK: Ben suggested adding logic to check and not use it on NCI. NH: Definitely upgrade, to 4.7 if they install it.
AH: Didn't Ben Menadue log in as AK and it ran OK? AK: No, he didn't do that as far as I know. AH: Definitely check there is nothing in .bashrc. Also worth checking if there is a csh login file that is sourced by the csh build scripts.

OpenMPI testing

RY: OpenMPI 2, 3, 4 and Intel 2019. Consistent results for all OpenMPI versions at 1, 0.25 and 0.1 degrees. Some differences with Intel 2017, not from the MPI library. Not sure if the difference is acceptable or not? Would like some help to check the differences.
RY: Just looking at access-om2.out differences. Maybe need to look at an output file like ocean.nc? RF: Need to compile with strict floating point precision to get repro results. MOM is pretty good. Don't know about CICE. Can't use standard compilation options; fp-precise at a minimum.
RY: If this difference is not acceptable, need to use flags to check the difference between 2017 and 2019? RF: Once you get a bit change, chaos, and you get divergence. RY: Intel 2017 still on the new system. AH: So not only the newest versions of modules on gadi? RY: 2017 will be there, but no system software built with it. AH: Done a lot of testing. Should be possible to just use 1 degree as a test to get 2017 and 2019 to agree. There are repro build targets in some of those build files. Could try and find them. RY: Yes please.
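The repro targets mentioned amount to strict floating-point options; a hedged Intel Fortran example:

    FFLAGS="-fp-model precise -fp-model source -fpe0"   # strict FP semantics, trap exceptions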
AK: Any difference in performance? RY: No big difference. NH: New machine? RY: No, old machine, with broadwell.
RY: NCI recently sent out a gadi update via blog and webpage. 48 cores/node. NH: Did we think it was 64 cores/node? AH: Still 150K cores in gadi, with 30K of Broadwell+Skylake. Maybe have to change some decompositions. RY: Not the same as any existing processors.
AH: Two week overlap with gadi, then short will be read only on gadi. RF: There was panic in ACCESS due to an email that said short would disappear in mid October. AH: Easy to misread those dates.

accessom2 release strategy

AK: Harmonising accessom2 configurations. Somewhat haphazard release strategy, and not tested. Maybe have a master branch that is known good, and a dev branch people can try if they want? Any thoughts?
NH: The good way is really time consuming and labor intensive. Would mean testing every new configuration. Not sure if we can do that. Tried to keep master of the parent repo only referencing master of all the control experiments. Not sure if necessary or desirable? Maybe makes more sense to develop freely on your own experiment and keep everything in control stable? Not sure. If all control experiments are stable and working, it can be a bit slow to update. Just update your experiment.
AK: Some people are cloning directly from experiment repos, some cloning all of access-om2. Would reduce confusion if control directories under accessom2 are kept up to date with the latest known good version. NH: Does make sense I guess. Shame for people to clone something that is broken which has already been fixed. There is some python code in the utils directory which can update everything. Builds everything at all resolutions, copies to public space, updates all exes in config.yaml and does something with input directories. AK: I ended up writing something like that myself.
AH: Should split out control dirs from the access-om2 repo. It is a support burden to keep them synched. Not all users need the entire repository, as they are using precompiled binaries. Tends to confuse people. NH: Did need a way for the config to reference source code and vice versa. AH: Required to “publish” code? Maybe worth looking into. NH: Ideally from the experiment directories you need to know what code you're using. Probably got that covered. In config.yaml we do reference the code, and it's in the executable as well. When you run the executable it prints out the hash from the source code. Enough to link them?
AH: I recall NH wanted to flip it around and have the source code part of the experiment. NH: Probably too confusing for users. AH: True, but a useful idea to help refine a goal and best way to achieve it.
AH: A dev branch is a good idea. Then you have the idea that this is the version that will replace the current master. Can then possibly entrain others into the testing. Users who want updates can test stuff, you can make a PR and detail testing that has been done.
NH: Good idea. Some documentation that says experiments have stable and dev. When people are aware and have a problem, wonder if they can go to dev, see if it fixes. AK: Bug fixes should go into master ASAP. Feature development is not so urgent. A bit gray, as sometimes people need a feature but they can work off dev. AH: Now have some process for this: hot fixes that go straight in. Other branches are dev/feature branches. Maybe always accumulate changes into dev. Any organisation helps.
NH: Re: Removing experiment repositories: namelists depend on source code. AK: Covered by executables defined in config.yaml. NH: Yes ok.

FAFMIP PR

AH: Did it work? It's got a lot of merges. RF: Just two lines. Did a merge and pushed it to my branches on GitHub. AH: I'll merge it in. Just wanted to check. Can always make a new master branch that tracks the origin, check that out and pull in code from other branches. RF: Have a lot of other branches. AH: Can get very confusing.

payu restart issue

AH: Issue has resurfaced. I commented on #193, but didn’t look into the source of the problem. Should look into it rather than talk about it here.

FMS subrepo

AH: Still not done the testing on this. Been sick. Will try and get back to it.

Tenth update

AK: Andy has done 50 years with RYF 90/91. Running stably. AH: What timestep? RF: Think he was using 600s. AK: 3 months / submit. Should ask for a longer wall time limit. RF: Depends on how queues will be on the new machine, what limits and what performance. AH: Talking about high temporal res output. AK: Putting out 3D daily prognostic fields. Want it for particle tracking. Including vertical velocity. Slowed it down a little bit. RF: More slowdown through ice. AK: No daily outputs from CICE.

CICE PIO

NH: Still in progress. AK: Also requires a newer version of netCDF? NH: Requires a specific version of netCDF. Needs a parallel version. Not a parallel build for every version. AK: Has parallel for 4.6.1. RF: Bug in the HDF5 library it is linked to. Documented in PIO. Probably a bug we're not going to trip: doing a collective write with some of the processors not taking part/writing no data. Fixed in the next version of HDF5, 1.10.4? AH: Not a netCDF version issue so much as the HDF library it links to. RF: Yes. AH: So should make sure we ask for a version of netCDF that doesn't have this bug? AK: Add to request.
RY: If want parallel version, use OpenMPI 3 or 4? AH: Good question! RY: All dependencies will be available and very easy to use. AH: This using spack? RY: Above spack and other stuff. Automatic builds with all possible combinations. AH: Using it for your builds? RY: We are requested to test and are now using. Difficult to create new versions currently. In transition difficult, but in new system should be fixed quite easily. AH: Should fix the various versions of OpenMPI with different compilers. RY: Yes. AH: Will have a compiler/OpenMPI toolchain? RY: Will automatically use correct MPI and compiler. AH: Any documentation? RY: Some preliminary, but not released. When gadi is up all this should be available.
AK: Should I ask for a specific version of MPI? RY: If you don't specify, it will be built with 3 or 4. Do you have a preference? AK: No, just want the version with the performance and stability we need. Do we need to use the same MPI version across all components? RY: Not necessarily. Good time to try OpenMPI 3. No performance benefit as the system hardware is still old hardware.

Technical Working Group Meeting, July 2019

Minutes

Date: 1tth July, 2019
Attendees:

  • Aidan Heerdegen (AH) CLEX ANU, Angus Gibson (AG) RSES ANU, Andrew Kiss (AK) COSIMA ANU
  • Russ Fiedler (RF) CSIRO Hobart
  • Rui Yang (RY) NCI
  • Peter Dobrohotoff (PD), CSIRO Aspendale
  • Marshall Ward (MW) GFDL
  • Nic Hannah (NH), Double Precision

Config checking

AH: Made a payu configuration checker. Includes safety checks for syncing scripts in BASH scripts. Interested in checking for bad namelist options. Russ, any specific bad ones?
RF: KPP kbl standard method should be false; the Red Sea fix is used in ACCESS-CM. For a CM model maybe warn; for an OM model it is just not allowed. AK: nprocs and ncpus driver issue?
NH: diag_step checks? Too frequent is OK for low res, bad for high. RF: For production runs don't want it? But then how do you diagnose problems? Best way to find how things are going wrong. Maurice's issue was trivial to spot. AH: Definitely say we don't want debug_this_module turned on. RF: diag_table debug turned on should be flagged; it should be turned off, as it creates huge numbers of messages. AK: Setting up new updated configs. Ten configs. Making them more homogenised. Fixing all these things as they go. AH: These things will get changed by mistake. Don't have enough people to keep checking things. Doesn't scale. This will allow new users to submit a config that at least passes these checks; it also gives others confidence to change a config knowing they have produced something that meets some minimum standard, appropriate for public facing production.

Tenth Run

AK: Andy Hogg is running ACCESS-OM2-01 with JRA55-do RYF90/91, seems to have smaller biases than the previous repeat year (84/85). Currently 13 years. When it does die it is a CICE CFL problem. Sometimes the same date; in subsequent years it didn't occur. Checked dates. A storm goes near the tripole. Not currently messing with forcing winds. Did this with 84/85 but this doesn't seem as bad, so haven't done it so far. One drawback: doesn't run at dt=600s, and takes 2.55 hours to do 3 months. With dt=600s could do 6 month submits, which would mean less queue wait. Should be straightforward to fix the winds to enable this. AH: Not a priority considering the extra SU cost. AK: About 10%. Cost of losing a 6 month submit halfway through > 10%. AH: Shame it is the tail wagging the dog. AK: Could ask for a 6 hour limit from NCI? AH: Worth trying. Done it before, and have seen others with increased limits. Prefer to do it, but time limited, and just for one project. AK: Hopefully limits will change with the new machine. Currently 65KSU/3mths. RF: MOM or CICE bound? AK: Fraction of time MOM is waiting is 2-3%. RF: Not greatly MOM bound. Throw a few more processors at MOM to get it to run in less than 2.5 hours?
AK: 40 years. IAF not split, just start from climatology. AH: When will IAF start? AK: No plans, not simultaneously run RYF and IAF.

NCI update

AH: Attended an NCI scheme manager meeting. Mostly about new storage scheme for short term storage. Push came from CSIRO to change to scratch model, but some others in CSIRO not happy. PD: Wasn’t aware that was being driven from this end. Maybe further up the food chain.
AH: Change to time-limited scratch, or a tidal model deleting oldest data first. Maybe a split scheme with old style short on one disk, time limited on another, but not a lot of appetite for that.
RY: First stage November. Our group is looking for HPC applications for the new machine. Already have ACCESS-OM from Andy Hogg to look into software state. On the new machine some old libraries will not be maintained.

OpenMPI3 and ACCESS-OM2

RY: Recently used ACCESS-OM2 with OpenMPI 3.0. Seems to hang? Know this issue? Or avoid 3.0? Some work required to run on the new machine. Will spend some time on this work.
AH: Marshall, any ideas? MW: Have tried 3.0.0, 3.0.1, maybe 3.1.1. Earlier ones didn't work, then got fixed. Newest 3.x should work. RY: Tried 3.1.3, MOM keeps hanging until the end of the job. Should finish at 40 min. Keeps hanging. 1.10.2 works, 3.1.3 hangs. MW: Sure I got it running. Will make sure the configs are in the repo. RY: Catch up with you personally? MW: Where it hung should tell you something. RY: Talk later offline.
AH: Definitely need it working on the new machine. MW: No work needed to be done, it just worked. AH: What changes would you have made? MW: Just versions, environment file and flags. Maybe using some of the alltoallw changes, but I don’t think that was a deal-breaker.
AH: What is the minimum version of OpenMPI supported on the new machine? RY: Under discussion. System guys will decide. Have to prepare for any. Not sure OpenMPI 1.10 will still be supported. Don't know. AH: Likely to be OpenMPI 3.x+? RY: New machine with new architecture. Performance enhancements with new architecture. MW: What arch? RY: Now have Skylake. Newer than Skylake. MW: Intel architecture, not Ryzen. Can't benefit from AVX512. Has FMA, which we already have. AH: AVX512 because can't vectorise enough? MW: Currently vectorising, but bandwidth limited. Ryzen has better bandwidth. RY: Not announced. No idea. AH: At the scheme managers meeting it was an Intel chip. Told it was November when they commission new nodes, and take equivalent raijin nodes offline. Iron out the bugs, and early next year will turn off the rest of raijin and turn on the rest of the new machine; at that point it will be larger than raijin now, but not a huge increase in compute. Thanks for bringing that up Rui, as we definitely need to keep an eye on this for the new machine.
MW: Apparently used 3.0.3. Maybe a reference point to start with. RY: Start with 3.0.3? MW: Whole space is volatile, some 3.0.* series work some don’t. But start with 3.0.3 and Intel19.
AH: Would be nice to have a spack-like build tool so we can say for certain what was run. MW: payu build! AH: spack was written by a smart guy from TACC, and lots of people use it, and they still have a lot of issues. Not an easy problem to solve. MW: Dale was keen on it. AH: When we met with Dale he was thinking to have spack as a tool preconfigured with compiler toolchains that we can build our tools from. RY: Dale is very busy getting ready for the new machine.

Splitting off FMS

AH: Been working on Cmake to compile FMS separately from MOM. Been using the FMS fork in mom-ocean repo with your alltoallw changes. MW: Also a branch on the GFDL repo with those changes.
AH: How to organise the FMS fork? Have a branch that tracks GFDL and master contains our local changes? Could have a branch called gfdlmaster, could have our master branch exactly track the GFDL FMS. Any opinions on how to organise this? MW: Don’t want to use GFDL FMS? AH: I want an easy way to update FMS without touching MOM source tree. MW: Want to get FMS out of MOM? AH: Yes. MW: And want to know how to refer to FMS you want to use? AH: FMS we want to use is a fork on mom-ocean. Gives flexibility to add changes when we need to.
MW: Best to have your own FMS fork. GFDL don't want to support anything beyond GFDL use, including MOM5. Don't really want to get involved in supporting other projects. Will be receptive. No harm in using the FMS repo straight, but if doing anything with FMS you are better off maintaining your own version and updating as you see fit. Don't see compatibility with older models as a priority. Planning a big IO rewrite. Wouldn't be surprised if it starts breaking and is not salvageable.
AH: alltoallw we definitely want on our architecture as we've had issues in the past? MW: A lot of work, return not what I'd hoped. Latest MPI versions have a bigger impact. There are cases with speed up, but it is such an infrequent operation it is not such a big deal. AH: Stopped initialisation hangs? MW: Yes, some rare scenarios where they did alltoall with point-to-points that broke a lot. In OpenMPI 2.0/3.0 and later they changed something and the scenario no longer happened. Segfaulted before, now properly checking. Only necessary for 1.10. It is better, as collectives are generally more responsible. May become necessary, assuming 3.0.3 works.
AH: If want alltoallw, would keep a branch with those changes and rebase on to gfdl master. This would be a well documented branch, or branches, and a well documented way of applying those changes when an update is required.
MW: Can CMake build libfms and link to MOM when you build it? No submodules, rely on CMake. Does that work? AH: FMS is not suitable to be a loadable module. Get OpenMPI conflicts; best to build at the same time with the same compiler toolchain. There is a new CMake tool called FetchContent that can grab a repository, and it behaves like it is physically in the source tree. Works well, but not great versioning. MW: Isn't Nic already doing something like this for ACCESS-OM2 to pull in specific versions of json-fortran? AH: Yes, you can specify a library git hash. The only thing stopping it from working is relocating the versioning string stuff Nic did, as it is currently sitting in the FMS directory, and that is going to disappear. Needs its own directory, maybe ocean_shared? RF: ocean_shared is used for other tracers. MW: Should not use that name. AH: OK, will make a new directory called version. Can recreate the sed script functionality that is currently in the build script in CMake using template files. Quite a clean solution. I have a cmake branch on the MOM5 repo and an FMS fork on mom-ocean; will get them compiling and working properly together. There is a way forward.
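A minimal sketch of the FetchContent approach described, assuming the mom-ocean FMS fork; since FMS has no build system of its own, the consuming project defines the library target (target name and source glob are illustrative):

    include(FetchContent)
    FetchContent_Declare(fms
      GIT_REPOSITORY https://github.com/mom-ocean/FMS.git
      GIT_TAG        master)           # pin to a specific commit hash in practice
    FetchContent_Populate(fms)         # fetch only; no add_subdirectory
    file(GLOB_RECURSE FMS_SOURCES ${fms_SOURCE_DIR}/*.F90 ${fms_SOURCE_DIR}/*.c)
    add_library(fms ${FMS_SOURCES})    # consuming project supplies the build rules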
MW: Alistair is pretty interested; might be a template for MOM6. AH: Angus already did this for MOM6? MW: Angus, is what you did still viable? AG: Haven't tried recently, don't know why it wouldn't work. Replicating the mkmf process in CMake. MW: Automake is not good and I won't touch it. AH: Surprised there was no way to build FMS from the FMS repo. Relies on being imported into another project that knows how to build it. Not sure it is great that a project can't build itself. MW: CMake support not widespread enough? Not available everywhere? AG: Updates frequently, can have features that break old versions. Used in a lot of projects. Surprised if it went away. AH: CMake can be brilliant, but also terrible, but better than mkmf. MW: mkmf is doing two jobs, importing stuff and working out dependencies. Does work well for the latter job. Set a high bar. AH: Haven't done proper comparisons, but CMake seems to be better for dependencies. Can do parallel builds with CMake that you can't with mkmf. MW: mkmf just generates a makefile, which is already parallel. AH: So does CMake. AG: It doesn't seem like a good makefile; don't know if the dependency tree is deficient. Rebuilds too much even after touching a single file. MW: If CMake intelligently supports mod files then it is fantastic. AG: Has native fortran support. AH: From a speed point of view, CMake is better. Generated correct dependencies so that parallel compilation worked. Couldn't do that with mkmf. Also had compilation cascade issues. MW: I build 5 exes at once, so it always looks fast to me. AG: Same makefile generation as mkmf. MW: A more readable makefile than automake? AG: Yes. More readable than automake. AH: When the magic works CMake is great; when it doesn't it is a pain, but the magic is worth it. Also supports multiple architectures.

Codebase

RF: Aidan can you approve the change to FAFMIP? Starting to get conflicts. Ryan's changes put it all in conflict. Riccardo has disappeared, but Fabio's changes keep it all bit-for-bit the same. AH: Current conflict in ocean_frazil. RF: Because you put Ryan's changes in. AH: Sorry. Could rebase on Ryan's changes. Maybe pull in Ryan's changes. AG: Could check out the branch, make changes and push to the branch. AH: I'll try doing it directly on GitHub, get back to you about it. RF: Get that done and I can finish up some of the WOMBAT stuff. With the ESM model I also have to make some changes to CICE. A couple of design things with the number of fields that are passed, hard-wired at the moment. A couple of issues there. Have a chat at a later stage. Rather than hard-wiring fields: flexibility, test error codes, make compatible with namcouple, so it can be done on the fly. Also feeds into the BGC Hakase is putting into CICE. Need to pass BGC fields between the two modules. Rather than having a plethora of drivers, or CPP directives, there are better ways to do it.
AH: Made that change on GitHub and merged it. Once checks are finished will accept the PR.
MW: Been working on a test with MOM6 where we turn on every diagnostic, fantastic for finding bugs. Found nearly 2 dozen bugs. Don't actually register the diagnostics with FMS, just spoof the whole thing at the diag_mediator level, which is a wrapper around the diag manager. Interesting if this could be translated to MOM5. Don't know a natural way to do it, but might be worth some thought at some point. RF: Code you're putting into MOM6, not the diagnostic manager? MW: Yes. FMS moves too slowly, very conservative, they don't have a robust test framework so are worried about putting in changes. There are some hints that maybe this code could be shared with MOM5. Lots more in there than just this. Just raising it as food for thought. AK: Put it in as an issue? MW: Opposed to those sorts of issues, but you can if you want.
AK: Want to set up new vanilla reference versions of the 1 and 0.25 deg ACCESS-OM2 models. The forcing on those uses 2nd order conservative interpolation. There are overshoots for some fields which have to be positive definite. Would like 1st order conservative for some fields. Do they exist? NH: They should be there, we were using 1st order for a long time, and they should be in the input directory. Not sure how well they are named. Should say in the filename; have a look and if you can't find them we can recreate them.

Technical Working Group Meeting, June 2019

Minutes

Date: 19th June, 2019
Attendees:

  • Aidan Heerdegen (AH) CLEX, Andrew Kiss (AK) COSIMA, ANU
  • Russ Fiedler (RF), Matt Chamberlain (MC) CSIRO Hobart
  • Rui Yang (RY) NCI
  • Peter Dobrohotoff (PD), CSIRO Aspendale
  • James Monroe (JM) Memorial University

FAFMIP

RF: FAFMIP into MOM. Riccardo will do his tests. Don't expect issues. AH: Did Fabio notice problems? RF: Started because ice formation used by ACCESS wasn't coded up. Did that and then noticed the way things were being done didn't match what was in the literature. Mismatch between what Griffies did and Riccardo wrote. Now at a stage where that is consistent. Talking with Trevor McDougall about the equation of state. What is coded in MOM is not totally consistent with what the protocol says should be done. All groups do it a little differently. How badly can we violate the freezing condition and still get reasonable results? If you do this incorrectly you can fall below freezing and not form frazil. Behaves ok down to -3 degrees. Hopefully won't get that far. There are other approaches, have to have a think about that. Will stick with what is done currently. AH: Modifications? RF: Look at other mods to see if we can do it more consistently. AK: More consistently without an additional tracer? RF: Still need an additional tracer, but more consistent: temp and redistributed heat tracers see the same values of frazil. The way Griffies et al constructed it gets slightly different values. Not completely clean. Can't get runaway with one of the tracers. Safe but not the right way. Other ways: fix the problem with implicit diffusion. Code as it stands is at least consistent with what has been written up. MC: None of these full TEOS-10? RF: Yes TEOS-10. Also had to fix the conversions to potential temperature. MC: Dealing with salinity etc? RF: Simplified version. Need these changes to do FAFMIP correctly.
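Schematically, the freezing condition at issue looks something like the fragment below (illustrative only, with a simple linearised freezing point; the real ocean_frazil/TEOS-10 code is more involved):

    ! form frazil rather than letting water stay supercooled
    tfreeze = -0.054 * salinity              ! freezing point (deg C), linear in salinity
    if (temp < tfreeze) then
       frazil_heat = (tfreeze - temp) * rho_cp * dz   ! J/m^2 released by freezing
       temp        = tfreeze                          ! reset water to the freezing point
    end if

The violations discussed above amount to how far temp is allowed to fall below tfreeze before frazil forms, and which tracers (prognostic temp versus redistributed heat) see the resulting frazil heating.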
AH: Any other ramifications? RF: None. All changes only take place in this style of experiment. Everything separate from other experiments. Only issue was prognostic versus potential temperature.
AH: Merged independent of the WOMBAT stuff? RF: No. WOMBAT stuff relies on changes in ocean_sbc. Have to rebase. Get FAFMIP in first.

WOMBAT

RF: Haven't had a chance to sit with Matear and test it properly. Just a few changes needed from the current code. Hopefully pin down Matear. AH: Hakase with WOMBAT in tenth? RF: Yes. Hakase will test. Currently inputting winds via a file rather than through the coupler. MC: Richard Matear is working directly with Hakase.
RF: Few lines in the coupler that I have to add and a namelist item. In namcouple file need to pass 10m winds. It is in CM2 code, but not in OM2. AH: Can Hakase work with ice BGC stuff in his current setup? Is this slowing him down? RF: No idea.
AH: Few weeks? RF: Have to rebase WOMBAT stuff.

CICE Mushy ice

RF: Code suddenly got changed and altered and no-one knew why? AH: Nic has been keeping our codebase up to date with CICE6. RF: He made other changes that caused problems. That code also moved to the CICE5 svn repository. AH: Backporting to CICE5? A lot of assumed logic in those code changes. RF: Have to be familiar with how the POP code makes salinity changes. Doesn't go through the surface like MOM. The clause under `ktherm=2` that says "this is done elsewhere" is not true for all models. Nowhere in the code are those salt fluxes being calculated. AK: Proof is in the runs, results show drift. RF: Looking at it, needs that if clause removed for coupling to MOM. AH: We're not part of any CICE6 test suite so they can't spot errors. AK: Elizabeth Hunke said the consortium was open, anyone can join. Have a comprehensive testing regimen. Get more involved so they test our use cases? AH: Definitely need more oversight on code changes into CICE. JM: Any testing when code changes are added to CICE5? AH: Not currently, no. Nic has some scheduled Jenkins tests but not sure of the status of those.
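For reference, the thermodynamics scheme is selected in the CICE namelist; the fragment below is indicative only (option names as in CICE5's ice_in, worth double-checking):

    &thermo_nml
      ktherm = 2   ! 1 = Bitz-Lipscomb (BL99), 2 = mushy-layer thermodynamics
    /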

AK: Hit problem as using mushy ice. Wouldn’t see it otherwise. Using to overcome bug in other scheme, but don’t really want to use it. Slow, don’t need. AH: Can we fix it? AK: Iterative solver fails in high res case. Happens in fresh water regions with low ice concentration. Had intended to dig down more. AH: Would struggle to find this bug anyway as we wouldn’t routinely test tenth.
AH: Fixed now. AK: Not sure about any other problems with changing the parameter setting. Took a lot of digging. AH: Don't want science changes without reason.

Ob runoff

AK: Not sure how important this is. Shows how the runoff code can fail. Cut away a lot of the Ob estuary due to small grid cells causing instabilities. Runoff is done on the fly: find all runoff that is on land, move it to the nearest coastline, then check for high runoff and spread it out if over a threshold. Some runoff used to go to an embayment to the west; the changes to the Ob mean that is now the nearest bit of ocean. See GitHub issue.
Not sure how important it is. Similar issue with spreading out. Uses kdtree to find neighbouring points. Doesn't account for whether there is land between those points. JM: Can tunnel. AK: Not sure what could be done to make land impassable.
JM: Resolution on that discussion?
AK: Not sure it is high enough priority to spend time on. AH: Use connectivity? Like what's used to find isolated water bodies. Move land runoff to the nearest connected wet cell (see the sketch below). AK: Depends on runoff being ocean in the first place? AH: Yes. RF: If it can get to the right place, just smear it out using neighbouring ocean points. AH: Is all JRA55 runoff currently on a wet cell on the JRA55 grid? AK: Don't know if it is a wet cell, it is on the coast. AH: Need to look into that.
AH: How important? AK: Not paying close attention to the Arctic. Correct volume of fresh water, just in a slightly wrong location. Already taking severe liberties at that location. Points to a failure mode of this method: it can cross land.
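A sketch of the connectivity suggestion above (all names hypothetical): a breadth-first search outward over the wet mask collects the nearest connected ocean cells, so spreading can never jump across land the way a bare kdtree lookup can.

    ! Collect up to nwant wet cells connected to seed cell (i0,j0) by ocean.
    subroutine connected_spread_cells(wet, i0, j0, nwant, ilist, jlist, nfound)
      logical, intent(in)  :: wet(:,:)               ! .true. on ocean cells
      integer, intent(in)  :: i0, j0, nwant          ! seed cell and cells wanted
      integer, intent(out) :: ilist(nwant), jlist(nwant), nfound
      integer :: nx, ny, head, tail, i, j, ii, jj, d
      integer, allocatable :: qi(:), qj(:)
      logical, allocatable :: seen(:,:)
      integer, parameter :: di(4) = (/1, -1, 0, 0/), dj(4) = (/0, 0, 1, -1/)

      nx = size(wet,1); ny = size(wet,2)
      allocate(qi(nx*ny), qj(nx*ny), seen(nx,ny))
      seen = .false.; nfound = 0
      head = 1; tail = 1
      qi(1) = i0; qj(1) = j0; seen(i0,j0) = .true.
      do while (head <= tail .and. nfound < nwant)
        i = qi(head); j = qj(head); head = head + 1
        nfound = nfound + 1
        ilist(nfound) = i; jlist(nfound) = j
        do d = 1, 4
          ii = i + di(d); jj = j + dj(d)
          if (ii < 1 .or. ii > nx .or. jj < 1 .or. jj > ny) cycle
          if (seen(ii,jj) .or. .not. wet(ii,jj)) cycle   ! never step onto land
          seen(ii,jj) = .true.
          tail = tail + 1; qi(tail) = ii; qj(tail) = jj
        end do
      end do
    end subroutine connected_spread_cells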

Splitting FMS and other components

AH: You want to talk about other components as well Russ?
RF: If we start doing things like that to MOM repo. Will that affect anyone else who already has stuff from there? Cause problems if they want to update if we move to different setup?
AK: Proposal to put the FMS codebase into a different repo? AH: Yes. Can't compile without pulling from another repo. RF: Not sure how it would all work. Use submodules? JM: In a submodule right now? RF: Not for MOM5.
AH: I proposed to use CMake to create an alternate way of compiling to pull in those libraries from external repos. Could keep the FMS directory in the repo, but at some point the MOM5 code may use features in an updated FMS that are incompatible. However, they can always pull from a previous commit. Could tag a commit as the last one that had FMS included. Marshall did update FMS in the past. Desirable to go this way, to have a tighter coupling with changes in FMS, put in pull requests to main repo for features we want.
AH: Got CMake working for half the builds. Super simple to swap out external library, already compile it separately. Will finish this so people can test as proof of concept.

Langmuir KPP

AK: Progress with ACCESS-CM2: turned on the Langmuir parameterisation for KPP and improved Antarctic Intermediate Water. Should we turn it on for OM2? RF: Our coupled runs got improvement in the Southern Ocean. Getting shallow summer mixed layers; helped deepen them a little bit. Different types of simulations, but works in the right direction.
RF: Not sure if shallow mixed layers in the Southern Ocean over summer are an issue? If shallow, could be good. AH: Turn on/off or parameter? RF: Just turn on/off. Pretty sure I changed ACCESS-OM2 to get wind coming through. Might need a change in namcouple. AK: Need wind velocity as well as stress through the coupler? RF: Two ways. Both have been enabled. Standard is to pass 10m winds as well as stresses. Other way, if you don't pass winds, a flag in the KPP scheme can derive 10m winds. MOM6 does it that way: pass through stress and calculate 10m winds. AH: Would still work without passing winds? RF: If forcing the model with stresses and you don't have winds, this is an alternate way. Not being used currently as most models can pass wind.
AK: Might be a good time to compare OM2 and CM2. Perhaps there are beneficial changes from one or the other? Might just be model specific changes? AH: How would this happen? AK: Maybe a meeting. Sent an email to Dave and Peter. Look at the namelists and input files.

Other updates

PD: Not up to much. Interested in getting models aligned and best outcomes for both. Maybe have a small VC and discuss. A fairly complicated set of outputs, suites etc. Can be difficult navigating this structure. Definitely encourage talking about it.
AH: What is the status of your runs? PD: PI control is up to year 950. A lot of that is spinup. Historical forked around yr 900, and a 4x historical. This is CM2. No carbon cycle. Two submissions: ACCESS-ESM-1.5, with old atmosphere, CICE and updated MOM; and ACCESS-CM2, with a much newer atmosphere and full aerosol scheme, 5-6x slower, but no carbon cycle. ESM is a lot further along. CM2 is not as advanced. Took some time to reach equilibrium. AH: Happy with results? PD: Yeah, seems pretty good. Climate sensitivity seems about right. Sensitivity is a lot higher for CMIP6 than CMIP5.
JM: Will attend meetings going forward. To complement some stuff Angus is doing on the cookbook.

Technical Working Group Meeting, May 2019

Minutes

Date: 15th May, 2019
Attendees:

  • Aidan Heerdegen (AH) CLEX, Andrew Kiss (AK) COSIMA, ANU
  • Marshall Ward (MW) GFDL
  • Russ Fiedler (RF), Matt Chamberlain (MC) CSIRO Hobart
  • Nic Hannah (NH) Double Precision
  • Rui Yang (RY) NCI

Agenda

– Follow up on migrating FMS to an external library
– WOMBAT in harmonised MOM update and testing
– Tenth load balancing
– CICE IO bound in high core counts

CICE IO bound in high core counts

AK: Runs with the new CICE executables NH compiled a while ago. Performance slowdown with compression level 5. Tested with level 1: files a few % larger in size, IO time 2500s -> 1800s. 1300s without compression. Compresses well at a low level because there is a lot of missing data with ice.
NH: Went from netCDF3 to netCDF4. Might be worth trying no compression. AK: Have a run with compression level zero. RF: Does impact walltime. MOM is waiting. Usually have CICE waiting on MOM, but when outputting it is the other way. MW: Compressing MOM before, now both? NH: Compression and daily output an issue. AH: What is the chunking? RF: Uses default. AH: Some libraries choose weird chunk sizes for the time dimension? RF: No funny business, all sensible. RF: All these point-to-point gathers, maybe not efficient. MW: Do you know where the time is taken? RF: Slowdown, but not sure of the split between gather and write. NH: Breaking new ground: daily output, running at scale, and an unusual tile distribution, which increases the comms to gather. So many different new things. MW: On sectrobin still? AH: 10% of total runtime.
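For reference, this is roughly where the deflate level and chunk sizes enter via the netCDF-Fortran API (illustrative fragment, not the actual CICE IO code; assumes ncid, dimids, nx, ny are already set up):

    status = nf90_def_var(ncid, 'aice', NF90_FLOAT, dimids, varid)
    ! args: shuffle=1, deflate=1, deflate_level=1; per the numbers above,
    ! level 1 is much faster than 5 and, for sparse ice fields, nearly as small
    status = nf90_def_var_deflate(ncid, varid, 1, 1, 1)
    ! set chunk sizes explicitly (one horizontal slab per chunk) rather than
    ! trusting library defaults
    status = nf90_def_var_chunking(ncid, varid, NF90_CHUNKED, (/nx, ny, 1/))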
NH: With MOM we do all this with post-processing to keep the performance of the model as good as possible. Anything slowing the model as a whole we should post-process. Didn't think about that option when the change went in. If it is slowing things down as a whole, back out the change and work out a post-processing step. AK: Half the data in the daily files is static. Totally unnecessary. Made an issue to maybe output static data to a file once. RF: Aggregate daily files to monthly? AK: Slows down output from the model. Less compressible? RF: Highly correlated, will compress easily. AH: How much extra wait time? RF: The whole write time. AK: 25 or 18% of MOM runtime. AH: Monthly output issue disappears? RF: Yes. RY: CICE writes to a single file? RF: Yes, through one processor. RY: Can we do it like MOM, each processor writing data to its own file? NH: Yes, good idea, but more complicated than MOM. CICE tiles are not located close to each other in space. RF: Could use the PIO interface. Not compatible with centrally installed netCDF libraries. Bugs in version of HDF. Need OpenMPI > 1.10.4 and netCDF > 4.6.1. MW: PIO a good candidate, RY can help. CICE developers looking into this? Stayed in touch with them? NH: Look at CICE6 GitHub. RF: Looked, but no active development on IO in any fundamental way.
NH: If we did decide to go that way, good opportunity to feed that back to CICE community.
MW: NCAR as a developer of PIO, keen to get it into other models. If CICE is on their radar might get some feedback there. RY: MOM has IO layer a bit like PIO. MW: Not a good idea to use PIO in MOM6.
RY: Tried PIO in MOM and found it was not a good candidate. MW: Yeah, MOM6 was already doing something like that.
RY: Parallel compression will be supported in future in netCDF.
RY: Been experimenting with my own version of library and got some positive results.
End result: take compression out, take out static fields. Post-processing. Is anyone using daily fields? RF: We're interested in daily ice fields, for data assimilation. MW: Shorter runs though? RF: 20 years.
NH: Instead of writing individual daily files, should write to a single file; static fields won't be replicated, and maybe benefit from some netCDF buffering. AH: Big code change? NH: Not sure. AK: Has a file naming convention for different frequencies. Frequency is part of the filename. NH: Saying it could already output daily into monthly files? AK: No, the filename encodes time and frequency. Doesn't seem to write repeatedly to any of its output files. AH: Define an unlimited dimension.
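The unlimited-dimension approach looks roughly like this (sketch only; names illustrative):

    ! one monthly file: static fields written once at create time, then
    ! each day appended as a new record along the unlimited time dimension
    status = nf90_def_dim(ncid, 'time', NF90_UNLIMITED, time_dimid)
    ! ... define variables, write static fields once ...
    ! day n is appended as record n of the same file:
    status = nf90_put_var(ncid, varid, field, start=(/1, 1, n/), count=(/nx, ny, 1/))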
NH: Make a GitHub issue. If high priority could get some time. MW: Make the issue in the CICE repo, inform them what we’re doing. They mentioned an NCAR community board.
AH: Make a namelist option and recompile? Compression level as option?

Tenth load balancing

AK: RF suggested a smaller core count of 799. Doesn't change wall time, which is a win. How low can we go? RF: Worked out a few more configs. With a slight change of tile size, 720 would be ok: 36×36 or 40×30. Running some quick tests with a tool under /short/v45/masking. Runs and outputs masks and where tiles get located, also the number of processors/blocks you need. AH: Put the code on COSIMA GitHub? RF: Just a quick little thing. AH: Yes, but useful.
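As a sanity check on those numbers (assuming the tenth-degree grid is 3600×2700 points): 36×36 blocks give 100×75 = 7500 blocks and 40×30 blocks give 90×90 = 8100, so both tile the grid evenly, and spread over ~720 CICE PEs that is roughly 10-11 blocks per PE.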
AH: Down from 1380. Big win. Total core count? AK: Not sure. RF: Total just over 5000. AH: Still running on normalbw? AK: Yes. AH: Wait times on normal are crazy. RF: Look at skylake? Usually empty. RY: Yes, new nodes, not a large total core count. AK: Get 6 months/submit without daily outputs. Daily outputs put it over by 30-45 mins with ice. dt=600s.
NH: If there is no-one else to fix it, assign the issue to NH.

WOMBAT

RF: Got Matear up to speed. Ran a few tests. One or two bugs yet to be fixed. A couple of fields weren't coming through from OASIS properly. Was the ice field, wasn't coming through correctly. Got it going with external fields forcing it. Figured out the changes to get it running properly in full ACCESS mode. Running some test cases after bugs fixed. MC: Now running with calculated gas exchange coefficients? RF: That was the way it was originally written, the way fields were ingested into MOM. MC: Using the same wind field in BGC and wind mixing? RF: Yes, all together. MC: Level of the wind? ACCESS-ESM was getting the lowest atmospheric level wind. MC: CICE will send a 10m wind through OASIS? RF: Not the FMS coupler, this is just the OASIS 10m wind. MC: ACCESS-ESM case?
AH: Hakase could be used as a guinea pig. Any of these changes affect ACCESS-CM2? RF: Shouldn’t. AH: Do we need to do any bit repro tests? RF: Shouldn’t change anything.

Migrating FMS to an external library

AH: I put my hand up to do the change and test.
MW: FMS updated to Xanadu a couple of weeks ago. AH: So a good time to try it out. MW: Already tried it, put some MOM patches in to fix some issues. AH: On the GFDL FMS repo? MW: They have opted not to take the parallel netCDF (MPI-IO) patch RY and I worked on. Have set up a branch with parallel IO, and Xanadu has been merged into that branch. May want to use the branch with parallel netCDF extensions. Ongoing conversation about this. They may merge it in. Can use what you want. Your call as to what to use.
RF: Any whitespace issues? MW: FMS and MOM6 live on different planets. They don’t interact much. Don’t collaborate with FMS guys.
MW: Alistair getting miffed at the red buttons on the Jenkins server. He/I will look at some GFDL-independent solution. Happy for NH to be involved as much or as little as he wants. NH: They should be more blue than red. MW: Happened in March due to checksumming? NH: Bitrot, Jenkins is fragile. Scott often fixes it. Good idea, happy to help in any way. May be easier to set up on raijin. Does one qsub and runs them all under one submission. MW: slurm is sort of designed to do that. NH: slurm is awesome. MW: slurm is better. NH: Like it a lot more. MW: Good for running multiple jobs per submission. Blurs the line between MPI and scheduler. Some sort of meta-scheduling. Places jobs on ranks within the request. AH: More flexibility.

Actions

  • Update MOM build to use external FMS library (CMake) – AH
  • Finish WOMBAT integration – RF
  • Make CICE compression issues on GitHub – AK

Technical Working Group Meeting, April 2019

Minutes

Date: 10th April, 2019
Attendees:

  • Aidan Heerdegen (AH) CLEX, Andrew Kiss (AK) COSIMA, ANU
  • Marshall Ward (MW) GFDL
  • Russ Fiedler (RF), Matt Chamberlain (MC) CSIRO Hobart
  • Nic Hannah (NH) Double Precision

Updates

MW: Discovered the Travis test GFDL uses for FMS has been failing for six months. MW's fault. Introduced a new MPI function. The function doesn't exist in OpenMPI 1.6, which is what Travis uses. Doesn't show up in MOM5 as the changes are not in there. Solution was to switch to MPICH. Bleeding-edge Travis only uses OpenMPI 1.10.

FMS in MOM5

AH: Link FMS rather than have it in the repo. NH: I agree. MW: Still think subtree best solution. Now FMS has a dedicated automake build, can formally install it as a module? Had a long chat about this with Alistair. Not hot on submodule/subtree. NH: Just have it in CMake or a script. AH: Makes sense.
RF: One of my jobs with decadal project is to have MOM5 linked with AM4. Uses a more recent version of FMS. Will be useful for RF. Have to get MOM5 talking with FMS used in AM4. MW: Have run MOM5 with latest FMS. RF: Just making sure no surprises, changes of interfaces. Not just FMS, also other bits and pieces. AH: Auto testing with multiple FMS.
MW: If we go the path of building independent libraries, not sure how the C world tracks this? ABI changes? How do you manage binary compatibilities? NH: Did that with OASIS, and not sure it was worthwhile. MW: C programs don't seem to have these issues. NH: Using precompiled libs is necessary in linux-land; for us there is no reason not to compile FMS when compiling MOM. MW: Not keeping a public library? NH: More complexity than necessary. We're just talking about splitting source code out into a separate repo. Good idea. MW: MOM6 has an FMS repo, and a macro repo above that which builds everything. Not sure we want to go that way. NH: ACCESS-OM2 works that way too, but experiment repos are separate. MW: Maybe submodules/subtrees aren't so bad. NH: The MOM5 repo can have a build script that references a build script for FMS. MW: Doesn't CMake have some functionality to check it out for you?
AH: Finish off CMake build scripts and add in FMS stuff.
MW: What they do with MOM6 is having issues. Will bring up with them. Maybe some convergence on library dependencies. AH: Don’t favour central lib install with MPI dependencies.

WOMBAT in MOM5

RF: Not much to report. Make sure WOMBAT can be called in MOM-SIS. Only outstanding issue. Just changing a few if statements. MC: Comfortable that it will run in ESM framework? RF: Not sure who is going to test? MC: OM2? RF: Should run in OM2.
MC: Richard Matear went to visit AH. I haven’t run anything yet. With experiments running under payu ready to go. OM2 test with WOMBAT.
AH: Is there a PR for these code changes? Make a PR. RF: Maybe said to do that. Split up testing to avoid duplication.
MC: Hakase wants to run this too I believe?

Tenth Model

AK: Set up RYF for Spence. Run 20 years. Looking at it as a test bed for improved config. Improved bathymetry from RF. Conservative temp. Running at half the cost of the previous config: 10Mh/yr, 60-65 KSU. Speed up from a longer ocean timestep, and ice is now 2 time steps per ocean timestep, compared to 3. Due to removal of fine cells in the bathymetry at the tripole. Wanted to use non-mushy ice, but low ice concentration in the Baltic fails to converge the thermodynamic temp profile. Should converge in a few steps, but limit at 100 and still doesn't converge. Paul is using mushy. Had a run with non-mushy crash up top. Spinup7 is mushy, spinup8 is non-mushy. 10-15% extra cost for mushy ice. Not sure if we're CICE or MOM bound. Other resolutions are not using mushy. Want to set up an IAF tenth run starting in 1958. Can afford it with the cheaper model.
AK: TEOS-10, not sure if we want to use. Need absolute salinity and cons temp.
AK: Noticed gyres are much too weak in all resolutions. Looks like we have to be careful with JRA55. Did a test with 0.25 with absolute wind rather than relative. No change. Florida current 65%, EAC about 70%. Gulf Stream is not separating properly. Mean position ok, but too variable. Maybe insufficient momentum. Doesn't go around the Grand Banks properly. Causes SST biases. Not sure how much to fix before IAF.
NH: All resolutions? AK: All resolutions are too weak. Gulf Stream separation is ok in tenth on average, but too much variance. Mean position in 0.25 is really bad. Biases around the Grand Banks similar in all resolutions. NH: Improving separation improves biases? AK: Maybe. SSH is localised in the model, but stretched out in obs.
NH: Is this specific to JRA55? Does it happen with CORE forcing as well? AK: Don't know. Griffies said others find gyres a bit weak with JRA55. It uses scatterometer winds, which are relative to an eddying ocean, with eddies not in the same locations as in a model. The JRA55 paper suggests adding the climatological mean current to the wind to force the ocean. AH: Should that be in the product? AK: Griffies says people aren't too keen on Tsujino's suggestion. AH: Diagnose wind stress from the 0.25 test? AK: Yes. 10-20% change in stress in western boundary currents and the Southern Ocean where there are large mean currents. Stress changes are in quite small areas, not a big effect on gyres.
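Schematically, the distinction being tested (standard bulk formula; u_10 is the 10m wind, u_o the surface current):

    absolute winds:  tau = rho_a * C_d * |u_10| * u_10
    relative winds:  tau = rho_a * C_d * |u_10 - u_o| * (u_10 - u_o)

Tsujino's suggestion amounts to forcing with u_10 + <u_o> (the climatological mean current added back), since the scatterometer-derived winds are already relative to the real ocean's mean currents.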
MC: Do you recall what the EAC numbers were in the model compared to obs? I thought we had 20 Sv, which is similar to OFAM/BRAN. AK: Obs 18.7; 17.5 and 17.2 Sv to about 2000m in the models. 22.1 ± 7.5 Sv from a mooring. Florida current is 30% too low. Well observed.
NH: What is big challenge in future? AK: Not sure how much to change before next IAF. Will put out a call for diagnostics. Also explain config and see if people have an issue with that.
AH: Is it right that MOM5 doesn't fully support TEOS-10? RF: Not obvious to the user that they can use TEOS-10. Kind of fudged. The proper way is to carry an extra tracer: have preformed salinity and an adjustment factor to create absolute salinity. Another way is to have absolute salinity as a single variable with an adjustment factor of zero. To use full TEOS-10 in MOM5, need 2 tracers. If you do it the same as the rest of the world you would have a zero tracer. Don't want a wasteful tracer.
RF: There is a newer way to parameterise the equation of state. Needs updating. AH: New module? RF: Yes, just a switch.

FAFMIP errors in ACCESS-OM2

RF: Frazil not being redistributed. Needs fixing. AH: Affect other runs? RF: No just FAFMIP.

Technical Working Group Meeting, February 2019

Minutes

Date: 14th February, 2019
Attendees:

  • Marshall Ward (MW) (Chair) NCI
  • Aidan Heerdegen (AH) CLEX, Andrew Kiss (AK) COSIMA, ANU
  • Russ Fiedler (RF), Matt Chamberlain (MC) CSIRO Hobart
  • Peter Dobrohotoff (PD), CSIRO Aspendale

TWG Meta Stuff

AH will redo MOM5 governance doc for next meeting.
AH finding minutes a burden, MW suggested exploring other options.
MW: Will leave at the end of March. Will maybe try and attend, time permitting.
Even in anarchy someone has to send out the email.

CICE Meeting

MW: CICE meeting. AK going. Going to Hobart Ocean Workshop? MC & RF not registered, might drop in.
MW: Also a VC chat with Elizabeth Hunke. Who is going to attend? Just me? AK: Yes. MW: AH come too? Ben Evans asked Rui to come. Not sure about NH. Assume interested.
MW: What to ask her about? Agenda? What motivated it? AK: Just that Elizabeth is around and could chat. Any point me turning up a day before? More for talking with Petra, but Petra thought it might be useful. Could show her how we set stuff up, some results?
RF: Anyone from Aspendale coming down? PD: Not sure. MC: Simon Marsland and Siobhan on the attendance list.
MW: Might ask about using latest GitHub branch (cice6). If we were to use it what should we do? Incorporate changes from OM2 codebase? Others more interested in physics?
AK: Might be interested in scaling work. Hoping to put some in my talk. MW: Fine with me.
MW: Not done as much as Tony Craig (?) on load balancing.
Monday 18th @3pm with Elizabeth (2 hours)
AK: Valuable networking opportunity.
MW: Would be great for NH to come.
MW: Maybe AK give a run down of some of the runs, start from there.

MOM5 Pull Requests

MW: RF been busy
RF: Bug in one of those in the GM scheme. Was testing temperature in the wrong direction. Also something odd happens to temp rebinning at the bottom of a level compared to density. Missing value is zero. Interpolates the first non-zero temperature to below the bottom level. Because density in the rock is zero, can't get a bounding. Problem with the way the diagnostic was originally done.
RF: The transport-on-density calculation doesn't account for transport in the lower half of the bottom cell, but the temperature remapping does. MW: Haven't looked at the patch yet. Is this what Ryan Holmes was asking about? RF: This would speed up Ryan's remapping. His PR was different: trying to remap onto different levels. He sort of fudged the code. Took the code for remapping onto density levels and made something that spoofs it, pretending neutral density is temp or salt. Don't like what he's done. Probably works, but not totally sure, and my optimisations might break some of the things he does. AH: Your optimisations are field dependent? RF: Yes. Assume it is density, with the assumption that density increases as you get deeper. MW: He added a neutral density thing? RF: Trying to trick the code into something else.
RF: Can’t do it on more than one variable.
AH: Might be worth telling Ryan this might break his code.
RF: I thought he had put the commit in there. AH: No, he deleted the PR. He doesn't have commit rights.
MW: Has a hard coded neutral density point that he has defined.
AH: RF still thought worthwhile? RF: Yeah, have a general thing, remap to level? A lot of code would be copy/paste. Could be a lot of work. AH: classes of rebinning?
MW: Not sure I understand exactly what RF’s commit does. Not sure I can add value.
RF: Just a lot faster.
AH: How did you pick up the error? RF: Was worried about it. Hadn't checked rebinning to temperature. Wasn't sure I had accounted for the reverse in signs in transport_beta_gm, in the neutral physics utilities module. Checking for maximum and minimum temperatures on the wrong levels. Hadn't tested that diagnostic. Missed temperature. When tested, it failed. Doesn't alter results of the simulation, the diagnostic was slightly wrong. Other things were bit repro, all checksums were identical.
AH: So when code changes are made to diagnostics, make sure those diagnostics are checked. Make sure we paste in pics of diagnostics. Make sure to use double precision in `diag_table`.
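For reference, the precision is the packing value at the end of each diag_table field line: 1 requests double precision, 2 single (fragment illustrative):

    "ocean_model", "temp", "temp", "ocean_daily", "all", "mean", "none", 1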

MOM5 Governance

Last month agreed to tackle PRs. MW: Paul never answered. Other didn't answer. AH: AK didn't answer!?
MW: A lot of weird hard constants in FMS. Data structures are weird.
MW: Other PRs where we got no answer? Ask for an update when there has been no interaction within a month? Have some policy? Paul's looks more valuable. The other one is more FMS. Could phone him.
General approach for non-responding PRs: Get in contact again. Warn it will be closed. Close and say they can reopen.
MW: Sometimes get good ideas with poor implementation, accepted and completely reimplemented. RF: Short one best to redo a different way, and reject the FMS stuff. Contact Paul and get it done?
MW: No answer after prolonged time, incorporate good ideas in a different branch.
AH: Why coding now? RF: Had these ideas for ages, but noticed low hanging fruit. Remapping and submeso scale. Knew we could make significant time savings. Knew about these ages ago. Similar with tidal mixing. AH: Uses MOM timings? RF: It was slow, and I looked at it and wondered about the looping. MC: With changes, what improvements? RF: 20-30% in each module. I run short cases, so data writing might dominate a bit. Will depend on the size of the model. Time spent on each tile is proportional to the mixed layer. MW: Shallow levels will be a big improvement? RF: Yes. MW: Not iterating where there aren't values? RF: Yes. Two types of tests: check if the entire tile can be stopped, other times if a latitude can be stopped. RF: A test of a 1200 cpu job on the OFAM grid took 30% off those routines.
AH: submeso is 10% of total ocean runtime.
RF: Starting a big run, good time to get it in.
MW: Sometimes said MOM was well balanced. Aggressively masks everything.
RF: Imbalance comes through the parameterisation code: KPP, tidal mixing. Found another weird thing in the barotropic routines. Takes a lot of time. eta and pbot diagnose. No reason to diagnose the pressure at bottom on a u cell, except if you're writing the diagnostic. AH: Standard for the code to check if a diagnostic is used before calculating? RF: Required for the restart file. Check at restart stage and write it out at that time. AH: Don't restarts have to be in field_table? RF: No.
AH: If they don’t affect science can add to 0.1 at any time.
Ocean eta and pbot diagnose 10% of runtime.
AH: Should we prioritise any changes? RF: Just the ones I have put in. Others not so much. I'll fix up the PR. Just got it compiled and testing.

netCDF Parallel MPI IO

MW: Parallel IO stuff looking good and nearly done. Getting parallel IO without collation. Even restarts. A few masked cases where things look odd with completely missing values.
MW: Fill value versus zero over land? If I do it, mppnccombine intelligently turns zeros over land into missing values.
RF: When MOM sends diagnostics sends a mask with the call.
MW: Should land be zero or fill value? RF: should be fill. MC: What about restarts? RF: Used to have zero and then changed. Turned up in the density restarts.
AH: Performance?
MW: As fast as the number of disks. Can be subtle to configure. Have to balance io_layout with ncpus per node. Negligible with 0.25 deg. Write speeds at about the speed of Lustre (half speed x number of disks).
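The balance MW describes is set in the MOM namelist; each dimension of layout must be a multiple of the corresponding io_layout dimension so every IO rank collects from a whole number of compute PEs (numbers illustrative):

    &ocean_model_nml
      layout    = 32, 15   ! 480 compute PEs
      io_layout = 8, 5     ! 40 writer PEs -> 40 files per field to collate
    /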
PD: Fan of missing_value stuff. Parallel IO work from Dale.
MW: Rui will know about timing variance. Worried GFDL will find it slow and reject. Rui looked into compressed parallel IO. Interesting results. Reasonably fast: it's half the speed of non-compressed. What is the serial (offline) compression time? No idea. AK: Is that speed in MB/s, or twice as slow for the total data file? MW: Twice as slow for the entire dataset.
MW: Currently uncompressed. Can then compress. RF: Needs to work for regional output. MW: Do it at the FMS level. AH: Should test for regional output. RF: Regional output is done by geographic coordinates rather than index. If by index it would make it easier. MW: If you can get that for a test.

Actions

New:

  • Amend MOM5 governance doc (AH)
  • Feedback to RF PRs (MW+AH)
  • Check back on Paul’s PR (MW)

Existing:

  • Shared google doc on reproducibility strategy (AH)
  • Pull request for WOMBAT changes into MOM5 repo (MC, MW)
  • After FMS moved to submodule, incorporate MPI-IO changes into FMS (MW)
  • Incorporate WOMBAT into CM2.5 decadal prediction codebase and publish to Github (RF)
  • Move FMS to submodule of MOM5 github repo (MW)
  • Make a proper plan for model release — discuss at COSIMA meeting. Ask students/researchers what they need to get started with a model (MW and TWG)
  • Blog post around issues with high core count jobs and mxm mtl (NH)
  • Look into OpenDAP/THREDDS for use with MOM on raijin (AH, NH)
  • Add RF ocean bathymetry code to OceansAus repo (RF)
  • Add MPI barrier before ice halo updates timer to check if slow timing issues are just ice load imbalances that appear as longer times due to synchronisation (NH).
  • CICE and MATM need to output namelists for metadata crawling (AK)
  • Provide 1 deg RYF ACCESS-OM-1.0 config to MC (AK)

Technical Working Group Meeting, December 2018

Minutes

Date: 11th December 2018
Attendees:

  • Marshall Ward (MW) (Chair) NCI
  • Aidan Heerdegen (AH) and Andy M Hogg (AMH) CLEX, Andrew Kiss (AK) COSIMA, ANU
  • Russ Fiedler (RF), Matt Chamberlain (MC) CSIRO Hobart
  • Nic Hannah (NH) Double Precision

COSIMA Models

Profiling

MW: Been profiling CICE, score-p profiling doesn't work. Been timing by time step. Anomalously long time spent at step 72. AH: Could it be the atmosphere being updated? JRA55 is 3-hourly. Not sure of the timestep. MW: Seem to have lost my logs. Not sure best way to handle it.

CM2 Harmonisation update

AH: Peter has been testing the release candidate. Russ supplied a diag_table which just outputs fields for the first 2 time steps, which is really good for seeing code issues. Russ found some bugs introduced by me: a couple of logic errors with preprocessor flags and omission of a couple of lines that got lost in translation. Confident the latest update has squashed all the bugs. MW: Not old bugs? AH: Did find some old issues. Russ found a stuffed iceberg file. RF: Not related, but is something they were using for CMIP6. AH: Did find some old bugs; had to emulate the lack of reproducibility from the Red Sea salinity fix timing bug to be able to closely reproduce CM2 output. Put a flag in to do the wrong thing so as to do the same as theirs, will remove before merging. MW: I thought the Red Sea fix had been changed to be faster but not reproducible. RF: That's right, but not this issue. This has to do with timing. Aidan fixed it, but not compatible with what they are using. AH: Just need something that reproduces CM2 output.

Narrator: The new way of doing salt fix will reproduce over time steps, but is not bit reproducible with the old algorithm. Don’t see that effect in these tests.

AH: Peter has a test suite which is old CM2, and a copy which uses updated MOM. He compiles the new code manually and runs the two suites side by side. Both use Russ' diag_table. Just find out which fields don't match. Most are the same, a few different, and those seem to be affected by the same issue. Once we're good for a few time steps then maybe look at them after a few months. RF: Once chaos starts, hard to say. As long as nothing gross is happening. Unless there is something further on with coupling. AH: Yes, look after a month and check it looks close. MW: Not trying to be bit reproducible? AH: Just want to fix my bugs. RF: Make sure you're getting the same forcing fields. Can see out in the open ocean hardly any change. Just noise. This means we're close. Saw the outline of where the forcing field is supposed to be. The bug in the forcing field data showed up, which indicated the issue. AH: Once we've confirmed it is fixed, will merge the PR and then move on to ESM.

MW: Will the CM2 code remain in step with the MOM5 code? RF: CSIRO Aspendale not doing much code development at the moment. AH: Peter is pulling directly from his GitHub repo, but once it is harmonised they will pull directly from the MOM5 repo. They will want to have a tag and pull from the tag. RF: Yes, they will want frozen versions. AH: Should have some automated testing; if we find a bug, should be able to update the CM2 code and confirm it doesn't change important answers.

AH: Short answer: Lots of progress. I made lots of bugs and Russ found them. Thanks Russ. NH: Yes thanks Russ.

Model reproducibility and payu bug

NH: Working on documentation, wiki, tech report and model paper. Like to do more. Wiki doc easier as a brain dump. Made sure ACCESS-OM2 Jenkins tests are passing. Takes time, something always seems to go wrong. Six tests passing and useful. Repro test working and now reproducing across restarts. Wasn't working due to 1. payu bug, 2. red sea fix and 3. compiling with repro.
NH: Doing 2 runs with and without that payu bug at 1 and 0.25 degree. Doing 4 years as individual 1 year submits. Make sure the bug is not too serious. The way the coupling field restarts are done is not good. Ocean has to write out a restart for CICE (o2i.nc). Copying of this restart file went missing; we had it in the past. The refactor with libaccessom2 and change of payu model driver didn't carry this over. Means the forcing fields that the ice model gets for the first coupling step at the beginning of a new submit are from the beginning of the run, not the previous run. The ice model is getting the wrong forcing for the first 3 hours.
MW: Has it been fixed? All runs affected? AK: Yes, fixed now. Scope which runs affected. Only since YATM? NH: Yes. If your run uses YATM it will have this problem. Around the time the bug was introduced, restructured how config.yaml was organised. Created the libaccessom2 driver, and the bug came in at that point. MW: Used to have an oasis driver that did that. NH: Restart repro test existed but was failing for other reasons, not being kept up to date. If that test had been passing and then started failing, it would have been noticed. Doing a post mortem to see if there is anything significant on a 5 year run. Gut feeling, just in the ice. RF: Will just be the SST that it sees. If running a month at a time, significant. Yearly not so important. Also depends what was in the initial coupling field. NH: Initial field correct, probably January. RF: Didn't get updated for changes to landmasks? NH: Land has been eliminated so not necessary. NH: For any run which is a multiple of 1 year, the problem is smaller. AH: Quarter and 1 degree aren't that affected, tenth most affected. NH: Could do 1 month 1 degree runs. AH: Good idea. Don't forget about the runspersub option, could do 50 in a single submit. MW: payu restart flag now works as well. Could be useful for testing reproducibility. NH: This could be a problem in other cases as well. An existing restart is based on a specific time. May be correct for the specific model it was created for. RF: Should be matched to the initial condition, with correct fields. MW: This is a cold start? NH: Needs to be created each time based on the start time of your forcing. AH: Write code into the model to read in the IC and write it back out to the coupling fields? NH: Something like that might be good.
AK: Bunch of fields: SST, SSS, SS velocity, SS slope, frazil ice formation energy. RF: SST and SSS only ones not zero in a cold start. AK: Replace by initial condition for the entire experiment? NH: There is a single file in the ACCESS-OM2 input directory that all experiments use. NH: Could diff that against what it should have been. MC: That is the cold start bug, not so important. Warm start bug fixed? NH: Yes, fixed in latest version, 0.11.2. AK: People aren't using that? MW: No, because it was broken. Now fixed. AH: Arguably should delete payu versions with the known warm start bug. Or back-port the fix? MW: Don't have a framework to back-port fixes. AH: How many versions affected? NH: Put a warning message/assert in that stops and doesn't let it load. MW: Happy to delete old versions. Some people use specific payu versions. Easy to put warnings in module files. Can also delete old ones. Not a huge problem.
AH: Figure out which payu versions are affected. Make a decision based on that. MW: Only those with libaccessom2. AH: Don't delete straight away. Turn off modules first. See if there are people affected. AK: Could be people not using ACCESS-OM2. AH: Yes, but they can use new versions. Need to make sure people are not using buggy code. AK: Possibly move to new space. AH: Yes, but might not be necessary. MW: May be impossible to back-port fixes. Driver might not be functional. No problem doing back-ports, not sure how.
AH/MW: Might not need to back port, should:
  1. Confirm payu/0.11.2 working correctly
  2. Set as default version
  3. Determine which payu versions affected
  4. Turn off affected modules in modulefile and issue message about bug, what module to load and to email climate_help if users still have issues
  5. When complain assess individual cases
  6. If necessary move payu module to non-app path
  7. Delete old versions?
2 week time frame.
MW: People shouldn’t be encouraged not to specify module versions.
MW: Make sure 0.11.2 working correctly. Works for NH and AH. AK a good test for it as running. AK: Not running at the moment. Can we use old mppnccombine with payu/0.11.2? AH: Yes. MW: Use whichever you want. AH: Works better for 1 deg in any case.
MW: Added a restart directory feature: run 0 uses the restart and resets counters back to zero. AK: Had been copying stuff. MW: I've been symlinking and other hideous things. AK: Documents what you did better. AH: Used to have problems with drivers trying to delete symlinks when cleaning up restart directories.
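The two payu options mentioned above look like this in config.yaml (values and path illustrative):

    runspersub: 50                        # chain up to 50 runs in one PBS submission
    restart: /path/to/spinup/restart365   # warm-start run 0 from this restart dir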
AH: Will finish manifest this week. Chatted with Marshall and reimplementing it a bit differently. Will make NH’s job a lot easier. Run config has all the files, just need to clone and run. NH: awesome.
NH: Want any post-mortem or checking on the tenth model for the payu bug? Could do some short 1 month runs. AK: Not sure what we would do with the information. Diagnosis without treatment. Interesting from an academic viewpoint. Planning to do a longer re-run with other changes and it will be fixed in that. Interesting to see a couple of months and see the scope of the issue. Is it negligible? Maybe tell people. AH: Choose a worst case: southern summer? NH: Ok, might do that.

OpenMPI

MW: Been using OpenMPI/3.0.3. Working well. Speeds same as 1.10. Uses ucx by default. Turn off all flags, except error aggregate if you want. Can try 3.1.3, had some issues. Likely the version on the next machine.
AH: Test on Jenkins with new OpenMPI? MW: Good idea
MW confirmed that using hyperthreading option in payu is harmless (might even be on by default).

COSIMA Models

Bathymetry

RF: Wanted to get rid of Ob river? 1150 looks good. Need an inlet to keep runoff in correct place. See GitHub issue. Plot shows 0.25 degree cell size is cut off.
AMH: Need to get rid of the Ob. Russ' plot at 1150m looks good, maybe smooth out corners. RF: Have to look at index space, straight edges, no inlet, things like that. Depth is minimum depth, 10m, a lot more shallow in actuality. AK: Only real reason to keep it is to have the runoff in the right place. Had to smooth to stop the model crashing. Main reason to keep is to make sure runoff is mapped correctly. AMH: Where is runoff coming from? Take it too far up and it might get remapped to the wrong embayment. Why I like the minimal change. It is stable. AK: Yes, since Russ' fix that stops salinity dropping below zero with ice formation. AH: If your map had water at depth zero, as opposed to land, then you can follow the water along until it is > 0. Say this is water, use it for remapping but not for the model. AK: Need a separate file? AH: Not necessarily. Remapping uses its own logic anyway. AK: Remapping takes no account of topography. NH: Could make the distance function smarter, use a directional weight, something like AH suggested, or take into account topography. AK: Go downslope.
RF: Other problem was Southampton Island. Just taking out the inlet was sufficient. AMH: Keep the island separated from the mainland? RF: Yes. Hasn't been causing problems? AK: No. AMH: Will leave cells smaller than 1150m. AK: Yes, but not too bad. Also an abrupt change in spacing. RF: Yes, the tripolar grid has a discontinuity. AH: Cut off at 1150m, what was it before? AK: 880m. All crashes I had with the ice remap error were at less than 1100m. Those can be eliminated by closing channels. AMH: Worried about Southampton. AK: Never had issues there. Will be getting new constraints. Had to put damping on Kara Strait, and had issues with a seamount off the tip of Severny. AMH: Ok, keep it at 1150m and see.
AK: In quarter degree Baffin Island is attached to Canadian mainland. Tenth has much more open water. A lot of it extremely shallow (less than 100m), so unlikely important for sea water transport, but likely important for ice transport. AMH: And therefore fresh water transport. AH: Who will do this? RF: Planning to do it today or tomorrow. AMH: Awesome, thanks.

Profiling

AMH: Getting different numbers between IAF and RYF due to AK needing more ice time steps in the IAF case. He can't run with ndtd=2, so load imbalanced towards CICE. ndtd=2 with minimal. AK: Time difference is due to the value of ndtd. Ruth still getting bad departure points with minimal. She reduces the ocean time step for a single submit; I changed ndtd instead. AMH: This has caused a load imbalance. Not the same as the optimisation that NH targeted. NH used ndtd=2 in optimisation. AK using 50% more time.
MW: What optimisation? AMH: When NH was looking at load balancing. AK using 50% more time steps, and taking 50% more time.
NH: Now have a rebalanced tenth minimal with ndtd=3. With the bathymetry changes might not need it. AH: Hold off on that until AK can tell if we need it. AK: May still on occasion need to reduce time step every 5 or 10 years, preferable to ndtd=3. IAF variability means can’t guarantee it will work with every year.
MW: OASIS timing issue. Struggling to define main loop time. Looking at 1 deg, outputting time of every time step. Not literally useful due to overhead. AH: Give you scaling? MW: Not sure.
MW: Timing between 170-200ms per step. Step 32 gets a big number: 36s in one, 72s in the other. Is it just waiting? Doing IO? Maybe some sort of OASIS thing happening to bootstrap. Get infrequent huge time steps. Run again and don't get them. Going to remove the largest timestep. Anyone know what is causing this?
NH: What are you profiling? MW: Just the coupling step. Reporting the coupling code.
MW: Does it do a lot of IO on that first coupling step? NH: Yes it does on the first step. What about CICE diagnostics? Are they printing to ice_diag.d? Should be consistent. See if it goes away?
RF: CICE does IO through one PE, so does a global collective. MW: Could be IO and MPI collective issues. Not sure if this is legitimate timing or not?
NH: Not sure what the bigger picture is, but I find targeting specific routines useful for looking at load imbalance. NH: Definitely look into CICE diagnostics.
MW: Timing so inconsistent. AH: Run a bunch and use the minimum. Turn off all diagnostics. AH: For the paper MOM scales well. Need to say something about CICE scaling. Doesn't need to be the final word. MOM gives some leeway and these are the best configurations …
NH: Happy to help. Can do more fine grained stuff. Do some counting. MW: Like score-p, but it dies with CICE.

Grid scale noise

RF: Chris Chapman problem with submesoscale stuff (see issue). There is a smoothing feature in submeso but it is documented as not reproducing. Think I found a bug. It does smoothing of the mixed layer; possible to put the mixed layer into rock with smoothing, and there doesn't seem to be any check. Might get some others to look at it. If they agree we might be able to fix it and reduce the checkerboard. AK: This in MOM6? Also in MOM5? RF: There is a namelist parameter that says not to use it because it's not repro, but really it's because it's buggy. No reason it shouldn't reproduce.
MW: Is this filtering a numerical mode? AK: KPP purely numerical, so adjacent columns can decouple. RF: Will point out code and see if people agree. AK: Get fixed and could be good to put in for next tenth degree run.
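The kind of guard being discussed might look like the fragment below (hypothetical names, not the actual MOM5 submeso code): clip the smoothed mixed layer depth to the local water depth so smoothing can never put the mixed layer into rock.

    ! 5-point smoother on mixed layer depth hmxl, clipped to the water depth
    do j = jsc, jec
      do i = isc, iec
        if (tmask(i,j) > 0.0) then
          hmxl_s(i,j) = 0.5*hmxl(i,j) + 0.125*( hmxl(i-1,j) + hmxl(i+1,j)  &
                                              + hmxl(i,j-1) + hmxl(i,j+1) )
          hmxl_s(i,j) = min(hmxl_s(i,j), depth(i,j))   ! the missing check
        end if
      end do
    end do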