Ekman Streamfunction Paper Submitted

The global ocean overturning circulation is the planetary-scale movement of waters in the vertical and north-south directions. It is the principal mechanism by which the oceans absorb, store and redistribute heat and carbon from the atmosphere, thereby regulating Earth’s climate. Despite its importance, it is impossible to observe directly, and must be inferred from sparse and infrequent proxy measurements. The main upward branches of the overturning circulation are located in the Southern Ocean, where strong westerly winds upwell waters from below. Thus, changes in these westerly winds will lead to changes in the overturning circulation, and, subsequently, Earth’s climate.

In a recently submitted paper, we introduce a new tool that we call the Ekman streamfunction to analyse changes in the winds in a framework that is directly comparable with the overturning circulation. We test the Ekman streamfunction with model output from ACCESS-OM2-01, in which the overturning circulation can be measured directly. We find that, throughout much of the Southern Ocean, the Ekman streamfunction provides a robust indicator of the strength and variability of the overturning circulation, with exceptionally high correlation. Our new tool provides a novel approach for reexamining existing datasets of satellite-measured winds to infer recent changes in the overturning circulation.

“The Ekman Streamfunction: a wind-derived metric to quantify the Southern Ocean overturning circulation”; Stewart, Hogg, England, Waugh & Kiss, Submitted to Geophysical Research Letters.

Unreviewed submission available here: https://www.essoar.org/doi/abs/10.1002/essoar.10506547.1

Technical Working Group Meeting, December 2020

Minutes

Date: 9th December, 2020
Attendees:
  • Aidan Heerdegen (AH) CLEX ANU
  • Andrew Kiss (AK) COSIMA ANU
  • Angus Gibson (AG) RSES ANU
  • Russ Fiedler (RF) CSIRO Hobart
  • Rui Yang (RY) NCI
  • Nic Hannah (NH) Double Precision
  • Peter Dobrohotoff (PD) CSIRO Aspendale

Testing with spack

NH: Testing spack. On a lightly supported cluster. Installed WRF and all dependencies with 2 commands. Only system dependencies were the compiler and libc. Automatically detects compilers. Can give hints to find others. Tell it which compiler to use for the build. Can use system modules via configuration files. AH: Directly supported modules based on Lmod. Talked to some of the NCI guys about Lmod, as the raijin version of modules was so out of date. The C version of modules has been updated, so they installed that on gadi. Lmod has some nice features, like modules based on compiler toolchain. Avoids the problem with Intel/GNU subdirectories that exists on gadi. NCI said they were hoping to support spack by setting up these configs so users could build things with spack. Didn’t happen, but it would have been a very nice way to operate and would have helped us.
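For illustration only, the kind of two-command bootstrap NH describes might look like the sketch below (wrapped in Python for consistency with the other examples); the package name (wrf) and the extra compiler-detection step are assumptions, not NH’s exact commands.

```python
import subprocess

# The two "commands" are assumed to be: clone spack, then ask it to build WRF
# plus its full dependency tree. Compiler detection is a one-off extra step.
cmds = """
git clone --depth=1 https://github.com/spack/spack.git
. spack/share/spack/setup-env.sh
spack compiler find        # auto-detect system compilers
spack install wrf          # build WRF and all of its dependencies
"""

subprocess.run(["bash", "-ec", cmds], check=True)
```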

NH: Primary use case is an under-supported system where you can’t trust anything to work. Just want to get stuff working. Couldn’t find an MPI install using the latest/correct compiler. gadi is well maintained. See spack as a portability tool. Containment is great.

AH: Was particularly interested in concretisation: the ID of a build allows reproducibility of the build and identification of all components.

NH: Rely on MPI configured for the system. Not going to have our own MPI version. AH: Yes. Would be nice if someone like Dale made configs so we could use spack. Everything they think is important to control and configure they can do so. Probably not happy with people building their own SSL libraries. Thought it would improve NCI’s own processes around building software. Dale said he found the system a bit fragile, too easy to break. When building for a large number of users they weren’t happy with that. Thought it was a great idea for NCI, to specify builds, and also easy to create libraries for all compiler toolchains programmatically.

AG: Haven’t tried recently.

Parallel compression of netCDF in MOM5

[Attachment: parallel_compression_mom5]

RY: Continuation of previous PIO work, including compression as now supported by netCDF. Used the FMS IO benchmark test_mpp_io to tune parameters. 174 GB -> 74 GB with level 4 deflation. Tested two PE numbers. Tested two schemes, ROMIO and OMPIO. HDF5 v1.10.x gave lots of errors; v1.12.x is much better. Only had to change deflate_level in the mpp_io_nml namelist, no source code changes.
RY: Best settings: for 720 PEs, layout (48,15) with best IO layout (24,15); for 1440 PEs, layout (48,30) with best IO layout (12,30). For non-compressed output, match the chunk size to the IO layout. With compression turned on, the best times come from keeping x contiguous. Memory access dominates, so keep the layout contiguous along the x-axis.

RY: Stripe count affects non-compressed more than compressed. PIO doesn’t work perfectly with Lustre, fails with very large stripe count. With large file sizes (2TB) can be faster to write compressed IO due to less IO time.

  • Large measurement variability in IO-intensive benchmarks, as they are affected by other IO activity. Difficult to get a stable benchmark.
  • Use HDF5 1.12.x, much more stable.
  • Use OMPIO for non-compressed PIO
  • Similar performance between OMPIO and ROMIO for compressed output

RY: Early stage of work. Many compression libraries are available; here only zlib was used. Other libraries will lead to smaller sizes and faster compression times. They can be used as external HDF5 filters, but a file created this way requires the filter to be compiled into the library that reads it.
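The tuning RY describes is done through FMS’s mpp_io_nml namelist rather than in user code, but the same deflate-level and chunk-shape trade-off can be sketched with the netCDF4-python API; the variable name, grid sizes and chunk shape below are illustrative assumptions, not the ACCESS-OM2 configuration.

```python
import numpy as np
from netCDF4 import Dataset

nc = Dataset("compressed_example.nc", "w")
nc.createDimension("time", None)
nc.createDimension("z", 75)
nc.createDimension("y", 300)
nc.createDimension("x", 400)

# zlib deflate level 4, with chunks kept contiguous along x (as in the tuning
# discussion above); all sizes here are made-up illustrative values.
temp = nc.createVariable(
    "temp", "f4", ("time", "z", "y", "x"),
    zlib=True, complevel=4,
    chunksizes=(1, 1, 300, 400),
)
temp[0, :, :, :] = np.random.rand(75, 300, 400).astype("f4")
nc.close()
```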

NH: How big is measurement variability? RY: Can be very different, took shortest one. Sometimes double. TEST_MPP_IO is much more stable. Real case much less so.

NH: Experiencing similar variability with ACCESS model with CICE IO. Anything we can do? Buffering? RY: Can increase IO data size and see what happens. Thinking it is the lustre file system. More stripe counts touch more lustre servers. There is a limit to performance as you increase the stripe count, as you start to get noise from the system. NH: What are the defaults, and how do you set the stripe count? RY: Default is 1, which is terrible. Can set using MPI hints, or use lfs setstripe on a directory. Any file created in that directory will use that stripe count. OMPIO and ROMIO have different flags for setting hints. The stripe setting is persistent between reboots. Use lfs getstripe to check. AH: Needs to be set to an appropriate value for all files written to that directory.
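For reference, a sketch of setting and checking the stripe count on an output directory with the lfs tool RY mentions (run from Python here only for consistency with the other examples; the count of 16 and the directory name are arbitrary assumptions):

```python
import subprocess

outdir = "model_output"   # hypothetical directory on a Lustre filesystem
subprocess.run(["mkdir", "-p", outdir], check=True)

# Files subsequently created in outdir inherit this stripe count (default is 1).
subprocess.run(["lfs", "setstripe", "-c", "16", outdir], check=True)

# Verify the striping that new files will get.
subprocess.run(["lfs", "getstripe", outdir], check=True)
```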

NH: Did you change MPI IO aggregators or aggregator buffer size? RY: Yes. Buffer size doesn’t matter too much. Aggregator does matter. Previous work was based on raijin with 16 cores per node. Now have 48 cores, so that experience doesn’t apply to gadi. Aggregator default is 1 per node for ROMIO. Increasing aggregators doesn’t change too much, doesn’t matter for gadi. OMPIO can change aggregators, doesn’t change too much.

NH: Why deflate level 4? Tried any others? RY: 4 is default. 1 and 4 doesn’t change too much. Time doesn’t change too much either. Don’t use 5 or 6 unless good reason as big increase in compression time. 4 is good balance between performance and compression ratio.

NH: Using HDF5 v1.12.x. With the previous version of HDF5, any performance differences? RY: No performance difference. More features. Just more stable with lustre. Using a single stripe count both work; as soon as you increase the stripe count v1.10 crashes. Single stripe count performance is bad. Built my own v1.12 and didn’t have the problem.

NH: Will look into using this for CICE5. AH: Won’t work with system HDF5 library?

AH: Special options for building HDF5 v1.12? RY: Only if you need to keep compatibility with v1.10. Didn’t have any issues myself, but apparently files are not always readable without adding this flag. Very new version of the library.

AH: Will this be installed centrally? RY: Send a request to NCI help. Best for request to come from users.

AH: Worried about the chunk shapes in the file. Best performance is with chunks contiguous in one dimension, which could lead to slow read access along the other dimensions. RY: If chunks are too small the number of metadata operations blows out. Very large chunks use more memory and parallel compression is not so efficient. So need the best chunk layout. AH: Almost need a mask on the optimisation heat map to optimise performance within a useable chunk size regime. RY: Haven’t done this. Parallel decompression is not new, but do need to think about the balance between IO and memory operations.

RF: Chunk size 50 in vertical will make it very slow for 2D horizontal slices. A global map would require reading in the entire dataset. RY: For write not an issue, for read yes a big issue. If include z-direction in chunk layout optimisation would mean a large increase in parameter space.

AH: Optimisation based on performance from simpler benchmark. Numbers didn’t correlate that well with more complex benchmark due to being a much larger file. Would running the benchmark with a larger file change the layouts used for the real world test? RY: Always true that chunk size along x should be contiguous. Probably y chunk size would change with real world example. Trends are the same. Default chunk layout slices all 3 axes. Best performance is always better than default chunk layout.
AH: Larger core counts now around 10K cores. RY: Have to select the correct io_layout. Restricts the number of PEs. AH: This is an order of magnitude larger. RY: The filesystem has a limited number of IO servers. This sets the maximum number of IO PEs. Should always keep the number of IO PEs less than this.

ACCESS-OM2-01 runs

NH: AK has been running 0.1 and seeing a lot of variation in run time due to IO performance in CICE. More than half the submits are more than 100% worse than the best ones. Is this system variability we can’t do much about? Also all workers are also doing IO. Don’t have async IO, don’t have an IO server. Looking at this with PIO. Have no parallelism in IO so any system problem affects our whole model pipeline. RY: Yes, an IO server will mean you can send IO and continue calculations. Dedicated PE for IO. UM has an IO server. NH: Ok, maybe go down this path. AH: Code changes in CICE? NH: Exists in the PIO library. Doesn’t exist in the Fortran API for the version we’re using. Does exist for C code. On their roadmap for the next release. A simple change to the INIT call to use IO servers for asynchronous IO. Currently uses a stride to tell it how many IO servers per compute PE. AH: Are CICE PEs aligned with nodes? Talked about shifting yatm, any issues with CICE IO PEs sharing nodes with MOM? NH: Fastest option is every CPU doing its own IO. Using stride > 1 doesn’t improve IO time. RY: IO accesses a single server, doesn’t have to jump to a different file system server. There is some overhead when touching multiple file system servers, when using striping for example.

AH: Run time instability too large? AK: Variable but satisfactory. High core count for a week. 2 hours for 3 months. AH: Still 3 month submits? AK: Still need to sometimes drop time step. 200KSU/year. Was 190KSU/year, but also turned off 3D daily tracer output. AH: More SUs, not better throughput? AK: Was hitting walltime limits with 3D daily tracer output. Possibly would work to run 3 months/submit with lower core count without daily tracers.

AK: Queue time is negligible. 3 model years/day. Over double previous throughput. Variability of walltime is not too high: 1.9-2.1 hours for 3 months, i.e. around 10% variability.

AH: Any more crashes? Previously said 10-15% runs would error but could be resubmitted. AK: Bad node. Ran without a hitch over weekend. NH: x77 scratch still an issue? AK: Not sure. AH: Had issues, thought they were fixed, but still affected x77 and some other projects. Maybe some lustre issues? AK: Did claim it was fixed a number of times, but wasn’t.

 

Tripole seam issue in CICE

AH: Across the tripole seam one of the velocity fields wasn’t in the right direction, which caused weird flow. AK: Not a crash issue. Just shouldn’t happen; occurs occasionally. The velocity field isn’t affected; it is seen in some derived terms, or coupling terms. Do sometimes get excess shear along that line. RF: There are some inconsistencies with how some fields are being treated. Should come out ok. Heat fluxes slightly off, using wrong winds. They should be interpolated. What gets sent back to MOM is ok, aligned in the right spot. No anti-symmetry being broken. AK: Also true for CICE? RF: Yeah, winds are being done on u cells correctly. Don’t think CICE sees that. AH: If everything is ok, why does it occur? RF: Some other term not being done correctly, either in CICE or MOM. Coupling looks ok. Some other term not being calculated correctly.

AH: How much has our version of CICE changed from the version CSIRO used for ACCESS-ESM-1.5? NH: Our CICE repo has the full git history, which includes the svn history. Either in the git history or in a file somewhere. Should be able to track everything. Can also do a diff. I don’t know what they’ve done, so can’t comment. Have added tons of stuff for libaccessom2. Have back-ported bug fixes they don’t have. We have the newest version of CICE5, up to when development stopped, which includes bug fixes, as well as CICE6 back-ports. AH: Can see you have started on top of Hailin’s changes. NH: They have an older version of CICE5, we have a newer version which includes some bug fixes which affect those older versions.

RF: Also auscom driver vs access driver. Used to be quite similar, but ours has diverged a lot with NH’s work on libaccessom2. We do a lot smarter things with coupling, with the orange peel segment thing. There is an apple and an orange; we use the orange. NH: The only CICE layout they use is slender. They don’t use special OASIS magic to support that. Definitely improves things a lot at quarter degree. Our quarter degree performance is a lot better because of our layout. AH: They also have a 1 degree UM, so broadly similar to a quarter degree ocean. NH: Will make a difference to efficiency. AH: Efficiency is probably a second order concern, just get it running initially.

 

Improve init and termination time

AH: Congratulations on the work to improve init and termination time. RF: Mostly NH’s work, I have just timed it. NH: PIO? RF: Mostly down to reading in restart fields on each processor. Knocked off a lot of time, a minute or so. PIO also helped out a lot. Pavel is doing a lot of IO with CICE. Timed the work of doing all netCDF definitions first and then the writing: takes 14s, including the gather on to a single node and writing the restart file. The i2o.nc could be done easily with PIO. Also implemented the same thing for MOM, haven’t submitted that. Takes 4s there. Gathering global fields is just bad; it causes crashes at the end of a run. There are two other files, cicemass and ustar, that do the same thing, but single file, single variable, so they don’t need special treatment.

RF: Setting an environment variable turns off UCX messages. Put it into payu? Saves thousands of lines in the output file.

COSIMA Linkage Project funded

The Australian Research Council (ARC) recently announced $1.1M of funding for a new 4-year COSIMA project. The new project is funded under the ARC’s Linkage Project scheme, and is supported by 4 industry partners: The Department of Defence, Bureau of Meteorology, Australian Antarctic Division and CSIRO. This funding will continue to support the Australian ocean and sea ice modelling community to develop and distribute open source model configurations.

The aims of the proposal are to:

  1. Configure, evaluate and publish the next-generation MOM6 ocean model and CICE6 sea ice model, culminating in a new, world-class Australian ocean-sea ice model: “ACCESS-OM3”;
  2. Advance Australian capacity to model the ocean’s biogeochemical cycles and surface waves, including the feedback between waves, sea ice, biogeochemistry and ocean circulation; and
  3. Build on the success of COSIMA to establish deep ties between Australia’s leading ocean-sea ice modelling institutions, while maintaining ACCESS-OM2 for ongoing research projects and operational products.

Work on the new project is expected to begin in 2021, initially focussing on the adoption of the MOM6 ocean model for regional applications. A schematic of the intended workflow can be found in the figure below.

As well as developing new model configurations, the new COSIMA project will have a stronger emphasis on developing tools for data analysis, data sharing and publication. The new project will start with a kick-off meeting in the first half of 2021 (details to be announced).

Data available: 0.1° 1958-2018 ACCESS-OM2 IAF runs (plus extension to 2023)

Announcement (updated 20 March 2023):

Over 180 TB of model output data from COSIMA’s ACCESS-OM2-01 0.1-degree global coupled ocean – sea ice model is now available for anyone to use (see conditions below). This consists of four consecutive 61-year (1958-2018) cycles, with the 4th cycle including BGC and extended to 2023. This is part of a suite of control experiments at different resolutions, listed here.

Data access

We recommend using the COSIMA Cookbook to access and analyse this data, which is all catalogued in the default cookbook database. A good place to start is the data explorer, which will give an overview of the data available in this experiment (and many others).
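As a minimal sketch of such a query (assuming the cosima_cookbook querying interface and the default database; the variable, frequency string and date range below are arbitrary choices):

```python
import cosima_cookbook as cc

# Connect to the default COSIMA Cookbook database of catalogued experiments.
session = cc.database.create_session()

# Lazily load monthly-mean sea level from the first 0.1° IAF cycle.
sea_level = cc.querying.getvar(
    "01deg_jra55v140_iaf", "sea_level", session,
    frequency="1 monthly",
    start_time="1990-01-01", end_time="1999-12-31",
)
print(sea_level)
```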

Alternatively, the data can be directly accessed at NCI, mostly from
/g/data/cj50/access-om2/raw-output/access-om2-01/01deg_jra55v140_iaf*
and with some (see details below) from
/g/data/ik11/outputs/access-om2-01/01deg_jra55v140_iaf_cycle3 and
/g/data/ik11/outputs/access-om2-01/01deg_jra55v140_iaf_cycle4_jra55v150_extension. You can find all the relevant ocean (but not sea ice) output files based on their names – e.g. this lists all the 3d daily-mean conservative temperature data in the first 0.1° IAF cycle: ls /g/data/cj50/access-om2/raw-output/access-om2-01/01deg_jra55v140_iaf/output*/ocean/*-3d-temp-1-daily-mean-*.nc; the filenames also tell you the ending date.
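If you prefer to bypass the Cookbook, the same filename pattern can be opened directly with xarray; this is a sketch only, and the chunking choice is an assumption.

```python
import xarray as xr

# Glob over all output directories of the first IAF cycle for daily 3D temperature.
pattern = ("/g/data/cj50/access-om2/raw-output/access-om2-01/"
           "01deg_jra55v140_iaf/output*/ocean/*-3d-temp-1-daily-mean-*.nc")

# open_mfdataset concatenates along time lazily (via dask), so nothing is read yet.
ds = xr.open_mfdataset(pattern, combine="by_coords", parallel=True, chunks={"time": 1})
print(ds["temp"])
```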

You will need to be a member of the cj50 and ik11 groups to access this data directly or via the cookbook – apply at https://my.nci.org.au/mancini/project-search if needed.

The cj50 subset of the data (149TB) can be downloaded from here for those not on NCI.

Overview of experiment

The first cycle (01deg_jra55v140_iaf) was run under interannually-varying JRA55-do v1.4.0 forcing from 1 Jan 1958 to 31 Dec 2018, starting from rest with World Ocean Atlas 2013 v2 climatological temperature and salinity. The run configuration history is in the 01deg_jra55v140_iaf branch in the 01deg_jra55_iaf repository. It is based on that used for Kiss et al. (2020) but has many improvements to the forcing, initial conditions, parameters and code which will be documented soon. Summary details of each submitted run are tabulated (and searchable) here.

The second cycle (01deg_jra55v140_iaf_cycle2) continues from the end of the first cycle, with an identical configuration except that its initial condition was the final ocean and sea ice state of the first cycle, and some differences in the output variables. The run configuration history is in the 01deg_jra55v140_iaf_cycle2 branch and summary details of each submitted run are here.

Similarly, the third cycle (01deg_jra55v140_iaf_cycle3) continues from the end of cycle 2, with different output variables. The run configuration history is in the 01deg_jra55v140_iaf_cycle3 branch and summary details of each submitted run are here.

The fourth cycle (01deg_jra55v140_iaf_cycle4) and its extension are the only runs to contain biogeochemistry; this is mainly in the ocean, but also coupled to sea ice algae and nutrient. Cycle 4 continues from the end of cycle 3, with different output variables. Oxygen was initialised at 1 Jan 1979, and the remaining BGC tracers were initialised at 1 Jan 1958. BGC tracers have no effect on the physical state, and oxygen has no effect on other BGC tracers. The run configuration history is in the 01deg_jra55v140_iaf_cycle4 branch and summary details of each submitted run are here.

01deg_jra55v140_iaf_cycle4_jra55v150_extension extends cycle 4 (including BGC) from 1 Jan 2019 to the end of 2023, forced by JRA55-do v1.5.0 (2019 only) and v1.5.0.1 (1 Jan 2020 onwards) instead of v1.4.0. Diagnostics are the same as the end of cycle 4. The run configuration history is in the 01deg_jra55v140_iaf_cycle4_jra55v150_extension branch and summary details of each submitted run are here.

Further details on these runs are given in
/g/data/cj50/access-om2/raw-output/access-om2-01/01deg_jra55v140_iaf*/metadata.yaml and
/g/data/ik11/outputs/access-om2-01/01deg_jra55v140_iaf_cycle4_jra55v150_extension/metadata.yaml.

There are many outputs available for the entirety of all cycles with additional outputs available only in particular cycles or years (see below for details).
MOM5 ocean model outputs are saved under self-explanatory filenames in
/g/data/cj50/access-om2/raw-output/access-om2-01/01deg_jra55v140_iaf*/output*/ocean/*.nc
and CICE5 sea ice model outputs are in
/g/data/cj50/access-om2/raw-output/access-om2-01/01deg_jra55v140_iaf*/output*/ice/OUTPUT/*.nc
(if there are too many files to list with ls, narrow it down by including the year, e.g. *2000*.nc)

Annual restarts (on 1 Jan each year) are also available at
/g/data/ik11/restarts/access-om2-01/01deg_jra55v140_iaf*/restart*
for anyone who may wish to re-run a segment with different diagnostics or branch off a perturbation experiment.

Conditions of use:
We request that users of this or other COSIMA model code or output data:

    1. consider citing Kiss et al. (2020) [doi.org/10.5194/gmd-13-401-2020]
    2. include an acknowledgement such as the following:
      The authors thank the Consortium for Ocean-Sea Ice Modelling in Australia (COSIMA; www.cosima.org.au), for making the ACCESS-OM2 suite of models available at github.com/COSIMA/access-om2. Model runs were undertaken with the assistance of resources from the National Computational Infrastructure (NCI), which is supported by the Australian Government.
    3. let us know of any publications which use these models or data so we can add them to our list.

Details of model outputs available

Notes:

  • You may find this partial list of diagnostics useful for decoding the MOM diagnostic names.
  • temp is conservative temperature, so surface_temp,  temp_surface_ave and bottom_temp are also conservative temperature, rather than the potential temperature specified in the OMIP protocol (Griffies et al., 2016) – see this discussion. If you need potential temperature, use pot_temp or surface_pot_temp.

⚠️ Errata:

Cycle 1 (66 TB): /g/data/cj50/access-om2/raw-output/access-om2-01/01deg_jra55v140_iaf

  • 1 Jan 1958 to 31 Dec 2018
    • MOM ocean data
      • Daily mean 2d bottom_temp, frazil_3d_int_z, mld, pme_river, sea_level, sfc_hflux_coupler, sfc_hflux_from_runoff, sfc_hflux_pme, surface_salt, surface_temp
      • Monthly mean 3d age_global, buoyfreq2_wt, diff_cbt_t, dzt, pot_rho_0, pot_rho_2, pot_temp, salt, temp_xflux_adv, temp_yflux_adv, temp, tx_trans, ty_trans_nrho_submeso, ty_trans_rho, ty_trans_submeso, ty_trans, u, v, vert_pv, wt
      • Monthly mean 2d bmf_u, bmf_v, ekman_we, eta_nonbouss, evap_heat, evap, fprec_melt_heat, fprec, frazil_3d_int_z, lprec, lw_heat, melt, mh_flux, mld, net_sfc_heating, pbot_t, pme_net, pme_river, river, runoff, sea_level_sq, sea_level, sens_heat, sfc_hflux_coupler, sfc_hflux_from_runoff, sfc_hflux_pme, sfc_salt_flux_coupler, sfc_salt_flux_ice, sfc_salt_flux_restore, surface_salt, surface_temp, swflx, tau_x, tau_y, temp_int_rhodz, temp_xflux_adv_int_z, temp_yflux_adv_int_z, tx_trans_int_z, wfiform, wfimelt
      • Monthly mean squared 3d u, v
      • Monthly max 2d mld
      • Monthly min 2d surface_temp
      • Daily snapshot scalar eta_global, ke_tot, pe_tot, rhoave, salt_global_ave, salt_surface_ave, temp_global_ave, temp_surface_ave, total_net_sfc_heating, total_ocean_evap_heat, total_ocean_evap, total_ocean_fprec_melt_heat, total_ocean_fprec, total_ocean_heat, total_ocean_hflux_coupler, total_ocean_hflux_evap, total_ocean_hflux_prec, total_ocean_lprec, total_ocean_lw_heat, total_ocean_melt, total_ocean_mh_flux, total_ocean_pme_river, total_ocean_river_heat, total_ocean_river, total_ocean_runoff_heat, total_ocean_runoff, total_ocean_salt, total_ocean_sens_heat, total_ocean_sfc_salt_flux_coupler, total_ocean_swflx_vis, total_ocean_swflx
    • CICE sea ice data
      • Daily mean 2d aice, congel, dvidtd, dvidtt, frazil, frzmlt, hi, hs, snoice, uvel, vvel
      • Monthly mean 2d aice, alvl, ardg, congel, daidtd, daidtt, divu, dvidtd, dvidtt, flatn_ai, fmeltt_ai, frazil, frzmlt, fsalt, fsalt_ai, hi, hs, iage, opening, shear, snoice, strairx, strairy, strength, tsfc, uvel, vvel
  • 1 Jan 1987 to 31 Dec 2018 only
    • MOM ocean data
      • monthly mean 3d bih_fric_u, bih_fric_v, u_dot_grad_vert_pv
      • daily mean 3d salt, temp, u, v, wt
    • CICE sea ice data
      • daily mean 2d aicen, vicen
  • 1 Jan 2012 to 31 Dec 2018 only
    • MOM ocean data
      • monthly snapshot 2d sea_level
      • monthly snapshot 3d salt, temp, u, v, vert_pv and vorticity_z

Cycle 2 (21 TB): /g/data/cj50/access-om2/raw-output/access-om2-01/01deg_jra55v140_iaf_cycle2

  • 1 Jan 1958 to 31 Dec 2018
    • MOM ocean data
      • Daily mean 2d bottom_temp, frazil_3d_int_z, mld, pme_river, sea_level, sfc_hflux_coupler, sfc_hflux_from_runoff, sfc_hflux_pme, surface_salt, surface_temp
      • Monthly mean 3d age_global, bih_fric_u, bih_fric_v, buoyfreq2_wt, diff_cbt_t, dzt, pot_rho_0, pot_rho_2, pot_temp, salt, temp_xflux_adv, temp_yflux_adv, temp, tx_trans, ty_trans_nrho_submeso, ty_trans_rho, ty_trans_submeso, ty_trans, u_dot_grad_vert_pv, u, v, vert_pv, wt
      • Monthly mean 2d bmf_u, bmf_v, ekman_we, eta_nonbouss, evap_heat, evap, fprec_melt_heat, fprec, frazil_3d_int_z, lprec, lw_heat, melt, mh_flux, mld, net_sfc_heating, pbot_t, pme_net, pme_river, river, runoff, sea_level_sq, sea_level, sens_heat, sfc_hflux_coupler, sfc_hflux_from_runoff, sfc_hflux_pme, sfc_salt_flux_coupler, sfc_salt_flux_ice, sfc_salt_flux_restore, surface_salt, surface_temp, swflx, tau_x, tau_y, temp_int_rhodz, temp_xflux_adv_int_z, temp_yflux_adv_int_z, tx_trans_int_z, wfiform, wfimelt
      • Monthly mean squared 3d u, v
      • Monthly max 2d mld
      • Monthly min 2d surface_temp
      • Daily snapshot scalar eta_global, ke_tot, pe_tot, rhoave, salt_global_ave, salt_surface_ave, temp_global_ave, temp_surface_ave, total_net_sfc_heating, total_ocean_evap_heat, total_ocean_evap, total_ocean_fprec_melt_heat, total_ocean_fprec, total_ocean_heat, total_ocean_hflux_coupler, total_ocean_hflux_evap, total_ocean_hflux_prec, total_ocean_lprec, total_ocean_lw_heat, total_ocean_melt, total_ocean_mh_flux, total_ocean_pme_river, total_ocean_river_heat, total_ocean_river, total_ocean_runoff_heat, total_ocean_runoff, total_ocean_salt, total_ocean_sens_heat, total_ocean_sfc_salt_flux_coupler, total_ocean_swflx_vis, total_ocean_swflx
    • CICE sea ice data
      • Daily mean 2d aice, congel, dvidtd, dvidtt, frazil, frzmlt, hi, hs, snoice, uvel, vvel
      • Monthly mean 2d aice, aicen, alvl, ardg, congel, daidtd, daidtt, divu, dvidtd, dvidtt, flatn_ai, fmeltt_ai, frazil, frzmlt, fsalt, fsalt_ai, hi, hs, iage, opening, shear, snoice, strairx, strairy, strength, tsfc, uvel, vvel, vicen
  • 1 April 1971 to 31 Dec 2018 only
    • CICE sea ice data
      • Daily mean 2d fcondtop_ai, fsurf_ai, meltb, melts, meltt, daidtd, daidtt
      • Monthly mean 2d fcondtop_ai, fsurf_ai, meltb, melts, meltt, fresh, dvirdgdt
  • 1 April 1989 to 31 Dec 2018 only
    • CICE sea ice data
      • Daily mean 2d aicen, vicen
  • 1 Oct 1989 to 31 Dec 2018 only
    • MOM ocean data
      • Daily max 2d surface_temp, bottom_temp, sea_level
      • Daily min 2d surface_temp
  • 1 April 1990 to 31 Dec 2018 only
    • MOM ocean data
      • Daily mean 2d usurf, vsurf
  • 1 January 2014 to 31 Dec 2018 only
    • MOM ocean data
      • Daily mean, min, max 2d surface_pot_temp
      • Monthly mean, min 2d surface_pot_temp
    • CICE sea ice data
      • Daily mean 2d sinz, tinz, divu
      • Monthly mean 2d sinz, tinz, strocnx, strocny

Cycle 3 (24 + 25 = 51 TB): mostly in /g/data/cj50/access-om2/raw-output/access-om2-01/01deg_jra55v140_iaf_cycle3 but with some (marked in italics) in /g/data/ik11/outputs/access-om2-01/01deg_jra55v140_iaf_cycle3

  • 1 Jan 1958 to 31 Dec 2018
    • MOM ocean data
      • Daily mean 3d salt, temp, uhrho_et, vhrho_nt (all but temp are at reduced precision and restricted to south of 60S)
      • Daily mean 2d bottom_temp, frazil_3d_int_z, mld, pme_river, sea_level, sfc_hflux_coupler, sfc_hflux_from_runoff, sfc_hflux_pme, surface_pot_temp, surface_salt, usurf, vsurf
      • Monthly mean 3d age_global, buoyfreq2_wt, diff_cbt_t, dzt, passive_adelie, passive_prydz, passive_ross, passive_weddell, pot_rho_0, pot_rho_2, pot_temp, salt_xflux_adv, salt_yflux_adv, salt, temp_xflux_adv, temp_yflux_adv, temp, tx_trans_rho, tx_trans, ty_trans_nrho_submeso, ty_trans_rho, ty_trans_submeso, ty_trans, u, v, vert_pv, wt
      • Monthly mean 2d bmf_u, bmf_v, ekman_we, eta_nonbouss, evap_heat, evap, fprec_melt_heat, fprec, frazil_3d_int_z, lprec, lw_heat, melt, mh_flux, mld, net_sfc_heating, pbot_t, pme_net, pme_river, river, runoff, sea_level_sq, sea_level, sens_heat, sfc_hflux_coupler, sfc_hflux_from_runoff, sfc_hflux_pme, sfc_salt_flux_coupler, sfc_salt_flux_ice, sfc_salt_flux_restore, surface_pot_temp, surface_salt, swflx, tau_x, tau_y, temp_int_rhodz, temp_xflux_adv_int_z, temp_yflux_adv_int_z, tx_trans_int_z, ty_trans_int_z, wfiform, wfimelt
      • Monthly mean squared 3d u, v
      • Daily max 2d bottom_temp, sea_level, surface_pot_temp
      • Daily min 2d surface_pot_temp
      • Monthly max 2d mld
      • Monthly min 2d surface_pot_temp
      • Daily snapshot scalar eta_global, ke_tot, pe_tot, rhoave, salt_global_ave, salt_surface_ave, temp_global_ave, temp_surface_ave, total_net_sfc_heating, total_ocean_evap_heat, total_ocean_evap, total_ocean_fprec_melt_heat, total_ocean_fprec, total_ocean_heat, total_ocean_hflux_coupler, total_ocean_hflux_evap, total_ocean_hflux_prec, total_ocean_lprec, total_ocean_lw_heat, total_ocean_melt, total_ocean_mh_flux, total_ocean_pme_river, total_ocean_river_heat, total_ocean_river, total_ocean_runoff_heat, total_ocean_runoff, total_ocean_salt, total_ocean_sens_heat, total_ocean_sfc_salt_flux_coupler, total_ocean_swflx_vis, total_ocean_swflx
    • CICE sea ice data
      • Daily mean 2d aice, congel, daidtd, daidtt, divu, dvidtd, dvidtt, fcondtop_ai, frazil, frzmlt, fsurf_ai, hi, hs, meltb, melts, meltt, sinz, snoice, tinz, uvel, vvel
      • Monthly mean 2d aice, aicen, alvl, ardg, congel, daidtd, daidtt, divu, dvidtd, dvidtt, dvirdgdt, fcondtop_ai, flatn_ai, fmeltt_ai, frazil, fresh, frzmlt, fsalt, fsalt_ai, fsurf_ai, hi, hs, iage, meltb, melts, meltt, opening, shear, sinz, snoice, strairx, strairy, strength, strocnx, strocny, tinz, tsfc, uvel, vvel, vicen
  • 1 Jan 1959 to 31 Mar 1963 only
    • MOM ocean data
      • Daily mean 3d passive_adelie, passive_prydz, passive_ross, passive_weddell
  • 1 Jan 2005 to 31 Dec 2018 only
    • MOM ocean data
      • Monthly mean 3d salt_xflux_adv, salt_yflux_adv
  • 1 July 2009 to 31 Dec 2018 only
    • MOM ocean data
      • Monthly mean 3d tx_trans_rho

Cycle 4 (38 TB): /g/data/cj50/access-om2/raw-output/access-om2-01/01deg_jra55v140_iaf_cycle4

Includes coupled ocean and sea ice BGC. Note: 2d and 3d ocean BGC data has only 2 – 4 decimal digits of precision.

  • 1 Jan 1958 to 31 Dec 2018
    • MOM ocean physical data
      • Daily mean 2d bottom_temp, frazil_3d_int_z, mld, pme_river, sea_level, sfc_hflux_coupler, sfc_hflux_from_runoff, sfc_hflux_pme, surface_pot_temp, surface_salt, usurf, vsurf
      • Monthly mean 3d age_global, buoyfreq2_wt, diff_cbt_t, dzt, pot_rho_0, pot_rho_2, pot_temp, salt_xflux_adv, salt_yflux_adv, salt, temp_xflux_adv, temp_yflux_adv, temp, tx_trans_rho, tx_trans, ty_trans_nrho_submeso, ty_trans_rho, ty_trans_submeso, ty_trans, u, v, vert_pv, wt
      • Monthly mean 2d bmf_u, bmf_v, ekman_we, eta_nonbouss, evap_heat, evap, fprec_melt_heat, fprec, frazil_3d_int_z, lprec, lw_heat, melt, mh_flux, mld, net_sfc_heating, pbot_t, pme_net, pme_river, river, runoff, sea_level_sq, sea_level, sens_heat, sfc_hflux_coupler, sfc_hflux_from_runoff, sfc_hflux_pme, sfc_salt_flux_coupler, sfc_salt_flux_ice, sfc_salt_flux_restore, surface_pot_temp, surface_salt, swflx, tau_x, tau_y, temp_int_rhodz, temp_xflux_adv_int_z, temp_yflux_adv_int_z, tx_trans_int_z, ty_trans_int_z, wfiform, wfimelt
      • Monthly mean squared 3d u, v
      • Monthly max 2d mld
      • Monthly min 2d surface_pot_temp
      • Daily max 2d bottom_temp, sea_level, surface_pot_temp
      • Daily min 2d surface_pot_temp
      • Daily snapshot scalar eta_global, ke_tot, pe_tot, rhoave, salt_global_ave, salt_surface_ave, temp_global_ave, temp_surface_ave, total_net_sfc_heating, total_ocean_evap_heat, total_ocean_evap, total_ocean_fprec_melt_heat, total_ocean_fprec, total_ocean_heat, total_ocean_hflux_coupler, total_ocean_hflux_evap, total_ocean_hflux_prec, total_ocean_lprec, total_ocean_lw_heat, total_ocean_melt, total_ocean_mh_flux, total_ocean_pme_river, total_ocean_river_heat, total_ocean_river, total_ocean_runoff_heat, total_ocean_runoff, total_ocean_salt, total_ocean_sens_heat, total_ocean_sfc_salt_flux_coupler, total_ocean_swflx_vis, total_ocean_swflx
    • WOMBAT ocean BGC data
      • Monthly mean 3d adic, alk, caco3, det, dic, fe, no3, o2, phy, zoo
      • Monthly mean 2d npp2d, pprod_gross_2d, stf03, stf07, stf09, wdet100
      • Daily snapshot scalar total_aco2_flux, total_co2_flux
    • CICE sea ice data
      • Daily mean 2d aice, congel, daidtd, daidtt, divu, dvidtd, dvidtt, fcondtop_ai, frazil, frzmlt, fsurf_ai, fswthru_ai, hi, hs, meltb, melts, meltt, snoice, uvel, vvel
      • Monthly mean 2d aice, aicen, alidf_ai, alidr_ai, alvdf_ai, alvdr_ai, alvl, ardg, bgc_n_sk, bgc_nit_ml, bgc_nit_sk, congel, daidtd, daidtt, divu, dvidtd, dvidtt, dvirdgdt, fcondtop_ai, flatn_ai, fmeltt_ai, fn_ai, fno_ai, frazil, fresh, frzmlt, fsalt, fsalt_ai, fsurf_ai, fswthru_ai, fswup, hi, hs, iage, meltb, melts, meltt, opening, ppnet, shear, snoice, strairx, strairy, strength, strocnx, strocny, tsfc, uvel, vicen, vvel
  • 1 Jan 1958 to 31 Oct 1959 and 1 Jan 2014 to 31 Dec 2016 only
    • CICE sea ice data
      • Daily mean 2d sinz, tinz
      • Monthly mean 2d sinz, tinz
  • 1 April 1975 to 31 Dec 2018 only
    • CICE sea ice data
      • Monthly mean 2d meltl
  • 1 January 1979 to 31 Dec 2018 only
    • WOMBAT ocean BGC data
      • Daily mean 3d, sampled every 5 days (but a possible jump 1 Jan 2016) adic, dic, fe, no3, o2, phy
      • Daily mean 2d adic_int100, adic_intmld, det_int100, det_intmld, dic_int100, dic_intmld, fe_int100, fe_intmld, no3_int100, no3_intmld, npp_int100, npp_intmld, npp1, npp2d, o2_int100, o2_intmld, paco2, pco2, phy_int100, phy_intmld, pprod_gross_2d, pprod_gross_int100, pprod_gross_intmld, radbio_int100, radbio_intmld, radbio1, stf03, stf07, stf09, surface_adic, surface_alk, surface_caco3, surface_det, surface_dic, surface_fe, surface_no3, surface_o2, surface_phy, surface_zoo, wdet100
      • Monthly mean 3d adic_xflux_adv, adic_yflux_adv, adic_zflux_adv, caco3_xflux_adv, caco3_yflux_adv, caco3_zflux_adv, det_xflux_adv, det_yflux_adv, det_zflux_adv, dic_xflux_adv, dic_yflux_adv, dic_zflux_adv, fe_xflux_adv, fe_yflux_adv, fe_zflux_adv, no3_xflux_adv, no3_yflux_adv, no3_zflux_adv, npp3d, o2_xflux_adv, o2_yflux_adv, o2_zflux_adv, pprod_gross, radbio3d, src01, src03, src05, src06, src07, src09, src10
  • 1 January 1987 to 31 Dec 2018 only
    • CICE sea ice data
      • Daily mean 2d albsni, fhocn_ai, fswabs_ai, dardg2dt, bgc_n_sk, bgc_nit_sk, ppnet
      • Monthly mean 2d albsni, fhocn_ai, fswabs_ai, dardg2dt
  • 1 Jan 2014 to 31 Dec 2016 only
    • MOM ocean physical data
      • 6-hourly mean 2d mld, surface_pot_temp, surface_salt
    • WOMBAT ocean BGC data
      • 6-hourly mean 2d radbio1, surface_fe, surface_no3, surface_o2, surface_phy
    • CICE sea ice data
      • 6-hourly mean 2d aice
      • Daily mean 2d alidf_ai, alidr_ai, alvdf_ai, alvdr_ai, fswup
  • 1 Jan 2016 to 31 Dec 2016 only
    • CICE sea ice data
      • 3-hourly mean 2d divu, shear, uvel, vvel
      • Daily mean 2d aicen, vicen

Cycle 4 2019-2023 extension, using JRA55-do v1.5.0 for 2019, and JRA55-do v1.5.0.1 to the end of 2023 (6.6 TB): /g/data/ik11/outputs/access-om2-01/01deg_jra55v140_iaf_cycle4_jra55v150_extension. Outputs are the same as the end of cycle 4.

COSIMA Model Output Collection

An increasingly important aspect of model simulations is to be able to share our data. Over the last few years we have been working on methods to routinely publish our most important simulations. This publication process is designed to allow any users, worldwide, to be able to pick up our model output and test hypotheses against our results. It will also allow journal publications to be able to cite our model output.

Currently we have  5 different datasets within the headline COSIMA Model Output Collection, which can be found here:

 http://dx.doi.org/10.4225/41/5a2dc8543105a

For users with NCI access this data is housed under the cj50 project.

We are planning to add new datasets in the coming months.

Technical Working Group Meeting, March 2020

Minutes

Date: 18th March, 2020
Attendees:
  • Aidan Heerdegen (AH) CLEX ANU
  • Matt Chamberlain (MC) CSIRO Hobart
  • Rui Yang (RY), Paul Leopardi (PL) NCI
  • Nic Hannah (NH) Double Precision
  • Marshall Ward (MW) GFDL

Scalability of ACCESS-OM2 on gadi

(Paul’s report is attached at the end)

PL: Looking at scaling. Started with ACCESS-OM2, but went to testing MOM5 directly with MOM5-SIS. Using POM25, a global 0.25° model with NYF forcing. This is the model MW developed for testing scaling prior to ACCESS-OM2. Had to specify min_thickness in ocean_topog_nml.

PL: Tested the scaling at 960/1920/3840/7680/15360 cores, with no masking. Scales well up to some point between 7680 and 15360.

PL: Tested the effect of vectorising options (AVX2/AVX512/AVX512-REPRO). Found no difference in runtime with 15360 cores. MW: Probably communication bound at that CPU count. Repro did not change the time.

MW: Never seen significant speed-up from vectorisation. Typically only a few percent improvement. Code is RAM bound, so cannot provide enough data to make use of vectorisation. Still worth working toward a point where we can take advantage of vectorisation.

PL: Had one “slow” run outlier out of 20 runs. Ran 20% slower. Ran on different nodes to the other jobs, not sure if that is significant. MW: IO can cause that. AH: Andy Hogg also had some slow jobs due to a bad node. AK: A job was 20x slower. Also RYF runs became consistently slower a few weeks ago. MW: OpenMPI can prepend timestamps in front of output, which can help to identify issues.

PL: Getting some segfaults in ompi_request_wait_completion, caused by pmpi_wait and pmpi_bcast. Both called from the coupler. NH: Could be a bad bit of memory in the buffer, and if it tries to copy it can segfault. PL: Thinking to run again using valgrind, but that would require compiling my own version of the valgrind wrapper for OpenMPI 4.0.2. Would be easier with Intel MPI, but no-one else has used this. Saw some similar cases when searching which were associated with UCX, but sufficiently different to not be sure. These issues are with the highest core count. MW: Often see a lot of problems at high core counts. NH: Finding bugs can be a never-ending task. Use time wisely to fix bugs that affect people. MW: Quarter degree at 15K cores would have very small tile sizes. Could be the source of the issue. AH: This is not a configuration that we would use, so it is not worth spending time chasing bugs.

PL: Next testing target is 0.1 degree, but not sure which configuration and forcing data to use. Will not use MOM5-SIS, but will use ACCESS-OM2 for direct comparison purposes. AK: Configurations used in the model description paper have not been ported to gadi. Moving on to a new iteration. Andy Hogg is running a configuration that is quite similar, but moving to new configurations with updated software and forcing. Those are not quite ready.

PL: Need a starting configuration for testing. Want to confine this to scalability testing and compiler flags. NH: ACCESS-OM2 is set up to be well balanced for particular configurations. Can’t just double CPUs on all models, as load imbalance between submodels will dominate any other performance changes. Makes it a problematic config for clean comparisons of things like compiler flags. MW: A useful approach was to check scalability of sub-model components independently. Required careful definition of timers to strategically ignore coupling time. MOM was easy, CICE was more difficult, but work with Nic’s timers helped a lot. Try to time the bits of code that are doing computation and separate them from code that waits on other parts. A coupled model is a real challenge to test. Figure out what timers we used and trust those. Can reverse engineer from my old scripts.

PL: Should do MOM-SIS scalability work? MW: Easier task, and some lessons can be learned, but runtime will not match between MOM-SIS and ACCESS-OM2. Would be more of a practice run. PL: Maybe getting out of scope. Would need 0.1 MOM-SIS config. RY: Yes we have that one. If PL wanted to run ACCESS-OM2-01 is there a configuration available? AK: Andy Hogg’s currently running configuration would work. PL: Next quarter need to free up time to do other things.

MW: Might be valuable to get some score-p or similar numbers on the current production model. Useful to have a record of those timings to share. A scaling test might be too much, but a profile/timing test is more tractable. RY: Any issues with score-p? Overhead? MW: Typical, 10-20%, so it skews the numbers but you get an in-depth view. Can do it one sub-model at a time. Had to hack a lot of scripts, and get NH to rewrite some code to get it to work. score-p is always done at compile time. Doesn’t affect payu. Try building MOM-SIS with score-p, then try MOM within ACCESS-OM2. Then move on to CICE and maybe libaccessom2. PL: Build script does include some score-p hooks. MW: Even without score-p MOM has very good internal timers, but they don’t give per-rank times. score-p is great for measuring load imbalance. AH: payu has a repeat option, which repeats the same time period; this removes variability due to forcing. Need to think about what time you want to repeat as far as season goes. AK: CICE has idealised initial ice, evolves rapidly. MW: My earlier profile runs had no ice, which affects performance. MW: Not sure it is huge, maybe 10-20%, but not huge.

MW: Overall surprised at lack of any speed up with vectorisation, and lack of slow-down with repro. PL: Will verify those numbers with 960 core config.

AH: Surprised how well it scaled. Did it scale that well on raijin? MW: The performance scaling elbow did show up lower. AH: 3x more processors per node has an effect? MW: Yes, big part of it. AH: 0.1 scaled well on raijin, so should scale better on gadi. 1/30th should scale well. Only bottleneck will be if the library can handle that many ranks.

NH: If repro flags don’t change performance that is interesting. We seem to regularly have a “what trade-off do the repro flags have?” discussion; it would be good to avoid that. MW: Probably best to have an automated pipeline calculating these numbers. NH: People have an issue with the fp0 flag. MW: Shouldn’t affect performance. NH: Make sure fp0 is in there. MW: Agree 100%.

ACCESS-OM2 update

AH: Do we have a gadi compatible master branch on gadi? AK: No, not currently. NH: At a previous TWG meeting I self-assigned getting master gadi compatible. Merged all gadi-transition branches and tested, seemed to be working ok. Subsequent meeting AK said there were other changes required, so stopped at that point. gadi-transition branches still exist, but much has already been merged and tested on a couple of configurations. Have since moved to working on other things.

NH: Close if AK has all the things he wants into gadi-transition branch. Previous merge didn’t include all the things AK wanted in there. Happy to spend more time on that after finishing JRA55 v1.4 stuff.

JRA55-do v1.4 update

NH: Made code changes in all the models, but have not checked existing experiments are unchanged with modified code.

NH: v1.4 has a new coupling field, ice calving. Passing this through to CICE as a separate field. In CICE split into two fields, liquid water flux and a heat flux. MOM in ACCESS-CM2 already handles both these fields. Just had to change preprocessor flags to make it work for ACCESS-OM2 as well.

NH: Lots of options. Possible to join liquid and solid ice at atmosphere and becomes the same as we have now. Can join in CICE and have a water flux but not a heat flux.

Strange MOM6 error

AH: A quick update on Navid’s error. Made a little mpi4py script to run before payu to check the status of the nodes, and all but the root node had a stale version of the work directory, like it hadn’t been archived. The link to the executable was gone, but everything else was there. Reported to NCI; Ben Menadue does not know why this is happening. Also tried a delay option between runs and this helped somewhat, but also had some strange comms errors trying to connect to the exec nodes. Will next try turning off all the input/output I can find in case it is a file lock error. Have been told Lustre cannot be in this state.
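The node-checking script itself isn’t reproduced here, but a minimal mpi4py sketch of the same idea (each rank reports whether it sees an expected file in the work directory) might look like the following; the path is a hypothetical placeholder.

```python
# Run as e.g.: mpirun -n <ncpus> python check_work_dir.py
import os
import socket
from mpi4py import MPI

comm = MPI.COMM_WORLD
path = "work/ocean/input.nml"   # hypothetical file every rank should be able to see

# Each rank checks its own view of the (shared) filesystem.
ok = os.path.exists(path)
results = comm.gather((comm.Get_rank(), socket.gethostname(), ok), root=0)

if comm.Get_rank() == 0:
    for rank, host, exists in results:
        if not exists:
            print(f"rank {rank} on {host}: MISSING {path}")
```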

MW: In the old driver we do a lot of moving directories from work to archive, and then relabelling. Is it still moving directories around to archive them? Maybe replace with a hard copy of the directory to archive. The MOM6 driver is the MOM5 driver, so maybe all the old drivers are doing this. Definitely worth understanding, but a quick fix is to copy rather than move.

NH: Filesystem and symbolic links might be an issue. MW: Maybe symbolic links are an issue on these mounted filesystems. AH: There was a suggestion it might be because it was running on home, which is NFS mounted, but that wasn’t the problem. MW: Often with raijin you just got the same nodes back when you resubmit, so maybe some sort of smart caching.

 

Scalability of ACCESS-OM2 on Gadi – Paul Leopardi 18 March 2020

 

 

Technical Working Group Meeting, February 2020

Minutes

Date: 27th February, 2020
Attendees:
  • Aidan Heerdegen (AH) CLEX ANU, Angus Gibson (AG) ANU
  • Russ Fiedler (RF), Matt Chamberlain (MC) CSIRO Hobart
  • Rui Yang (RY), Paul Leopardi (PL) NCI
  • Nic Hannah (NH) Double Precision
  • Marshall Ward (MW) GFDL

New payu version installed

Version 1.0.7 is now installed in conda/analysis3-20.01 (analysis3-unstable)

AH: payu is now 100% gadi compatible. Defaults are now 48 CPUs/node and 192 GB memory/node. The Python interpreter, short path and storage flags are automatically determined from the model config and manifests. Using qsub_flags to manually specify storage flags no longer works, as the automatically determined storage flag option is appended and the manually specified one is ignored.

RF: Paul Sandery having issues getting the 0.1 deg model working. [AH: turns out it was a typo in config.yaml]

AH: No need for the number of CPUs in a payu job to be divisible by the number of CPUs in a node. Request however many the job uses, and payu will pad the request to make sure the PBS submission is requesting an integer number of nodes if ncpus is greater than the number in a single node. PL: Rounds up for each model? AH: No, just the total. MW: Will spread models across nodes, so a node can have different models on it.
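For concreteness, the padding described above amounts to roughly the following arithmetic (a sketch of the behaviour, not payu’s actual code; the 48-core node size is gadi’s, as noted earlier).

```python
import math

def padded_request(ncpus, cores_per_node=48):
    """Round a multi-node CPU request up to whole nodes, as described above."""
    if ncpus <= cores_per_node:
        return ncpus                      # sub-node jobs are requested as-is
    return math.ceil(ncpus / cores_per_node) * cores_per_node

print(padded_request(5968))   # a hypothetical 5968-core job is submitted as 6000 cores (125 nodes)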

AH: Andy Hogg has run 80-odd submits with the tenth model. Occasional hang; resubmitting is ok. Might be more stable than raijin.

AH: Navid has MOM6 model that cannot run more than a couple of submits without it crashing with an error that it cannot find the executable. Weird error, let me know if you see anything similar.

NH: Caution with disks and where to put things. Reading input files can sometimes be very slow, or not, and sometimes files are not there and turn up later. If the executable is missing, is it running off a disk that is not good? MW: Filesystems are very complicated on gadi? NH: Less certainty of performance with such a different system, with data file systems being mounted separately. I’d look at this.
PD: Good place to look if disk has got caught up doing too many tasks. gdata just hangs, saving text file takes a while. Due to being on login node? Get similar delays with interactive job on execute node.
AH: People reporting issues with login delays. Probably a disk issue? Navid’s job is not being run from gdata, but from scratch. Inclined to blame new system of mounting. Could we use jobfs. MW: Like in the old days when we ran on the node? Good luck! AH: Could just do some tests. NH: Concerning if scratch is slow.
AH: Not sure if filesystems are mounted with NFS. MW: That is what we do on gaia, and have tons of problems with mount on demand. Biggest frustration with using GFDL machine. It’s a nightmare. At least NCI have lustre know-how. AH: Used to have a lot of problems with NFS cache errors in the past, files disappearing and reappearing. Does sound similar to Navid’s problem.
MW: Raijin’s filesystem was quite good. Why the change? AH: Security. Commercial-in-confidence stuff. I think it is overblown. Can’t see anyone else’s jobs on the queue. Can’t even check if other people are running on the project. Are moving to 2-factor auth also.

What is required to get gadi transition into master for ACCESS-OM2

AH: Andrew Kiss is on personal leave but sent around an email:
re. gadi-transition, we could proceed like so:
– we’ve also been transitioning libaccessom2 to use submodules for its dependencies instead of cmake https://github.com/COSIMA/libaccessom2/issues/29 which would require this commit https://github.com/COSIMA/libaccessom2/tree/53a86efcd01672c655c93f2d68e9f187668159de (not currently in gadi-transition branch)
– get the libaccessom2 tests working https://github.com/COSIMA/libaccessom2/issues/36
– there’s a gadi-transition branch libaccessom2, cice and mom that could be merged into master. They use openMPI4.0.2
– there’s also a gadi-transition branch for all the primary (ie JRA, non-minimal) configurations but the exe paths would need to be updated before merging to master
– the access-om2 gadi-transition branch would then need to be updated to use the correct submodules for model components and configurations. We also want to remove the core and minimal config submodules https://github.com/COSIMA/access-om2/issues/183
also fyi the current gadi build instructions are here
AH: Feels urgent that people can use on gadi. Any comments on Andrew’s email?
PL: Transition to submodules finished? AH: That is on a separate branch. NH: I did that work. Put it in a dev branch. Not intended to be part of the gadi transition, to keep the number of additions to a minimum. AH: Agree if that is the easiest. Master is broken for gadi, so anything that works is an improvement. If there is no feedback we can do this offline. Could make a project to be explicit about what is required. NH: Given that gadi-transition does work, and Andrew and Andy use it, it wouldn’t hurt to put it in now. The work PL has done to make sure it reproduces ticks that box. So it is ready to go, and we are able to reproduce if we need to. I’ll merge it and do some interactive testing. Then people can use it and I can do automatic testing.
PL: What branch will it be merged into? A lot of branches in a lot of repos.
NH: Isolate gadi-transition branches and merge into master straight away. Not bother with other development branches at this stage. Want to get something in master that people can use. In future bring everything into dev as discussed, with master staying stable, just bug fixes, until decide to update from dev. I’ll go through the branches and just bring in the gadi transition stuff. PL: So dev will have submodule changes and master will not? NH: For the time being. With previous discussion we’ll be slower moving on master, to make sure it is working. Having dev will allow us to move that more rapidly. People can run off dev at their own risk. AH: Submodules will remain a named feature branch and pulled into dev at some future time. Should discourage having personal development branches on the main repo. If you want to experiment do it on your own fork. Branches on the main repo should be master, dev or named feature to keep it clean and everyone can understand what they mean.

Stack array errors and heap-array option

AH: Apologies minutes from last TWG meeting are not on the COSIMA website. There is an IT issue with the server. We wanted to follow up with stack array errors.
AH: Did we ever test on raijin with the same compiler? Is there any way we can do a comparative test? Use a raijin image? Any more from Dale about this stack stuff? PL: Haven’t heard anything. AH: At the last meeting there was some mention of a limit on UM stacksize. RY: Already fixed Ilia’s issue. Fixed by making stacksize unlimited. RF: Always run with unlimited stack size. When I had the problem it was only fixed by setting heap-arrays to small or zero. When I went into the code and changed the array allocation from automatic to allocatable the error went away.
MW: If I have an automatic array I get three different heap allocations for three different compilers. RF: This option forces all arrays on to the heap.
AH: This was fixed a while ago, Rui? RY: Not clear this is the same problem. Ilia’s issue was at the end of 2019 when gadi first came online. Not sure it is the same issue.

BGC Update

AH: Russ forwarded an update to Andy Hogg.
RF: Work was completed on raijin in 2019. BGC code is in to MOM and CICE. Required changes in CICE: moving arrays around to different modules due to scope issues, which allows optional fields to be sent. The main one is to send 10m winds to the ocean, not just the wind stress. Holding off on issuing a PR until the gadi transition is done so it can go in cleanly.
NH: Will be useful for JRA1.4 work.
RF: Hakase will be using it for BGC. Passing algae between ice and ocean components. To add a new field, you need to add the field to the code, but it doesn’t have to be passed. It is just picked up from namcouple, using the flags in OASIS to see if it’s registered.
AH: Can this be the next cab off the rank after gadi-transition, before AK’s science tweaks? Not relying on any changes in Andrew’s branches? RF: Would like to get gadi transition out of the way and then test these changes. Not tested on gadi yet.
How to proceed? Testing?
I’ve held off issuing a pull request until the dust settles wrt the gadi transition. There’s a bit of code rearrangement in order to allow optional fields (10m wind speed but this can be extended) to be passed from CICE.
The flags ACCESS-OM-BGC (tested) and ACCESS-ESM (untested) enable compilation of the BGC code. The 10m winds need to be added to the namcouple files and the MOM coupling fields namelist.

JRA55-do counter-rotating cyclones

RF: Fortunately Paul Sandery’s run started in 1988; the last reverse cyclone is in 1987. CAFE60 uses a whole-month window, so it is washed out in the average.
One of the RYF runs has a reverse cyclone (83-84). Tell Kial.

Scaling

PL: Thanks to Marshall for getting me up to speed on scaling tests and sharing scripts. Can reproduce diagrams so can compare between raijin and gadi.
AH: Any more performance numbers? PL: Now in a position to answer questions, just need to know what questions to ask.
AH: ACCESS-OM2-01 is currently running around 5K cores; would love to be able to scale to 10K, and 20K would be even better. MW: MOM scaled to 50K. AH: CICE doesn’t scale as well. MW: Any work on CICE distributions? RF: Nope. Would need to be done again at higher core counts. MW: Current one working really well. AH: On NH’s to-do list was to experiment with layouts and load balancing. MW: Alistair is very interested in load balancing sea ice models, particularly icebergs. Has some quasi-Lagrangian code in SIS2 to load balance icebergs. Maybe some ideas will translate or vice versa.
PL: For the moment will just look at MOM and see how it scales at 0.1? AH: Maybe just try doubling everything and see if it scales ok? MW: Used to make those processor heat maps to get the load imbalance of CICE. Would be good to keep an eye on that while working with scaling. Tony Craig (CICE developer) is very interested.

Atmosphere/coupled models

PD: Still using code frozen for CMIP runs. Extending number of runs in ensemble.
AH: People in CLEX are keen to run CM2. PD: Not aware, maybe through someone else, maybe Simon or Martin? CM2 and ESM-1.5 runs have been published under s38 project.
AH: Scott Wales is doing an ultra-high-resolution atmosphere run over Australia under the STRESS2020 project. PD: Atmosphere only, do you know what resolution? I've also done some high-res atmosphere-only runs, on a project to improve the turbulent kinetic energy spectrum in the UM. Working on code to put stochastic backscatter into the low-res N96 (CMIP6) atmosphere. Got some good results injecting turbulent kinetic energy into small scales to counter the artificial dissipation associated with the semi-Lagrangian timestep in the UM. The test is to see how the improved N96 results compare to N512 runs using STRESS2020 resources. Working with Jorgen Fredrikson. Should talk to Scott.
AH: At the moment Scott is targeting 400m over Australia. PL: Convection resolving? AH: Planning a 2-day run to simulate Cyclone Debbie: a nested 400m run for Australia, inside BARRA at 2.2km, on a 10500×13000 grid. PD: We're going global. MW: How many levels? Same as global? PD: 85. AH: The major problem is running out of memory. MW: More cores should mean less memory. Maybe their Helmholtz solver imposes some memory limit on the ranks. AH: Currently waiting for large memory nodes to come online.

New FMS

MW: A new FMS version is coming. Targeting autotools and getting rid of mkmf. If you're on MOM5 you can use your frozen version. The IO in FMS has been completely rewritten; it is now a thin wrapper to netCDF. No more magic functions like save_restart, write_restart; they have been replaced by lower-level operations to allow model developers to have more control. Not sure of the significance for MOM5. AH: API compatible? MW: They will keep it compatible with the old API as long as they can. Could drop it in and slowly integrate. Only raising it in case you want to do more innovative stuff with IO. PL: Affects MOM6 mainly? MW: MOM6 is one of the main targets. PL: Parallel IO support? MW: Part of the reason. They want parallel IO in the atmosphere model, which NCAR now uses, so it is now an important model; this implements the hooks for that work. RY: Is MPI-IO still there, or will it be replaced by PIO? MW: It is. RY: Simpler to do one? MW: They've sent a patch to get MOM6 working with that now; it doesn't work currently. Not sure about the progress, but I know you were interested in PIO. RF: We're interested from the ice point of view. A new version of BRAN will need daily inputs in CICE, and performance is terrible as IO is collected onto one processor. MW: FMS will not help CICE, but it's a test case for whether PIO is a valid solution.

COSIMA 2019 Report

This report summarises the fourth meeting of the Consortium for Ocean Sea Ice Modelling in Australia (COSIMA), held in Canberra on 3-4 September 2019. Shweta Sharma has provided a more informal (and entertaining) report here.

Aims & Goals

The annual COSIMA workshop aims to:

  • Maintain and grow the established community around ocean-sea ice modelling in Australia;
  • Discuss recent scientific advances in ocean and sea ice research in a forum that is inclusive and model-agnostic, particularly including observational programs;
  • Agree on immediate next steps in the COSIMA model development plan; and
  • Develop a long-term vision for ocean-sea ice model development to support Australian researchers.

Participants


Attendees included Alberto Alberello (U Adelaide), Christopher Bladwell (UNSW), Fabio Boeira Dias (UTAS/CSIRO), Gary Brassington (BOM), Matt Chamberlain (CSIRO), Navid Constantinou (ANU), Prasanth Divakaran (BOM), Kelsey Druken (NCI), Matthew England (UNSW), Ben Evans (NCI), Hakase Hayashida (IMAS, UTAS), Petra Heil (AAD & AAPP), Andy Hogg (ANU), Ryan Holmes (UNSW), Maurice Huguenin (UNSW), Yi Jin (CSIRO), Andrew Kiss (ANU), Andreas Klocker (UTAS), Qian Li (UNSW), Kewei Lyu (CSIRO), Simon Marsland (CSIRO), Josue Martinez Moreno (ANU), Richard Matear (CSIRO), Ruth Moorman (ANU), Adele Morrison (ANU), Eric Mortenson (CSIRO), Jemima Rama (ANU), Paul Sandery (CSIRO), Abhishek Savita (UTAS/IMAS/CSIRO), Callum Shakespeare (ANU), Shweta Sharma (UNSW), Callum Shaw (ANU), Taimoor Sohail (ANU), Paul Spence (UNSW), Kial Stewart (ANU), Veronica Tamsitt (UNSW), Mirko Velic (BOM), Nick Velzeboer (ANU), Jingbo Wang (NCI), Xuebin Zhang (CSIRO), Xihan Zhang (ANU), Aihong Zhong (BOM), plus those who attended via video conference.

Program

Tuesday 3rd September

Session 1 (Chair – Navid Constantinou)

Andrew Kiss (ANU): ACCESS-OM2 update
Simon Marsland (CSIRO): ACCESS and CMIP6
Hakase Hayashida (IMAS, UTAS): Preliminary results of biogeochemistry simulation with ACCESS-OM2 and plans for OMIP-BGC and IAMIP
Ben Evans (NCI): Addressing the next HPC challenges for Climate and Weather

Session 2 (Chair – Andreas Klocker)

Veronica Tamsitt (UNSW): Lagrangian pathways and residence time of warm Circumpolar Deep Water on the Antarctic continental shelf
Ruth Moorman (ANU): Response of Antarctic ocean circulation to increased glacial meltwater
Kewei Lyu (CSIRO): Southern Ocean heat uptake and redistribution in theoretical framework and model perturbation experiments
Fabio Boeira Dias (UTAS/CSIRO): High-latitude Southern Ocean response to changes in surface momentum, heat and freshwater fluxes under 2xCO2 concentration

Session 3 (Chair – Simon Marsland)

Xuebin Zhang (CSIRO): Dynamical downscaling of climate changes with OFAM3
Matt Chamberlain (CSIRO): Multiscale data assimilation in Bluelink Reanalysis
Paul Sandery (CSIRO): A data assimilation framework for ocean-sea-ice prediction
Prasanth Divakaran (Bureau of Meteorology): OceanMAPS 3.3 Developments

Wednesday 4th September

Session 4 (Chair – Veronica Tamsitt)

Ryan Holmes (UNSW): Atlantic ocean heat transport enabled by Indo-Pacific heat uptake and mixing
Eric Mortenson (CSIRO): Decoupling of carbon and heat uptake rates of the global ocean over the 21st century
Christopher Bladwell (UNSW): Diahaline transport in global ocean models
Abhishek Savita (UTAS, IMAS, CSIRO): Uncertainty in the estimation of global and regional ocean heat content since 1970
Gary Brassington (BOM): Comparison of ACCESS-OM2-01 to other models and observations

Session 5 (Chair – Qian Li)

Xihan Zhang (ANU): Gulf Stream separation in ACCESS-OM2
Alberto Alberello (U Adelaide): Impacts of winter cyclones on sea ice dynamics
Petra Heil (AAD): Sea ice in the ACCESS-OM2-01: Exploring near-coastal processes

COSIMA Discussion (Chair – Paul Spence)

Open discussion highlighted a number of potential avenues for work in the near-term, as well as some suggestions for directions that could be included in a future COSIMA funding bid.

Near-term Priorities

  • Running the COSIMA cookbook on the VDI is becoming untenable, and recent improvements in the cookbook have not been widely adopted. This should be a priority, possibly with a tutorial session at the CLEx Annual Workshop?
  • Start investigating coupled data assimilation for parameter estimation, especially for sea ice.
  • Start serious perturbation experiments with ACCESS-OM2-01, potentially including:
    • SAMx (RYF forcing with SAM Extreme Years).
    • Adding katabatic winds?
    • Tropical mixing and the AMOC.
    • Influence of the Amundsen Sea Low on the Southern Ocean.
    • Turbulent Kinetic Energy and winds in the Southern Ocean.
  • OMIP2 contribution for CMIP6.
  • Improve communication of COSIMA achievements.
  • Begin work on nesting regional MOM6 models.

Longer term suggestions

  • Better connection with Paleo community.
  • Improve links with the wave community.
  • Start running ensemble simulations?
  • Do we need to move to CICE6?
  • Capacity to run future scenarios based on coupled model output.

It was agreed that COSIMA V will be held in 2020, hosted by Xuebin Zhang in Hobart.

Additional discussion points are given here.

Awards

The COSIMA Most Selfless Contributor Awards for 2017, 2018 and 2019 were presented in absentia to

  • 2017 James Munroe
  • 2018 Marshall Ward
  • 2019 Russ Fiedler (pictured)

in appreciation of their tireless efforts which have greatly improved the software used by the COSIMA community.