Technical Working Group Meeting, December 2018

Minutes

Date: 11th December 2018
Attendees:

  • Marshall Ward (MW) (Chair) NCI
  • Aidan Heerdegen (AH) and Andy M Hogg (AMH) CLEX, Andrew Kiss (AK) COSIMA, ANU
  • Russ Fiedler (RF), Matt Chamberlain (MC) CSIRO Hobart
  • Nic Hannah (NH) Double Precision

COSIMA Models

Profiling

MW: Been profiling CICE; score-p profiling doesn’t work. Been timing by time step. Anomalously long time spent at step 72. AH: Could it be the atmosphere being updated? JRA55 is 3-hourly. Not sure of the timestep. MW: Seem to have lost my logs. Not sure of the best way to handle it.

CM2 Harmonisation update

AH: Peter has been testing the release candidate. Russ supplied a diag_table which just outputs fields for the first 2 time steps, which is really good for seeing code issues. Russ found some bugs introduced by me: a couple of logic errors with preprocessor flags and the omission of a couple of lines that got lost in translation. Confident the latest update has squashed all the bugs. MW: Not old bugs? AH: Did find some old issues. Russ found a stuffed iceberg file. RF: Not related, but it is something they were using for CMIP6. AH: Did find some old bugs; had to emulate the lack of reproducibility from the Red Sea salinity fix timing bug to be able to closely reproduce CM2 output. Put a flag in to do the wrong thing to match theirs, will remove before merging. MW: I thought the Red Sea fix had been changed to be faster but not reproducible. RF: That’s right, but that’s not the issue. This has to do with timing. Aidan fixed it, but it’s not compatible with what they are using. AH: Just need something that reproduces CM2 output.

Narrator: The new way of doing the salt fix will reproduce over time steps, but is not bit reproducible with the old algorithm. That effect is not seen in these tests.

AH: Peter has a test suite which is old CM2, and a copy which uses the updated MOM. He compiles the new code manually and runs the two suites side by side. Both use Russ’ diag_table. Just find out which fields don’t match. Most are the same, a few different, which seem to be affected by the same issue. Once we’re good for a few time steps then maybe look at them after a few months. RF: Once chaos starts, hard to say. As long as nothing gross is happening. Unless there is something further on with coupling. AH: Yes, look after a month and check it looks close. MW: Not trying to be bit reproducible? AH: Just want to fix my bugs. RF: Make sure you’re getting the same forcing fields. Can see out in the open ocean hardly any change, just noise. This means we’re close. Saw the outline of where the forcing field is supposed to be; the bug in the forcing field data showed up, which indicated the issue. AH: Once we’ve confirmed it’s fixed, will merge the PR and then move on to ESM.

MW: Will the CM2 code remain in step with the MOM5 code? RF: CSIRO Aspendale not doing much code development at the moment. AH: Peter is pulling directly from his GitHub repo, but once it is harmonised they will pull directly from the MOM5 repo. They will want to have a tag and pull from the tag. RF: Yes, they will want frozen versions. AH: Should have some automated testing; if we find a bug, should be able to update the CM2 code and confirm it doesn’t change important answers.

AH: Short answer: Lots of progress. I made lots of bugs and Russ found them. Thanks Russ. NH: Yes thanks Russ.

Model reproducibility and payu bug

NH: Working on documentation, wiki, tech report and model paper. Would like to do more. Wiki doc easier as a brain dump. Made sure ACCESS-OM2 Jenkins tests are passing. Takes time, something always seems to go wrong. Six tests passing and useful. Repro test working and now reproducing across restarts. Wasn’t working due to (1) a payu bug, (2) the Red Sea fix and (3) compiling with repro.
NH: Doing 2 runs with and without that payu bug at 1 and 0.25 degrees. Doing 4 years as individual 1 year submits, to make sure the bug is not too serious. The way the coupling field restarts are done is not good. The ocean has to write out a restart for CICE (o2i.nc). The copying of this restart file was missing. It existed in the past, but the refactor with libaccessom2 and the change of payu model driver didn’t carry it over. This means the forcing fields the ice model gets for the first coupling step at the beginning of a new submit are from the beginning of the whole run, not the previous run. The ice model is getting the wrong forcing for the first 3 hours.
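A minimal sketch of the missing driver step NH describes; the function and directory names are hypothetical, and only the o2i.nc filename is from the discussion:

```python
# Hypothetical sketch of the missing payu driver step: carry the
# ocean-to-ice coupling restart (o2i.nc) forward from the previous run's
# restart directory into the new work directory, so CICE's first coupling
# step sees fields from the end of the previous submit rather than from
# the start of the whole run. Function and path names are illustrative.
import os
import shutil

def stage_coupling_restart(prior_restart_dir, work_dir):
    src = os.path.join(prior_restart_dir, 'o2i.nc')
    if os.path.exists(src):
        # warm start: use the fields written at the end of the previous run
        shutil.copy(src, os.path.join(work_dir, 'o2i.nc'))
    # else: cold start; the initial o2i.nc from the input directory is used
```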
MW: Has it been fixed? All runs affected? AK: Yes, fixed now. Need to scope which runs are affected. Only since YATM? NH: Yes. If your run uses YATM it will have this problem. Around the time the bug was introduced we restructured how config.yaml is organised and created the libaccessom2 driver, and the bug came in at that point. MW: Used to have an oasis driver that did that. NH: A restart repro test existed but was failing for other reasons, not being kept up to date. If that test had been passing and then started failing, this would have been noticed. Doing a post mortem to see if there is anything significant on a 5 year run. Gut feeling: just in the ice. RF: Will just be the SST that it sees. If running a month at a time, significant. Yearly, not so important. Also depends what was in the initial coupling field. NH: Initial field correct, probably January. RF: Didn’t get updated for changes to landmasks? NH: Land has been eliminated so not necessary. NH: Any run which is a multiple of 1 year, the problem is smaller. AH: Quarter and 1 degree aren’t that affected, tenth most affected. NH: Could do 1 month 1 degree runs. AH: Good idea. Don’t forget about the runspersub option, could do 50 in a single submit. MW: The payu restart flag now works as well. Could be useful for testing reproducibility. NH: This could be a problem in other cases as well. The existing restart is based on a specific time. May be correct for the specific model it was created for. RF: Should be matched to the initial condition, with correct fields. MW: This is a cold start? NH: Needs to be created each time based on the start time of your forcing. AH: Write code into the model to read in the IC and write it back out to the coupling fields? NH: Something like that might be good.
AK: A bunch of fields: SST, SSS, SS velocity, SS slope, frazil ice formation energy. RF: SST and SSS are the only ones not zero in a cold start. AK: Replace by the initial condition for the entire experiment? NH: There is a single file in the ACCESS-OM2 input directory that all experiments use. NH: Could diff that against what it should have been. MC: That is the cold start bug, not so important. Warm start bug fixed? NH: Yes, fixed in the latest version, 0.11.2. AK: People aren’t using that? MW: No, because it was broken. Now fixed. AH: Arguably should delete payu versions with the known warm start bug. Or back port the fix? MW: Don’t have a framework to back port fixes. AH: How many versions affected? NH: Put a warning message/assert in that stops and doesn’t let it load. MW: Happy to delete old versions. Some people use a specific payu version. Easy to put warnings in module files. Can also delete old ones. Not a huge problem.
AH: Figure out which payu versions are affected and make a decision based on that. MW: Only those with libaccessom2. AH: Don’t delete straight away. Turn off modules first. See if there are people affected. AK: Could be people not using ACCESS-OM2. AH: Yes, but they can use new versions. Need to make sure people are not using buggy code. AK: Possibly move to a new space. AH: Yes, but might not be necessary. MW: May be impossible to back port fixes. Driver might not be functional. No problem with doing backports in principle, just not sure how.
AH/MW: Might not need to back port, should:
  1. Confirm payu/0.11.2 working correctly
  2. Set as default version
  3. Determine which payu versions affected
  4. Turn off affected modules in the modulefile and issue a message about the bug, what module to load, and to email climate_help if users still have issues
  5. When complaints come in, assess individual cases
  6. If necessary move payu module to non-app path
  7. Delete old versions?
2 week time frame.
MW: People shouldn’t be encouraged to omit module versions.
MW: Make sure 0.11.2 is working correctly. Works for NH and AH. AK would be a good test for it, as he is running. AK: Not running at the moment. Can we use the old mppnccombine with payu/0.11.2? AH: Yes. MW: Use whichever you want. AH: Works better for 1 deg in any case.
MW: Added a restart directory feature: run 0 uses the restart and resets counters back to zero. AK: Had been copying stuff. MW: I’ve been symlinking and other hideous things. AK: Documents what you did better. AH: Used to have problems with drivers trying to delete symlinks when cleaning up restart directories.
AH: Will finish the manifest work this week. Chatted with Marshall and am reimplementing it a bit differently. Will make NH’s job a lot easier: the run config has all the files, just need to clone and run. NH: Awesome.
NH: Want any post-mortem or checking on the tenth model for the payu bug? Could do some short 1 month runs. AK: Not sure what we would do with the information. Diagnosis without treatment; interesting from an academic viewpoint. Planning to do a longer re-run with other changes and it will be fixed in that. Interesting to see a couple of months and see the scope of the issue. Is it negligible? Maybe tell people. AH: Choose a worst case: southern summer? NH: Ok, might do that.

OpenMPI

MW: Been using OpenMPI/3.0.3. Working well. Speeds the same as 1.10. Uses UCX by default. Turn off all flags, except error aggregation if you want. Can try 3.1.3, but it had some issues. Likely the version on the next machine.
AH: Test on Jenkins with the new OpenMPI? MW: Good idea.
MW confirmed that using hyperthreading option in payu is harmless (might even be on by default).

COSIMA Models

Bathymetry

RF: Wanted to get rid of the Ob river? 1150 looks good. Need an inlet to keep runoff in the correct place. See the GitHub issue. Plot shows the 0.25 degree cell size cut-off.
AMH: Need to get rid of the Ob. Russ’ plot at 1150m looks good, maybe smooth out corners. RF: Have to look at index space: straight edges, no inlet, things like that. Depth is the minimum depth, 10m; a lot shallower in actuality. AK: The only real reason to keep it is to have the runoff in the right place. Had to smooth to stop the model crashing. Main reason to keep it is to make sure runoff is mapped correctly. AMH: Where is runoff coming from? Take it too far up and it might get remapped to the wrong embayment. That’s why I like the minimal change. It is stable. AK: Yes, since Russ’ fix that stops salinity dropping below zero with ice formation. AH: If your map had water at depth zero, as opposed to land, then you could follow the water along until it is > 0. Say this is water, use it for remapping but not for the model. AK: Need a separate file? AH: Not necessarily. Remapping uses its own logic anyway. AK: Remapping takes no account of topography. NH: Could make the distance function smarter: use a directional weight, something like AH suggested, or take into account topography. AK: Go downslope.
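A minimal sketch of the smarter distance function NH suggests; the array names, weighting, and brute-force search are all illustrative assumptions:

```python
# Illustrative sketch of a topography-aware runoff remapping weight:
# rather than picking the nearest wet cell by pure distance, bias the
# choice toward deeper targets so runoff tends to "go downslope" into
# the intended embayment. All names and the weighting are assumptions.
import numpy as np

def runoff_target(i, j, depth, is_wet, depth_weight=0.01):
    """Pick the wet cell to receive runoff from source cell (i, j)."""
    ni, nj = depth.shape
    best, best_cost = None, np.inf
    for ii in range(ni):           # brute force for clarity; a real
        for jj in range(nj):       # implementation would search locally
            if not is_wet[ii, jj]:
                continue
            dist = np.hypot(ii - i, jj - j)
            cost = dist - depth_weight * depth[ii, jj]  # prefer deeper cells
            if cost < best_cost:
                best, best_cost = (ii, jj), cost
    return best
```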
RF: The other problem was Southampton Island. Just taking out the inlet was sufficient. AMH: Keep the island separated from the mainland? RF: Yes. Hasn’t been causing problems? AK: No. AMH: Will leave cells smaller than 1150m. AK: Yes, but not too bad. Also an abrupt change in spacing. RF: Yes, the tripolar grid has a discontinuity. AH: Cut off at 1150m, what was it before? AK: 880m. All crashes I had with the ice remap error were less than 1100m. Those can be eliminated by closing channels. AMH: Worried about Southampton. AK: Never had issues there. Will be getting new constraints. Had to put damping on Kara Strait, and had issues with a seamount off the tip of Severny. AMH: Ok, keep it at 1150m and see.
AK: In the quarter degree, Baffin Island is attached to the Canadian mainland. The tenth has much more open water. A lot of it extremely shallow (less than 100m), so unlikely important for sea water transport, but likely important for ice transport. AMH: And therefore fresh water transport. AH: Who will do this? RF: Planning to do it today or tomorrow. AMH: Awesome, thanks.

Profiling

AMH: Getting different numbers between IAF and RYF due to AK needing more ice time steps in the IAF case. He can’t run with ndtd=2, so it is load imbalanced towards CICE; minimal uses ndtd=2. AK: The time difference is due to the value of ndtd. Ruth is still getting bad departure points with minimal. She reduces the ocean time step for a single submit; I increased ndtd instead. AMH: This has caused a load imbalance. Not the same as the optimisation that NH targeted. NH used ndtd=2 in optimisation. AK is using 50% more time.
MW: What optimisation? AMH: When NH was looking at load balancing. AK is using 50% more time steps, and taking 50% more time.
NH: Now have a rebalanced tenth minimal with ndtd=3. With the bathymetry changes we might not need it. AH: Hold off on that until AK can tell if we need it. AK: May still on occasion need to reduce the time step every 5 or 10 years, which is preferable to ndtd=3. IAF variability means we can’t guarantee it will work with every year.
MW: OASIS timing issue. Struggling to define the main loop time. Looking at 1 deg, outputting the time of every time step. Not directly useful due to overhead. AH: Give you scaling? MW: Not sure.
MW: Timing between 170-200ms per step. Step 32 gets a big number: 36s in one run, 72s in the other. Is it just waiting? Doing IO? Maybe some sort of OASIS thing happening to bootstrap. Get infrequent huge time steps; run again and don’t get them. Going to remove the largest timestep. Anyone know what is causing this?
NH: What are you profiling? MW: Just the coupling step. Reporting the coupling code.
MW: Does it do a lot of IO on that first coupling step? NH: Yes it does on the first step. What about CICE diagnostics? Are they printing to ice_diag.d? Should be consistent. See if it goes away when they are turned off?
RF: CICE does IO through one PE, so does a global collective. MW: Could be IO and MPI collective issues. Not sure if this is legitimate timing or not?
NH: Not sure what the bigger picture is, but I find targeting specific routines useful to look at load imbalance. NH: Definitely look into the CICE diagnostics.
MW: Timing is so inconsistent. AH: Run a bunch and use the minimum. Turn off all diagnostics. AH: For the paper, MOM scales well. Need to say something about CICE scaling. Doesn’t need to be the final word. MOM gives some leeway and these are the best configurations …
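A sketch of the “run a bunch and use the minimum” approach AH suggests, with made-up numbers echoing the spikes described above:

```python
# Time each step over several repeat runs and keep the per-step minimum,
# so one-off I/O or MPI stalls (like the 36s and 72s outliers above)
# don't contaminate the scaling numbers. Numbers below are made up.
import numpy as np

def robust_step_times(runs):
    """runs: one array of per-step wall times for each repeat run."""
    return np.vstack(runs).min(axis=0)   # per-step minimum across repeats

# Three repeats of a 5-step run; the spike at step 3 is filtered out.
runs = [np.array([0.18, 0.17, 36.0, 0.19, 0.18]),
        np.array([0.17, 0.18, 0.18, 0.18, 0.17]),
        np.array([0.18, 0.17, 0.19, 0.17, 0.18])]
print(robust_step_times(runs))   # [0.17 0.17 0.18 0.17 0.17]
```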
NH: Happy to help. Can do more fine grained stuff, do some counting. MW: That’s like score-p, but it dies with CICE.

Grid scale noise

RF: Chris Chapman has a problem with submesoscale stuff (see issue). There is a smoothing feature in submeso but it is documented as not reproducing. Think I found a bug. It does smoothing of the mixed layer. It is possible to put the mixed layer into rock with the smoothing; there doesn’t seem to be any check. Might get some others to look at it. If they agree we might be able to fix it and reduce the checkerboard. AK: This is in MOM6? Also in MOM5? RF: There is a namelist parameter which says not to use it because it’s not repro, but really it’s because it’s buggy. No reason it shouldn’t reproduce.
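A minimal sketch of the missing check RF describes, assuming illustrative array names and stencil weights (the real fix would be in the MOM submeso code):

```python
# After smoothing the mixed layer depth, clip it to the local bathymetry
# so the smoothed mixed layer can never be placed into rock. Names and
# the 5-point stencil weights are illustrative assumptions.
import numpy as np

def smooth_mld(mld, depth):
    sm = mld.copy()
    sm[1:-1, 1:-1] = 0.5 * mld[1:-1, 1:-1] + 0.125 * (
        mld[:-2, 1:-1] + mld[2:, 1:-1] + mld[1:-1, :-2] + mld[1:-1, 2:])
    # the missing check: mixed layer cannot be deeper than the ocean bottom
    return np.minimum(sm, depth)
```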
MW: Is this filtering a numerical mode? AK: KPP is purely numerical, so adjacent columns can decouple. RF: Will point out the code and see if people agree. AK: If it gets fixed it could be good to put in for the next tenth degree run.

Technical Working Group Meeting, November 2018

Minutes

Date: 13th November 2018
Attendees:

  • Marshall Ward (MW) (Chair) NCI
  • Aidan Heerdegen (AH) CLEX, and Andrew Kiss (AK) COSIMA, ANU
  • Russ Fiedler (RF), Matt Chamberlain (MC) CSIRO Hobart
  • Nic Hannah (NH) Double Precision

Payu update

MW: payu is now python3 compatible. Can be run from a .local install. No longer uses modules. Tagged 0.11.1. AH: Will also install into the conda environment on raijin. NH: Sounds great. Have used it in a conda python27 environment. Good to have python3.

MW: Want to get this to a position where I can leave it to others to support. Might get GFDL interested in it. Have time to wrap up a lot of things.

AK: How rigid is the 3 digit output numbering in archives? MW: Only a print statement. Should work with higher numbers. AH: Won’t list nicely. MW: Had meant to add a format option.
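The format option MW mentions might look something like this (a sketch only; the function name and width option are hypothetical):

```python
# Sketch of a configurable output directory format for payu archives: the
# zero-padding width is effectively 3 ("output000" style), and a format
# option would let users widen it so directories list nicely past 999.
def output_dirname(counter, width=3):
    return f'output{counter:0{width}d}'

print(output_dirname(7))            # output007
print(output_dirname(1234))         # output1234 (works, but lists badly)
print(output_dirname(7, width=4))   # output0007 (lists nicely up to 9999)
```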

MW: Try out the new payu, want to make it the default.

TWG Organisation

AH: Peter Dobrohotoff sends his apologies, cannot make meeting.

AH: Do we need to decide on a new chairman? MW: Happy to resign straight away. Considering going to GFDL in February to bootstrap remote working. AH: Nic, did you want to take over? NH: No, convinced it’s not a good idea. AH: Sort something out next meeting. Last one of the year?

OM2/CM2 MOM5 Harmonisation


AH: A shame, as PD tested harmonised MOM5 but used incorrect namelist options. Losing some momentum, as I would like to have that checked off so we could start the harmonised ESM.

MC: Wrong namelist options? RF: Didn’t have correct namelist options to include my new mixing scheme.

AH: They were also concerned about background mixing, in case that was having an effect. Could point them to the relevant PR/Issue on GitHub with all the plots and documentation showing it was working correctly. This is a valuable resource and a good way of working.

AH: Richard Matear contacted me and wanted to know status of ESM harmonisation and what he could do to help it progress.

ESM MOM5 Harmonisation

ESM is using a version of MOM5, updated to the beginning of the year, with WOMBAT added by MC. It was decided not to continue with ESM harmonisation until CM2 is bedded down, as it requires some of the code changes from the CM2 harmonisation.

MW: Is the cylc suite used for testing using the MOM repo compilation? AH: I believe Peter is currently turning off the automatic compilation in the suite and using the repo compilation script on the command line to create the MOM executable. MW: I think improving/streamlining and harmonising the cylc suite is as important as harmonising the code. AH: I would have liked a test suite that incorporates this, but this is the way Peter has been comfortable working. Can’t progress ESM until we have the all clear that CM2 is working correctly.

MW: Richard & Matt are using the ESM model? Not using GFDL stack? MC: GFDL in decadal project. This harmonisation will get WOMBAT in the MOM5 master branch, which has long been a goal. Decadal project not using UM. Research effort there is data assimilation and reanalysis that they’re running, rather than updating the model. Won’t hear much from them. When AH gets the WOMBAT code in, please contact us. Have experience using payu runs at 0.25 deg. Will fix you up with files and help when it comes to testing. AH: I will mostly be relying on others to run stuff. MC: Not running under payu currently. Out of the loop at the moment. Did run one of Kial’s runs. I need help running with payu. AH: Yes we can do that together. Holger is trying to get payu to run with ESM, making slow progress. MW: Who is supporting the ESM? Surprised Richard will contact AH. AH: Someone told him it was part of this code harmonisation. MC: Richard’s interest is WOMBAT in MOM5. AH: ESM will be the CLEX coupled model. MW: Who will be responsible for ESM? AH: Tilo is doing a CMIP6 submission with ESM. CLEX will be wanting to use all of Tilo’s runs to spin off their own experiments. At that point CLEX CMS will support the model with payu on NCI HPC systems. MC: Tilo is doing the work of the equivalent of the entire CM2 team to get ESM working. We support Tilo somewhat, and there are others who are no longer formally part of the team but contribute. Richard has interests in this space also. AH: Tilo can benefit from work we’re doing. Scott found a 10% improvement in UM speed. MC: Speed/efficiency not a priority right now. Focus on land model, forcing etc. Not #1 priority. AH: If they’re open to that input, they can still get the benefit even if it isn’t their focus.

MC: Have to go to another meeting.

AH: Do we need to move the meeting? MW: Happy to move to another day if it works for people. AH: Doodle poll? RF: Next year. MW: Ok, do something next year.

Minimal 0.1 MOM

AH: Ruth’s minimal 0.1 deg configuration is working really well. She would like to get it to run a little faster so she could get 2 months per submit, which would improve her throughput a lot. Does NH have any ideas to speed it up? Seems like the MOM model has 10 minutes of spare time. Is it CICE bound? NH: That might be the initialisation time. AH: Don’t MOM timings take into account initialisation? NH: Not until recently. Marshall fixed it. MW: Clocks weren’t showing MPI initialisation time, just MOM initialisation time. AH: Could it be 10 minutes? MW: I would be surprised. Would guess 5 mins, less than 10. Big model, spends a lot of time on field exchange.

AH: Not compiled with AVX2. Will that help? MW: Did an AVX and AVX2 test. Could see the difference, but it wasn’t large enough to bother making non-compatible binaries. NH: Slack me the path and I can take a look. Haven’t spent a lot of time trying to optimise it. MW: Can sometimes improve time by changing layout. GFDL tried very long tiles which mean halo updates are only north/south. NH: She might have an older config. I switched to Sandybridge for efficiency. MW: I think you can get a 7-8% speed up going from AVX to AVX2. AH: I advised Ruth to use Broadwell due to memory requirements making for better throughput. AH: I suggested she use a higher diag_steps, but RF pointed out the global scalar diagnostics mean she is doing daily global MPI calls. RF said it doesn’t necessarily have to be this way? Could it be changed in FMS? RF: Yeah, all the diagnostic code. In many cases the time average can be commuted with the area average. Instead of doing an MPP sum or MPP global sum every timestep, you can do local sums on the local process and call MPP sum only when you need to. Could rewrite the MOM code to do a cumulative average and do an instantaneous output; sort of fudge an average. MW: Wouldn’t make much difference to speed. RF: Depends how much those global sums are hurting you, if at all; it might be that a single global sum acts as a synchronisation point and doesn’t really matter. MW: I told NCI MOM didn’t do collectives because they were so fast they didn’t show up in the profile. So unlikely to help a lot. RF: MOM collectives are very simple. If not doing bitwise stuff, just taking a collective of one number. MW: Caveat, only tested at 0.25 deg, so can’t know for sure it is the same at 0.1 deg. Should do it because it will eventually start to bite. Could do a profile?
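A sketch of the restructuring RF describes, using mpi4py to stand in for FMS’s mpp_sum; all names are illustrative:

```python
# Accumulate diagnostic sums locally every timestep and perform the global
# reduction (an implicit synchronisation point) only when the diagnostic
# is written, instead of calling a global sum on every step.
from mpi4py import MPI

comm = MPI.COMM_WORLD
n_steps, diag_steps = 8, 4

def local_contribution(step):
    # stand-in for this rank's local area sum of some field
    return float(comm.rank + step)

local_accum = 0.0
for step in range(n_steps):
    local_accum += local_contribution(step)        # local work only, no MPI
    if (step + 1) % diag_steps == 0:
        # one collective per output interval instead of one per timestep
        total = comm.allreduce(local_accum, op=MPI.SUM)
        if comm.rank == 0:
            print(f'step {step + 1}: interval mean = {total / diag_steps}')
        local_accum = 0.0
```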

AH: AK only does 2 months/submit, so maybe we would all be better off running a minimal config? MW: Doesn’t bode well for exascale. AH: So many constraints with PBS etc. AK: Optimised for model+machine+queue constraints, not for the model on its own. NH: Could just bump all the CPUs by 10%? AH: You’ve got a nice sweet spot there; first try AVX2 and see if we can get the speed up we need.

MW: Vectorisation can help. AH: Would bigger tiles help? MW: So much time is spent moving in and out of L1 cache that it doesn’t make much difference. AH: Broadwell got bigger caches? MW: Bigger L3 but 12 more cores. NH: Is she using a 600s timestep without crashing? AH: Had an ice remap crash after a couple of years. AK: Unclear if that will happen again. Doing RYF, and I found once the crashes started happening they kept happening. RF: Using the latest bathymetry? AK: Yes. RF: Any difference? AK: No idea. NH: I think it is generally more stable.

MW: Can play with the barotropic halo. The barotropic solver has a halo of 10 so it can do its work every 10 steps. Might be able to get some speed up by playing with that. AH: Put the path to Ruth’s control directory on the TWG slack channel.

NCRIS ACCESS meeting

NCRIS is doing a scoping study to see if it is feasible for a team of 15-20 people to support ACCESS modelling in Australia, which would be used for submission for funding from NCRIS. The meeting was to get feedback to help write the submission.

Some discussion of the experience of the meeting.

Calendar issues

RF: MOM uses the Proleptic Gregorian calendar type, but does not use the correct calendar attribute when outputting the file. It sets it as Gregorian instead. So, when using days since 01/01/0001 there is a jump in October 1582 depending on which calendar is used. Get a 2 day offset for the IAF files because of this incorrect calendar attribute. Found the Python netCDF interface uses udunits calendars and has problems. Had to force it to proleptic Gregorian to read dates correctly. Big issue when dealing with daily data. Output files need to be fixed. Could change the calendar attribute to Proleptic Gregorian or change the units to be days since a year after 1582. MW: GFDL use since 1900? RF: Yes, as this is what Ferret uses.
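The 2-day offset can be demonstrated with cftime, which the Python netCDF stack uses for calendar handling:

```python
# The same numeric time decodes to dates two days apart under the mixed
# Gregorian calendar (Julian before October 1582, as implied by the
# "gregorian"/"standard" attribute) versus the proleptic Gregorian
# calendar MOM actually integrates with.
import cftime

units = 'days since 0001-01-01'
t = 657000.0   # an arbitrary time well after 1582

print(cftime.num2date(t, units, calendar='standard'))
print(cftime.num2date(t, units, calendar='proleptic_gregorian'))
# The dates differ by 2 days: the Julian leg of the mixed calendar gains
# 12 leap days before 1582, and the October 1582 jump removes only 10.
```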

AH: Had a lot of date issues using Python. The numpy date library has a limited date range available due to its nanosecond resolution. We often have to do date offsets anyway, so probably don’t see this issue as much. Should we put proleptic Gregorian into MOM? MW: Shouldn’t we change the start date? RF: That is the easiest thing. There is a lot of broken software that doesn’t treat these calendars correctly. MW: Should tell GFDL about this. RF: Looked at the code and made some changes, but not uploaded. AK: Is MOM using the correct dates, as with coupling to CICE etc.? RF: Works ok internally. AH: Arguably a bug if they’re using proleptic and not using the correct attribute. RF: Yes. CMIP6 accounts for this. Checks for dates before 1582 and requires using proleptic Gregorian. Future runs should have an offset of some later date.

RF: Getting a huge number of messages from restoring files starting at year 0000. The restoring files are on a time modulo axis and were created with Ferret, which automatically treats any file with a start year of zero or one as modulo. However year zero does not exist and is incorrect. Just need to change that attribute in the restoring files; it won’t make any difference to operation but will save a huge number of warning messages. MW: I get 482,000 lines of errors. I would be very happy if this is fixed. MW: Someone should change those fields. RF: I don’t have access. MW: Should go and edit the public forcing fields. What specific files? AH: If you’re talking about the ACCESS-OM2 configs, NH has the most ability to change them. MW: salt_sfc_restore? RF: Yes, and temperature, chlorophyll. Anything seasonal, a restoring. MW: Anything that says “months since 0000”? AH: Yes, change to 0001. RF: Anything that uses that date (zero years) can be changed. AK: Anything that isn’t JRA that isn’t multiyear? RF: Maybe runoff? Do we use the JRA runoff? The problem really is the stuff MOM reads directly, like sponges. NH: I am happy to look into this. I might be the only one with access, hope not. Have been thinking about this for a while. Changed the OASIS code as well to ensure DEBUG_LEVEL zero does not output anything. Was outputting thousands of lines. Also an Andy Hogg GitHub issue, and this was the next one on my list. MW: So far I can only find salt restore and ssw shortwave. RF: Shouldn’t be using that. Should be using the GFDL formulation which reads in chl.nc.
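The one-attribute fix might look like this (a sketch; salt_sfc_restore is named in the discussion, but the time variable name is an assumption):

```python
# Change the time units from the non-existent year 0000 to 0001. Ferret
# treats a start year of zero OR one as modulo, so this keeps the modulo
# behaviour while silencing the warnings.
import netCDF4

with netCDF4.Dataset('salt_sfc_restore.nc', 'r+') as ds:
    time = ds.variables['time']   # assumed variable name
    if 'since 0000' in time.units:
        time.units = time.units.replace('since 0000', 'since 0001', 1)
```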

AK: Some files in those ACCESS-OM2 input tarballs aren’t used. Should they be removed? NH: Posted on slack about this? AK: Yes, but not sure they aren’t used by someone. NH: It’s a bit messy how this is done. Experiments should really just have a bunch of files and grab what they need. Would save a lot of space. Currently we are versioning sets of files rather than individual files.

mppnccombine-fast

AK: Some issues on GitHub. RF: Been discussing this with AH and Scott Wales. An attribute needs to be removed. AH: The biggest issue was regional outputs having incorrect dimensioning, which has been fixed. Also fixed the unlimited dimension getting squashed. Also another issue with passing too many files on the command line due to an MPI issue. Requires a change to payu, as globbing is now done internally so any glob needs to be quoted. It’s on my list of tasks.

MW: Original tool used a pattern? AH: Didn’t implement that in mppnccombine-fast, maybe we should? MW: Stopped doing that in payu to support some coupled FMS codes where tiles didn’t start at zero, but could go back to the old way. AH: Does using the pattern work with masked configs when tiles are missing? I can’t recall. MW: Not sure.

ACCESS-OM2 disk usage

AH: AK and I went through some of the 0.1 deg output directories and found we could get significant space savings in the ice diagnostics. AK: Ice outputs are not compressed, and the daily data is in individual files, half of which is grid data. Can get an 8-fold decrease in size. Out of 20TB of total data can save 12TB of space. AH: Want to make a post processing script to run this automatically. AK: Yes, also delete all the zero length log files. AH: This was to clean up for archiving. MW: payu should do this, maybe it is not looking in the right places. NH: Fortran has an option when closing a file to delete it if empty, so looking into that. Also some CICE logs just have one line at the top with exactly the same text. AH: Yes, we found those, matched them by byte count and deleted them. MW: If payu isn’t deleting zero length files it’s not sweeping through submodels. AH: A lot tidier after cleaning. AK: Yes, an hour well spent.
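A sketch of the kind of post-processing compression discussed (filenames are illustrative; the quoted 8-fold saving came from compression plus dropping duplicated grid data):

```python
# Rewrite an uncompressed daily CICE output file with zlib deflation
# enabled on every variable. A hypothetical sketch using netCDF4-python.
import netCDF4

def compress(src_path, dst_path, level=5):
    with netCDF4.Dataset(src_path) as src, \
         netCDF4.Dataset(dst_path, 'w') as dst:
        dst.setncatts({a: src.getncattr(a) for a in src.ncattrs()})
        for name, dim in src.dimensions.items():
            dst.createDimension(name, None if dim.isunlimited() else len(dim))
        for name, var in src.variables.items():
            fill = getattr(var, '_FillValue', None)
            out = dst.createVariable(name, var.dtype, var.dimensions,
                                     zlib=True, complevel=level,
                                     fill_value=fill)
            out.setncatts({a: var.getncattr(a) for a in var.ncattrs()
                           if a != '_FillValue'})
            out[...] = var[...]

# e.g. compress('iceh.2018-12-11.nc', 'iceh.2018-12-11.compressed.nc')
```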