Testing E3SM Diagnostics
Unit and integration tests
Run all automated tests by doing the following:
pip install .     # Install your changes
./tests/test.sh   # Run all unit and integration tests
If these tests pass, you’re done. If they fail unexpectedly, however, your code may have changed what the output looks like.
ls tests/integration/image_check_failures
will show you all the images that differ from expected.
If you see unexpected images listed, you’ll have to investigate why your code changed what the images look like.
If the only images listed are ones you expected to see changed, then you need to update the expected images.
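When investigating a flagged image, it can help to confirm exactly which files differ byte-for-byte from their expected copies. A minimal sketch (the `list_changed_images` helper and its directory arguments are my own, not part of e3sm_diags):

```shell
# Hypothetical helper (not part of e3sm_diags): list the images under an
# "actual" directory whose bytes differ from, or are missing in, an
# "expected" directory.
list_changed_images() {
    actual_dir=$1
    expected_dir=$2
    find "$actual_dir" -type f -name '*.png' | while read -r img; do
        rel=${img#"$actual_dir"/}
        # cmp -s exits non-zero if the files differ or one is missing
        if ! cmp -s "$img" "$expected_dir/$rel"; then
            echo "$rel"
        fi
    done
}
```

For the integration test, the "expected" side would be the `integration_test_images` directory on LCRC referenced in the commands that follow.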
cd /lcrc/group/e3sm/public_html/e3sm_diags_test_data/integration/expected
cat README.md  # This will show you the version, date, and hash of the current expected images.
# Using that information, do the following:
mv integration_test_images previous_output/integration_test_images_<version>_<date>_<hash>
# `cd` back into your E3SM Diags directory.
# Your output will now become the new expectation.
mv all_sets_results /lcrc/group/e3sm/public_html/e3sm_diags_test_data/integration/expected/integration_test_images
cd /lcrc/group/e3sm/public_html/e3sm_diags_test_data/integration/expected
Run ./tests/test.sh again. Now, the test should pass.
After merging your pull request, edit README.md. The version should be the version of E3SM Diags you ran ./tests/test.sh with, the date should be the date you ran ./tests/test.sh on, and the hash should be for the top commit shown by git log or on https://github.com/E3SM-Project/e3sm_diags/commits/main.
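The `<version>`, `<date>`, and `<hash>` pieces used in the directory names can be gathered from the command line. A sketch, assuming the installed package exposes `e3sm_diags.__version__` (the fallbacks keep it runnable outside a checkout):

```shell
# Collect the three pieces of information used to name previous_output
# directories. `e3sm_diags.__version__` is an assumption about the package
# layout; check `cat README.md` in the expected directory for the format in use.
version=$(python -c 'import e3sm_diags; print(e3sm_diags.__version__)' 2>/dev/null || echo unknown)
date_str=$(date +%Y_%m_%d)
hash=$(git log -1 --format=%h 2>/dev/null || echo unknown)
echo "integration_test_images_${version}_${date_str}_${hash}"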
Automated tests
We have a GitHub Actions Continuous Integration / Continuous Delivery (CI/CD) workflow.
The unit and integration tests are run automatically as part of this.
Complete run test
tests/integration/complete_run.py checks the images generated by all diagnostics to see if any differ from the expected images.
This test is not run as part of the unit test suite, because it relies on a large
quantity of data found on LCRC (Anvil/Chrysalis).
Warning
You have to run this test manually. It is not run as part of the CI/CD workflow.
If you’ve been developing code on a different machine, you can get the code onto an LCRC machine by doing the following. First, push your code to GitHub. Then log into an LCRC machine and run:
git fetch <fork-name> <branch-name>                       # Fetch the branch you just pushed
git checkout -b run-lcrc-test <fork-name>/<branch-name>
Now that you have your changes on LCRC, enter your development environment. Then:
pip install .                              # Install your changes
pytest tests/integration/complete_run.py
If this test passes, you’re done. If it fails, however, your code has changed what the output looks like.
ls image_check_failures
will show you all the images that differ from expected.
If you see unexpected images listed, you’ll have to investigate why your code changed what the images look like.
If the only images listed are ones you expected to see changed, then you need to update the expected images.
cd /lcrc/group/e3sm/public_html/e3sm_diags_test_data/unit_test_complete_run/expected
cat README.md  # This will show you the version, date, and hash of the current expected images.
# Using that information, do the following:
mv all_sets previous_output/all_sets_<version>_<date>_<hash>
mv image_list_all_sets.txt previous_output/image_list_all_sets_<version>_<date>_<hash>.txt
# `cd` back into your E3SM Diags directory.
# Your output will now become the new expectation.
mv <version>_all_sets/ /lcrc/group/e3sm/public_html/e3sm_diags_test_data/unit_test_complete_run/expected/all_sets
cd /lcrc/group/e3sm/public_html/e3sm_diags_test_data/unit_test_complete_run/expected/all_sets
# This file will list all the expected images.
find . -type f -name '*.png' > ../image_list_all_sets.txt
cd ..
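After regenerating image_list_all_sets.txt, a quick sanity check is that its line count matches the number of .png files actually on disk. A sketch (`count_pngs` is a hypothetical helper, and the example paths assume you are in the `expected` directory):

```shell
# Hypothetical helper: count .png files under a directory tree.
count_pngs() { find "$1" -type f -name '*.png' | wc -l; }

# Example usage from the `expected` directory:
#   [ "$(wc -l < image_list_all_sets.txt)" -eq "$(count_pngs all_sets)" ] \
#       && echo "image list is complete"
```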
Run pytest tests/integration/complete_run.py again. Now, the test should pass.
After merging your pull request, edit README.md. The version should be the version of E3SM Diags you ran pytest tests/integration/complete_run.py with, the date should be the date you ran pytest tests/integration/complete_run.py on, and the hash should be for the top commit shown by git log or on https://github.com/E3SM-Project/e3sm_diags/commits/main.