Veros state class
=================
.. autoclass:: veros.state.VerosState
:members:
:undoc-members:
.. autoclass:: veros.state.VerosSettings
:members:
.. autoclass:: veros.state.VerosVariables
:members:
Command line tools
==================
After installing Veros, you can call these scripts from the command line, from any location on your system.
veros
-----
This is a wrapper script that provides easy access to all Veros command line tools.
.. run-click:: veros.cli.veros:cli
veros-create-mask
-----------------
.. run-click:: veros.cli.veros:cli
:args: create-mask --help
veros-copy-setup
----------------
.. run-click:: veros.cli.veros:cli
:args: copy-setup --help
veros-resubmit
--------------
.. run-click:: veros.cli.veros:cli
:args: resubmit --help
veros-run
---------
.. run-click:: veros.cli.veros:cli
:args: run --help
.. _diagnostics:
Diagnostics
===========
Diagnostics are separate objects (instances of subclasses of :class:`VerosDiagnostic`)
responsible for handling I/O, restart mechanics, and monitoring of the numerical
solution. All available diagnostics are instantiated and added to a dictionary
attribute :attr:`VerosState.diagnostics` (with a key determined by their ``name`` attribute).
Options for diagnostics may be set in the :meth:`VerosSetup.set_diagnostics` method:
::
class MyModelSetup(VerosSetup):
...
def set_diagnostics(self, state):
diagnostics = state.diagnostics
diagnostics['averages'].output_variables = ['psi','u','v']
diagnostics['averages'].sampling_frequency = 3600.
diagnostics['snapshot'].output_variables += ['du']
Base class
----------
This class implements common logic for all diagnostics, which makes it easy
to write your own: just derive from this class and implement its abstract methods.
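As a sketch of what such a subclass could look like (the method signatures are inferred from the members documented below; treat them as assumptions)::

    from veros.diagnostics.base import VerosDiagnostic

    class MeanTemperature(VerosDiagnostic):
        name = "mean_temperature"  # key under which the diagnostic is registered

        def initialize(self, state):
            # called once before the first time step
            self.values = []

        def diagnose(self, state):
            # called at every sampling interval
            self.values.append(float(state.variables.temp.mean()))

        def output(self, state):
            # called at every output interval
            print(f"mean temperature: {self.values[-1]:.4f}")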
.. autoclass:: veros.diagnostics.base.VerosDiagnostic
:members: name, initialize, diagnose, output
Available diagnostics
---------------------
Currently, the following diagnostics are implemented and added to
:obj:`VerosState.diagnostics`:
Snapshot
++++++++
.. autoclass:: veros.diagnostics.snapshot.Snapshot
:members: name, output_variables, sampling_frequency, output_frequency, output_path
Averages
++++++++
.. autoclass:: veros.diagnostics.averages.Averages
:members: name, output_variables, sampling_frequency, output_frequency, output_path
CFL monitor
+++++++++++
.. autoclass:: veros.diagnostics.cfl_monitor.CFLMonitor
:members: name, sampling_frequency, output_frequency
Tracer monitor
++++++++++++++
.. autoclass:: veros.diagnostics.tracer_monitor.TracerMonitor
:members: name, sampling_frequency, output_frequency
Energy
++++++
.. autoclass:: veros.diagnostics.energy.Energy
:members: name, sampling_frequency, output_frequency, output_path
Overturning
+++++++++++
.. autoclass:: veros.diagnostics.overturning.Overturning
:members: name, p_ref, sampling_frequency, output_frequency, output_path
Python API
==========
.. toctree::
:maxdepth: 3
api/veros-setup
api/veros-state
api/routines
api/runtime
api/tools
api/operators
api/distributed
Model settings
==============
The following list of available settings is automatically created from the file :file:`settings.py` in the Veros main folder.
They are available as attributes of the :class:`Veros settings object <veros.state.VerosSettings>`, e.g.: ::
>>> simulation = MyVerosSetup()
>>> settings = simulation.state.settings
>>> print(settings.eq_of_state_type)
1
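Settings are typically assigned in a setup's ``set_parameter`` method; a minimal sketch using a few of the settings listed below::

    from veros import VerosSetup

    class MyVerosSetup(VerosSetup):
        def set_parameter(self, state):
            settings = state.settings
            settings.identifier = "my_run"
            # grid dimensions
            settings.nx, settings.ny, settings.nz = 30, 42, 15
            # tracer time step in seconds
            settings.dt_tracer = 86400.0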
.. exec::
from veros.settings import SETTINGS
for key, sett in SETTINGS.items():
print(".. _setting-{}:".format(key))
print("")
print(".. py:attribute:: VerosSettings.{} = {}".format(key, sett.default))
print("")
print(" {}".format(sett.description))
print("")
Setup gallery
=============
This page gives an overview of the available model setups. To copy the setup file and additional input files (if applicable) to the current working directory, you can make use of the :command:`veros copy-setup` command.
Example::
$ veros copy-setup acc
.. seealso::
More setups are available through the `veros-extra-setups plugin <https://veros-extra-setups.readthedocs.io>`_.
Idealized configurations
------------------------
+-------------------------------------------+-------------------------------------------+
| :doc:`/reference/setups/acc` | :doc:`/reference/setups/acc_basic` |
| | |
| |acc| | |acc_basic| |
+-------------------------------------------+-------------------------------------------+
.. |acc| image:: /_images/gallery/acc.png
:width: 100%
:align: middle
:target: setups/acc.html
:alt: Steady-state stream function
.. |acc_basic| image:: /_images/gallery/acc_basic.png
:width: 100%
:align: middle
:target: setups/acc_basic.html
:alt: Steady-state stream function
.. toctree::
:hidden:
setups/acc
setups/acc_basic
Realistic configurations
------------------------
+--------------------------------------------+-------------------------------------------+
| :doc:`/reference/setups/flexible` | :doc:`/reference/setups/4deg` |
| | |
| |flexible| | |4deg| |
+--------------------------------------------+-------------------------------------------+
| :doc:`/reference/setups/1deg` | :doc:`/reference/setups/north-atlantic` |
| | |
| |1deg| | |northatlantic| |
+--------------------------------------------+-------------------------------------------+
.. |flexible| image:: /_images/gallery/flexible.png
:width: 100%
:align: middle
:target: setups/flexible.html
:alt: Surface velocity at 0.25x0.25 degree resolution
.. |northatlantic| image:: /_images/gallery/north-atlantic.png
:width: 100%
:align: middle
:target: setups/north-atlantic.html
:alt: Resulting average surface speed
.. |4deg| image:: /_images/gallery/4deg.png
:width: 100%
:align: middle
:target: setups/4deg.html
:alt: Stream function after 50 years
.. |1deg| image:: /_images/gallery/1deg.png
:width: 100%
:align: middle
:target: setups/1deg.html
:alt: Stream function
.. toctree::
:hidden:
setups/flexible
setups/4deg
setups/1deg
setups/north-atlantic
Global one-degree model
=======================
.. autoclass:: veros.setups.global_1deg.GlobalOneDegreeSetup
Global four-degree model
========================
.. autoclass:: veros.setups.global_4deg.GlobalFourDegreeSetup
ACC channel model
=================
.. autoclass:: veros.setups.acc.ACCSetup
ACC basic model
=================
.. autoclass:: veros.setups.acc_basic.ACCBasicSetup
Global flexible resolution setup
================================
.. autoclass:: veros.setups.global_flexible.GlobalFlexibleResolutionSetup
North Atlantic regional model
=============================
.. autoclass:: veros.setups.north_atlantic.NorthAtlanticSetup
.. _variables:
Model variables
===============
The variable metadata (i.e., all instances of :class:`~veros.variables.Variable`)
is available as a dictionary through the attribute :attr:`VerosState.var_meta <veros.state.VerosState.var_meta>`. The actual
data arrays are attributes of :attr:`VerosState.variables <veros.state.VerosState.variables>`:
::
state.variables.psi # data array for variable psi
state.var_meta["psi"] # metadata for variable psi
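Model code does not usually assign to slices of these arrays directly; instead, it goes through the ``update`` helper from :mod:`veros.core.operators`, as in this sketch (the same pattern appears in the distributed-memory example elsewhere in these docs)::

    from veros.core.operators import update, at

    vs = state.variables
    # overwrite the interior of psi (excluding the 2-cell halo) with zeros
    vs.psi = update(vs.psi, at[2:-2, 2:-2], 0.0)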
Variable class
--------------
.. autoclass:: veros.variables.Variable
Available variables
-------------------
There are two kinds of variables in Veros. Main variables are always present in a
simulation, while conditional variables are only available if their respective
condition is :obj:`True` at the time of variable allocation.
.. _flag_legend:
Attributes:
| :fa:`clock`: Time-dependent
| :fa:`question-circle`: Conditional
| :fa:`repeat`: Written to restart files by default
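You can inspect these flags programmatically through the variable metadata; a small sketch (attribute names as used by the generator script below)::

    meta = state.var_meta["temp"]
    print(meta.units)             # physical units
    print(meta.dims)              # dimensions, or None for scalar variables
    print(meta.time_dependent)    # True for time-dependent variables
    print(callable(meta.active))  # True for conditional variables
    print(meta.write_to_restart)  # True if written to restart files by default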
.. exec::
from veros.variables import VARIABLES
def format_field(val):
import inspect
if isinstance(val, (tuple, list)):
return "(" + ", ".join(map(str, val)) + ")"
if not callable(val):
return val
src = inspect.getsource(val)
src = src.strip().rstrip(",")
return f"``{src}``"
seen = set()
for key, var in VARIABLES.items():
is_conditional = callable(var.active)
flags = ""
if var.time_dependent:
flags += ":fa:`clock` "
if is_conditional:
flags += ":fa:`question-circle` "
if var.write_to_restart:
flags += ":fa:`repeat` "
print(f".. py:attribute:: VerosVariables.{key}")
if key in seen:
print(" :noindex:")
print("")
print(f" :units: {format_field(var.units)}")
if var.dims is not None:
print(f" :dimensions: {format_field(var.dims)}")
else:
print(f" :dimensions: scalar")
print(f" :type: :py:class:`{format_field(var.dtype) or 'float'}`")
if is_conditional:
condition = format_field(var.active).replace("active=", "")
print(f" :condition: {condition}")
print(f" :attributes: {flags}")
print("")
print(f" {format_field(var.long_description)}")
print("")
seen.add(key)
click==8.1.7
entrypoints==0.4
Pillow==10.3.0
Sphinx==7.3.7
furo==2024.5.6
ipython==8.25.0
netCDF4==1.7.0
xarray==2024.6.0
matplotlib==3.9.0
cmocean==4.0.3
pickleshare==0.7.5
{
"averages": {
"url": "https://sid.erda.dk/share_redirect/CD8UzHCj2Q/inputdata/tutorial_analysis/4deg.averages.nc",
"md5": "1f73ab5052cf19bdea33db6e8e0760f2"
},
"overturning": {
"url": "https://sid.erda.dk/share_redirect/CD8UzHCj2Q/inputdata/tutorial_analysis/4deg.overturning.nc",
"md5": "747b09df66eed31a5a1a1f77913ee26b"
},
"energy": {
"url": "https://sid.erda.dk/share_redirect/CD8UzHCj2Q/inputdata/tutorial_analysis/4deg.energy.nc",
"md5": "1ac08c1f179ce31507a84feed1edc9b2"
},
"snapshot": {
"url": "https://sid.erda.dk/share_redirect/CD8UzHCj2Q/inputdata/tutorial_analysis/4deg.snapshot.nc",
"md5": "c21a31bdff940c30e18dc9def6658ba1"
}
}
Analysis of Veros output
========================
In this tutorial, we will use `xarray <http://xarray.pydata.org/en/stable/>`__ and `matplotlib <https://matplotlib.org>`__ to load, analyze, and plot the model output. We will also use the `cmocean colormaps <https://matplotlib.org/cmocean/>`__. You can run these commands in `IPython <https://ipython.readthedocs.io/en/stable/>`__ or a `Jupyter Notebook <https://jupyter.org>`__. Just make sure to install the dependencies first::
$ pip install xarray matplotlib netcdf4 cmocean
The analysis below is performed on a 100-year integration of the :doc:`global_4deg </reference/setups/4deg>` setup from the :doc:`setup gallery </reference/setup-gallery>`.
If you want to run this analysis yourself, you can `download the data here <https://sid.erda.dk/cgi-sid/ls.py?share_id=CD8UzHCj2Q;current_dir=inputdata/tutorial_analysis;flags=f>`__. We access the files through the dictionary ``OUTPUT_FILES``, which contains the paths to the four output files:
.. ipython:: python
OUTPUT_FILES = {
"snapshot": "4deg.snapshot.nc",
"averages": "4deg.averages.nc",
"overturning": "4deg.overturning.nc",
"energy": "4deg.energy.nc",
}
.. ipython:: python
:suppress:
# actually, we are loading input files through the Veros asset mechanism
import os
from veros import tools
OUTPUT_FILES = tools.get_assets("tutorial_analysis", os.path.join("tutorial", "analysis-assets.json"))
Let's start by importing some packages:
.. ipython:: python
import xarray as xr
import numpy as np
import cmocean
Most of the heavy lifting will be done by ``xarray``, which provides a data structure and API for working with labeled N-dimensional arrays. ``xarray`` datasets automatically keep track of how the values of the underlying arrays map to locations in space and time, which makes them immensely useful for analyzing model output.
Load and manipulate averages
----------------------------
To load our first output file and display its contents, execute the following two commands:
.. ipython:: python
ds_avg = xr.open_dataset(OUTPUT_FILES["averages"])
ds_avg
We can easily access and modify individual data variables and their attributes. To demonstrate this, let's convert the units of the barotropic stream function from :math:`\frac{m^{3}}{s}` to :math:`Sv`:
.. ipython:: python
ds_avg["psi"] = ds_avg.psi / 1e6
ds_avg["psi"].attrs["units"] = "Sv"
Now, we select the last time slice of ``psi`` and plot it:
.. ipython:: python
:okwarning:
@savefig psi.png width=5in
ds_avg["psi"].isel(Time=-1).plot.contourf(levels=50, cmap="cmo.balance")
To compute the decadal mean (over the last 10 years) of zonal-mean ocean salinity, use the following command:
.. ipython:: python
:okwarning:
@savefig salt.png width=5in
(
ds_avg["salt"]
.isel(Time=slice(-10,None))
.mean(dim=("Time", "xt"))
.plot.contourf(levels=50, cmap="cmo.haline")
)
We can also compute the meridional mean temperature. Since the model output is defined on a regular latitude / longitude grid, the grid cell area decreases towards the poles.
To get an accurate mean value, we need to weight each cell by its area:
.. ipython:: python
ds_snap = xr.open_dataset(OUTPUT_FILES["snapshot"])
# use cell area as weights, replace missing values (land) with 0
weights = ds_snap["area_t"].fillna(0)
Now, we can calculate the meridional mean temperature (via ``xarray``'s ``.weighted`` method) and plot it:
.. ipython:: python
:okwarning:
@savefig temp.png width=5in
temp_weighted = (
ds_avg["temp"]
.isel(Time=-1)
.weighted(weights)
.mean(dim="yt")
.plot.contourf(vmin=-2, vmax=22, levels=25, cmap="cmo.thermal")
)
Explore overturning circulation
-------------------------------
.. ipython:: python
ds_ovr = xr.open_dataset(OUTPUT_FILES["overturning"])
ds_ovr
Let"s convert the units of meridional overturning circulation (MOC) from :math:`\frac{m^{3}}{s}` to :math:`Sv` and plot it:
.. ipython:: python
:okwarning:
ds_ovr["vsf_depth"] = ds_ovr.vsf_depth / 1e6
ds_ovr.vsf_depth.attrs["long_name"] = "MOC"
ds_ovr.vsf_depth.attrs["units"] = "Sv"
@savefig vsf_depth_2d.png width=5in
ds_ovr.vsf_depth.isel(Time=-1).plot.contourf(levels=50, cmap="cmo.balance")
Plot time series
----------------
Let's have a look at the ``Time`` coordinate of the dataset:
.. ipython:: python
ds_ovr["Time"].isel(Time=slice(10,))
We can see that it has the type ``np.timedelta64``, which by default has a resolution of nanoseconds. In order to have a more
meaningful x-axis in our figures, we add another coordinate ``years`` by dividing ``Time`` by the length of a year (360 days in Veros):
.. ipython:: python
years = ds_ovr["Time"] / np.timedelta64(360, "D")
ds_ovr = ds_ovr.assign_coords(years=("Time", years.data))
Let's select values of the array by label instead of index location and plot a time series of the overturning minimum between 40°N and 60°N and 550-1800 m depth, with years on the x-axis:
.. ipython:: python
@savefig vsf_depth_min.png width=5in
(
ds_ovr.vsf_depth
.sel(zw=slice(-1810., -550.), yu=slice(40., 60.))
.min(dim=("yu", "zw"))
.plot(x="years")
)
Running Veros on a cluster
==========================
This tutorial walks you through some of the most common challenges that are specific to large, shared architectures like clusters and supercomputers.
In case you are having trouble setting up or running Veros on a cluster, you should first contact your cluster administrator. Otherwise, feel free to `open an issue <https://github.com/team-ocean/veros/issues>`__.
Installation
++++++++++++
Probably the easiest way to try out Veros on a cluster is to, once again, :doc:`use Anaconda </introduction/get-started>`. Since Anaconda is platform independent and does not require elevated permissions, it is the perfect way to try out Veros without too much hassle.
However, **in high-performance contexts, we advise against using Anaconda**. Getting optimal performance requires a software stack that is linked against the correct system libraries, in particular MPI (see also :doc:`/introduction/advanced-installation`). This requires that Python packages that depend on C libraries (such as ``mpi4py``, ``mpi4jax``, ``petsc4py``) are built from source, e.g. via ``pip install --no-binary``.
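For example, to force ``mpi4py`` to be rebuilt from source against the system MPI (a sketch; module loading and compiler setup depend on your cluster)::

    $ pip install --no-binary mpi4py mpi4py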
Usage
+++++
Your cluster's scheduling manager needs to be told exactly how it should run our model, which is usually done by writing a batch script that prepares the environment and states which resources to request. The exact layout of such a script will vary depending on the scheduling manager running on your cluster and on how you chose to install Veros. One possible batch script for the scheduling manager SLURM is presented here:
.. literalinclude:: /_downloads/veros_batch.sh
:language: bash
which is :download:`saved as veros_batch.sh </_downloads/veros_batch.sh>` in the model setup folder and called using ``sbatch``.
This script makes use of the ``veros resubmit`` command and its ``--callback`` option to create a script that automatically re-runs itself in a new process after each successful run (see also :doc:`/reference/cli`). Upon execution, a job is created on one node, using 16 processors, that runs the Veros setup located in :file:`my_setup.py` a total of eight times for 90 days (7,776,000 seconds) each, with identifier ``my_run``. Note that the ``--callback "sbatch veros_batch.sh"`` part of the command is needed to actually create a new job after every run; this prevents the script from being killed after a timeout.
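As a sketch, the core command of such a script might look like this, matching the run described above (treat the exact flags as assumptions and check ``veros resubmit --help``)::

    $ veros resubmit -i my_run -n 8 -l 7776000 \
          -c "mpirun -np 16 veros run my_setup.py -n 4 4" \
          --callback "sbatch veros_batch.sh"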
Making changes in Veros
=======================
Code conventions
----------------
When contributing to Veros, please adhere to the following general guidelines:
- Your first guide should be the surrounding Veros code. Look around, and be consistent with your modifications.
- Unless you have a very good reason not to do so, please stick to `the PEP8 style guide <https://www.python.org/dev/peps/pep-0008/>`_ throughout your code. One exception we make in Veros is in regard to the maximum line length - since numerical operations can take up quite a lot of horizontal space, you may use longer lines if it increases readability.
- In particular, please follow the PEP8 naming conventions, and use meaningful, telling names for your variables, functions, and classes. The variable name :data:`stretching_factor` is infinitely more meaningful than :data:`k`. This is especially important for settings and generic helper functions.
- Document your functions using `Google-style docstrings <http://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_google.html>`_. This is especially important if you are implementing a user-facing API (such as a diagnostic, a setup, or tools that are meant to be called from setups).
- We use ``flake8`` for linting and ``black`` for code formatting. We automatically validate all changes to Veros through a pre-commit hook. To install it, run::
$ pip install pre-commit
$ pre-commit install
After this, black and flake8 will run automatically on every commit.
Distributed memory support
--------------------------
By default, all core routines should support distributed execution via MPI.
In this case, every processor only operates on a chunk of the total data.
By using :py:func:`veros.variables.allocate`, you can make sure that allocated data always has the right shape.
Since none of the processes have access to the global data, you need to take special care during reductions (e.g. ``sum``) and accumulations (e.g. ``cumsum``) along horizontal dimensions.
Use functions from :mod:`veros.distributed` (e.g. :func:`veros.distributed.global_max`) where appropriate.
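A minimal sketch of these two building blocks (treat the exact signatures as assumptions and check the linked API references)::

    from veros.variables import allocate
    from veros.distributed import global_max

    # an array with the correct local (per-process) shape on the horizontal grid
    work = allocate(state.dimensions, ("xt", "yt"))

    # reduce over all processes instead of only the local chunk
    largest = global_max(work.max())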
The dist_safe keyword
+++++++++++++++++++++
If you are not comfortable writing code that is safe for distributed execution, you can use the ``dist_safe`` keyword to :func:`veros.decorators.veros_routine`::
@veros_routine(dist_safe=False, local_variables=["temp"])
def my_function(state):
# this function is now guaranteed to be executed on the main process
vs = state.variables
# since temp is declared as a local variable, we have access to all of the data
vs.temp = update(vs.temp, at[2:-2, 2:-2], np.max(vs.temp))
# this would throw an error, since salt is not in local_variables
# vs.salt = vs.salt * 0
# after execution, the updated contents of vs.temp are scattered to all processes,
# and distributed execution continues
When encountering a ``veros_routine`` that is marked as not safe for distributed execution (``dist_safe=False``), Veros gathers all relevant data from the worker processes,
copies it to the main process, and executes the function there.
This ensures that you can write your code exactly as in the non-distributed case (but it comes with a performance penalty).
Running tests and benchmarks
----------------------------
If you want to make sure that your changes did not break anything, you should run our test suite that compares the results of each subroutine to pyOM2.
To do that, you will need to compile the Python interface of pyOM2 on your machine, and then point the testing suite to the library location, e.g. through::
$ pytest -v . --pyom2-lib /path/to/pyOM2/py_src/pyOM_code.so
from the main folder of the Veros repository.
If you deliberately introduced breaking changes, you can disable them during testing by guarding them with::
from veros import runtime_settings
if not runtime_settings.pyom_compatibility_mode:
# your changes
Veros also provides automated benchmarks in a similar fashion. The benchmarks run some dummy problems with varying problem sizes and all available computational backends: ``numpy``, ``numpy-mpi``, ``jax``, ``jax-mpi``, ``jax-gpu``, ``fortran`` (pyOM2), and ``fortran-mpi`` (parallel pyOM2). For options and further information run::
$ python run_benchmarks.py --help
from the repository root.
Performance tweaks
------------------
If your changes to Veros turn out to have a negative effect on the runtime of the model, there are several ways to investigate and solve performance problems:
- Run your model with the ``-v debug``, ``-v trace``, and / or ``--profile-mode`` options to get additional debugging output (such as timings for each time step, and a timing summary after the run has finished).
- You should avoid explicit loops over arrays at all costs and operate on whole arrays at once. If you cannot avoid a loop, you can use :func:`veros.core.operators.for_loop`, which is reasonably efficient in JAX (see the sketch after this list).
- If you are still having trouble, don't hesitate to ask for help (e.g. `on GitHub <https://github.com/team-ocean/veros/issues>`_).
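As a hedged sketch of such a loop, assuming that :func:`veros.core.operators.for_loop` follows the ``jax.lax.fori_loop`` calling convention ``for_loop(lower, upper, body, init)`` (an assumption; check the API reference)::

    from veros.core.operators import for_loop

    def body(i, carry):
        # one sequential iteration; carry holds the running sum
        return carry + 1.0

    # accumulate over 10 iterations instead of a Python for-loop
    total = for_loop(0, 10, body, 0.0)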
Running Veros on ERDA
=====================
ERDA
----
The Electronic Research Data Archive (`ERDA <https://www.erda.dk>`__) at the University of Copenhagen (`KU/UCPH <https://www.ku.dk/english/>`__) is meant for storing, sharing, analyzing and archiving research data.
ERDA delivers safe central storage space for private and shared files, interactive analysis tools, and data archiving for safe-keeping and publishing.
.. _erda-jupyter:
Getting started with ERDA's Jupyter server
------------------------------------------
ERDA integrates a set of `Jupyter <https://jupyter.org>`__ services, which can be used to easily perform a wide range of data analysis and visualization tasks directly on your ERDA data.
The system relies on the `JupyterLab <https://jupyterlab.readthedocs.io/en/stable/>`__ web interface to provide interactive Python notebooks or Linux command line access (Terminal) with direct and efficient access to your ERDA home directory.
To get access to these services, ERDA provides a Jupyter button in the navigation menu.
.. figure:: /_images/erda/erda_welcome.png
:width: 100%
:align: center
ERDA navigation menu.
Upon clicking it, the page to **Select a Jupyter Service** appears.
On this page you are presented with a set of horizontal service tabs at the top, and each tab presents and describes the individual service and how it is configured in the **Service Description**.
.. note::
ERDA offers two services, DAG and MODI. MODI offers more powerful hardware, but you have to use a scheduling system to use it (:ref:`see below <modi>`). If you are unsure which one to use, start with DAG.
Below the description there is a **Start SERVICE** button, which you can click to open a connection to that particular service in a new web browser tab or window.
.. figure:: /_images/erda/erda_dag_spawn.png
:width: 100%
:align: center
Select a Jupyter Service menu.
By default, it will take you to your personal home page on the **Jupyter service** (a hosted version of JupyterHub), as shown below.
JupyterHub is the standard infrastructure for providing individual, isolated Jupyter notebook containers to multiple users sharing a pool of compute nodes.
.. figure:: /_images/erda/erda_jservice_homepage.png
:width: 100%
:align: center
Top fragment of Jupyter service home page.
After clicking **Start My Server**, the site will give you the option to choose which notebook image you want to spawn.
Select **HPC Notebook** as shown below and press the **Start** button.
.. figure:: /_images/erda/erda_dag_image.png
:width: 100%
:align: center
Top fragment of Jupyter service home page with selected HPC Notebook image.
This will spawn the **HPC Notebook** image and redirect you straight to the JupyterLab interface as shown below.
The JupyterLab interface is the same in all available Services (DAG and MODI).
.. figure:: /_images/erda/erda_dag_terminal.png
:width: 100%
:align: center
JupyterLab interface on DAG.
Follow the Veros installation instructions below with respect to the selected service.
Data Analysis Gateway (DAG)
+++++++++++++++++++++++++++
To install Veros on a DAG instance, do the following after launching the **Terminal**:
1. Clone the Veros repository:
.. exec::
from veros import __version__ as veros_version
if "0+untagged" in veros_version:
veros_version = "main"
else:
veros_version = f"v{veros_version}"
if "+" in veros_version:
veros_version, _ = veros_version.split("+")
print(".. code-block::\n")
print(" $ cd ~/modi_mount")
print(f" $ git clone https://github.com/team-ocean/veros.git -b {veros_version}")
(or `any other version of Veros <https://github.com/team-ocean/veros/releases>`__).
2. Change the current directory to the Veros root directory::
$ cd veros
3. Create a new conda environment for Veros, and install all relevant dependencies by running::
$ conda env create -f conda-environment.yml
4. To use Veros, activate your new conda environment via::
$ conda activate veros
5. Make a folder for your Veros setups, and switch to it::
$ mkdir ~/vs-setups
$ cd ~/vs-setups
6. Copy the :doc:`global 4deg </reference/setups/4deg>` model template from the :doc:`setup gallery </reference/setup-gallery>`::
$ veros copy-setup global_4deg
7. Change the current directory to the setup directory::
$ cd global_4deg/
.. _erda-jupyter-editor:
8. You can modify model parameters with the **JupyterLab editor**. To do that, navigate to your setup directory in the JupyterLab file browser (left panel) of the **JupyterLab interface** and double-click the :file:`global_4deg.py` file (circled in red), as in the figure below.
.. figure:: /_images/erda/erda_dag_edit_file.png
:width: 100%
:align: center
JupyterLab editor on DAG.
Press :command:`CTRL+S` (:command:`CMD+S` on macOS) to save your changes, then close the file by pressing the cross button (circled in red).
9. Run the model in serial mode on one CPU core::
$ veros run global_4deg.py
10. In case you want to run Veros in parallel mode, you need to reinstall the HDF5 library with parallel I/O support::
$ conda install "h5py=*=mpi_mpich*" --force-reinstall
11. To run the model in parallel mode on 4 CPU cores execute::
$ mpirun -np 4 veros run global_4deg.py -n 2 2
.. _modi:
MPI Oriented Development and Investigation (MODI)
+++++++++++++++++++++++++++++++++++++++++++++++++
In order to install Veros with the `veros-bgc biogeochemistry plugin <https://veros-bgc.readthedocs.io/en/latest/>`__ start an **Ocean HPC Notebook** from the **Jupyter service** home page following :ref:`the instructions above <erda-jupyter>`.
1. Launch the **Terminal**, change your current directory to ~/modi_mount and clone the Veros repository:
.. exec::
from veros import __version__ as veros_version
if "0+untagged" in veros_version:
veros_version = "main"
else:
veros_version = f"v{veros_version}"
if "+" in veros_version:
veros_version, _ = veros_version.split("+")
print(".. code-block::\n")
print(" $ cd ~/modi_mount")
print(f" $ git clone https://github.com/team-ocean/veros.git -b {veros_version}")
2. Create a new conda environment for Veros::
$ conda create --prefix ~/modi_mount/conda-env-veros -y python=3.11
3. To use the new environment, activate it via::
$ conda activate ~/modi_mount/conda-env-veros
4. Install Veros, its biogeochemistry plugin and all relevant dependencies by running::
$ pip3 install ./veros
$ pip3 install veros-bgc
5. Copy the ``bgc_global_4deg`` model template from the `setup gallery <https://veros-bgc.readthedocs.io/en/latest/reference/setup-gallery.html>`__::
$ veros copy-setup bgc_global_4deg
6. Change your current directory in the JupyterLab file browser (left panel) of the **JupyterLab interface** to ~/modi_mount by double-clicking the modi_mount folder (circled in red).
.. figure:: /_images/erda/erda_modi_terminal.png
:width: 100%
:align: center
JupyterLab interface on MODI.
7. Download the :download:`modi_veros_batch.sh </_downloads/modi_veros_batch.sh>` and :download:`modi_veros_run.sh </_downloads/modi_veros_run.sh>` scripts to your PC/laptop and upload them to MODI (press the arrow button circled in red in the figure above).
8. Navigate to your setup directory in the JupyterLab file browser and modify (if needed) the model parameters in the :file:`bgc_global_four_degree.py` file with the **JupyterLab editor** following :ref:`the instructions above <erda-jupyter-editor>`.
9. To run your BGC setup submit a job to MODI's `Slurm <https://slurm.schedmd.com/quickstart.html>`__ queue::
$ sbatch ./modi_veros_batch.sh ~/modi_mount/bgc_global_4deg/bgc_global_four_degree.py
.. note::
It's particularly important to run ``sbatch`` commands from the ~/modi_mount directory for jobs to succeed.
Slurm is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for large and small Linux clusters.
There are a couple of basic Slurm commands that can be used to get an overview of the MODI cluster and manage your jobs, such as:
**sinfo** outputs the available partitions (modi_devel, modi_short, modi_long), their current availability (e.g. up or down), the maximum time a job can run before it is automatically terminated, the number of associated nodes and their individual state ::
$ spj483_ku_dk@848874c4e509:~$ sinfo
PARTITION AVAIL TIMELIMIT NODES STATE NODELIST
modi_devel* up 15:00 1 mix modi004
modi_devel* up 15:00 7 idle modi[000-003,005-007]
modi_short up 2-00:00:00 1 mix modi004
modi_short up 2-00:00:00 7 idle modi[000-003,005-007]
modi_long up 7-00:00:00 1 mix modi004
modi_long up 7-00:00:00 7 idle modi[000-003,005-007]
**sbatch** is used to submit a job (batch) script for later execution. The script will typically contain one or more srun commands to launch parallel tasks ::
$ spj483_ku_dk@848874c4e509:~/modi_mount$ sbatch submit.sh
Submitted batch job 10030
where ``10030`` is the job ID ({JOBID}).
**squeue** shows queued jobs and their status, e.g. pending (PD) or running (R) ::
$ spj483_ku_dk@848874c4e509:~$ squeue
JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)
10030 modi_shor veros_bg spj483_k R 0:09 1 modi005
**scancel** cancels a job allocation to release a node ::
$ scancel 10030
# Unique model identifier
modelCode = 740
# Model name
modelName=veros_jax
# Model description
modelDescription=ocean simulation
# Application scenarios
appScenario=inference,ocean simulation,meteorology,energy,ocean
# Framework type
frameType=jax