The mesoscale Hydrological Model
Getting Started

For compile and installation instructions see: https://mhm-ufz.org/guides/.

Test mHM on Example domain

The mHM distribution usually comes with two example test domains located in


The directory provides input data for the test basin and output examples. Detailed information about this test basin can be found in the chapter The details about the test basin. All parameters and paths are already set to the test case by default, so you can start the simulation directly with the command


This will run mHM on two basins simultaneously and create output files for discharge and interception in test/output_b*. The chapter Visualizing Model Output provides further information on visualising mHM results.

Run your own Simulation

The first step mHM takes at runtime is reading the three configuration files:

  • mhm.nml
  • mhm_output.nml
  • mhm_parameters.nml

When editing these files, we recommend using an editor with Fortran syntax highlighting (emacs, for example). This also helps guarantee that the file ends with a newline character, which the Fortran language standard requires. If you write these files with external programs, make sure the output complies with the Fortran standard; in particular, make sure the file ends with a newline.
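For orientation, a Fortran namelist block has the following shape (the group and variable names below are purely illustrative, not actual mHM settings):

```fortran
! Minimal sketch of Fortran namelist syntax (illustrative names only).
&example_settings
    nBasins = 2            ! an integer setting
    dir_out = "output/"    ! a string setting; paths may be relative or absolute
/
```

Each group starts with `&name`, ends with a bare `/`, and the file as a whole must terminate with a newline.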

Main Configuration: Paths, Periods, Switches

The file mhm.nml contains the main configuration for running mHM in your catchments. Since the comments explain each individual setting, this section only roughly describes the structure of the file. Note that paths can be relative, absolute, or even symbolic links.

  • Common Settings: Defines output path, input look-up tables and input data format (all "nc" or all "bin") for all basins in your simulation.
  • Basin-wise paths: Set paths for input and output. Create a block for each basin and remove blocks you do not need.
  • Resolution: Hourly or daily time step. The hydrologic resolution should be factorisable with the input resolutions, i.e. they should be integer multiples of one another (e.g. meteo: 4000, hydro: 2000, morph: 100). The routing resolution determines the velocity of water from cell to cell (keep it greater than the hydro resolution). If you change the routing resolution, remember to recalibrate the model (see Calibration and Optimization).
  • Restart: mHM provides the option to save the whole model configuration (incl. states, fluxes, routing network, and model parameters) at the end of the simulation period. mHM can then restart a new simulation from this configuration, which reduces computational time because the model setup (e.g. parameter fields and routing network) does not have to be recreated.
  • Periods: Your actual period of interest should follow a warm-up period, during which the model dynamics can settle properly. Remember to extend your input data to cover that period, too.
  • Soil Layers: Soil parameters (not properties) are upscaled vertically in mHM. In this section you specify the number of horizons and their depths for the hydrological processing (a feature you will not find in other models, for example).
  • Land Cover: You can provide different land cover files for different years.
  • LAI data: Switch for choosing LAI option. The user can either select to run mHM with monthly look-up-table LAI values defined for 10 land classes or choose to run mHM using gridded LAI input data (e.g., MODIS). Gridded LAI data must be provided on a daily time-step at the Level-0 resolution.
  • Process Switches:
    • Process case 5 - potential evapotranspiration (PET):
      1. PET is input (processCase(5)=0)
      2. PET after Hargreaves-Samani (processCase(5)=1)
      3. PET after Priestley-Taylor (processCase(5)=2)
      4. PET after Penman-Monteith (processCase(5)=3)
    • Process case 8 - routing can be activated (=1) or deactivated (=0).
  • Annual Cycles: The pan-evaporation values apply in impervious regions only. The meteorological forcing table disaggregates daily input data to hourly values.
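As a sketch of how the process switches above appear in mhm.nml (the group name and layout here are illustrative; check the comments in your own mhm.nml for the exact form):

```fortran
! Illustrative sketch of process switches in mhm.nml.
&processSelection
    processCase(5) = 1   ! PET after Hargreaves-Samani (0 = PET is input)
    processCase(8) = 1   ! routing activated (0 = deactivated)
/
```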

Output Configuration: Time Steps, States, Fluxes

The file mhm_output.nml regulates how often (e.g. timeStep_model_outputs = 24) and which variables (fluxes and states) should be written to the final netcdf file [OUTPUT_DIRECTORY]/mHM_Fluxes_States.nc. We recommend switching on only the variables of actual interest, because the file size grows considerably with the number of variables it contains. During calibration (see Calibration and Optimization) no output file is written.
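A sketch of what such a configuration looks like (only timeStep_model_outputs is taken from the text above; the group name and flag names here are illustrative, so consult the comments in your mhm_output.nml for the real switches):

```fortran
! Illustrative sketch of mhm_output.nml.
&NLoutputResults
    timeStep_model_outputs = 24    ! write output every 24 hours
    outputFlxState(1) = .true.     ! switch on a variable of interest
    outputFlxState(2) = .false.    ! keep unneeded variables off to limit file size
/
```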

Regionalised Parameters: Initial Values and Ranges

The file mhm_parameters.nml contains all global parameters and their initial values. They have been determined by calibration in German basins and appear to be transferable to other catchments. If your catchment or routing resolution is very different, these parameters should be recalibrated (see section Calibration and Optimization).

Calibration and Optimization

By default, mHM runs without considering observed discharge data. It will use the default values of the global regionalised parameters \(\gamma\), defined in mhm_parameters.nml. In order to fit discharge to observed data, mHM has to be recalibrated. This optimises the \(\gamma\) parameters such that mHM arrives at a best fit for discharge. The optimization procedure runs mHM many times, sampling parameters within their given ranges for each iteration, until the objective function converges to a confident best fit.

The Optimization Routines

mHM comes with four available optimization methods:

  • MCMC: The Monte Carlo Markov Chain sampling of parameter sets is recommended for estimation of parameter uncertainties. Intermediate results are written to mcmc_tmp_parasets.nc .
  • DDS: The Dynamically Dimensioned Search is an optimization routine known to improve the objective within a small number of iterations. However, the result of DDS is not necessarily close to the global optimum. Intermediate results are written to dds_results.out .
  • Simulated Annealing: Simulated Annealing is a global optimization algorithm. SA is known to require a large number of iterations before convergence (e.g. 100 times more than DDS), but finds parameter sets closer to the global minimum than DDS. Intermediate results are written to anneal_results.out .
  • SCE: The Shuffled Complex Evolution is a global optimization algorithm which is based on the shuffling of parameter complexes. It needs more iterations compared to the DDS (e.g. 20 times more), but less compared to Simulated Annealing. The increasing computational effort (i.e. iterations) leads to more reliable estimation of the global optimum compared to DDS. Intermediate results are written to sce_results.out .

Objective functions currently implemented in mHM are:

  • NSE: Nash-Sutcliffe Efficiency, assuming constant plus linearly varying relative errors. Recommended for fitting high flows.
  • lnNSE: logarithmic Nash-Sutcliffe Efficiency, recommended for fitting low flows.
  • 0.5*(NSE+lnNSE): weights both NSE and lnNSE by 50%, roughly fits high and low flows.
  • Likelihood: The confidence bands are probability density functions that capture variable errors, recommended for hydrological discharge.
  • KGE: Kling Gupta model efficiency: combined measure for variability, bias and correlation
  • PD: Pattern dissimilarity (PD) of spatially distributed soil moisture
  • ETC: Combination of other multi-objective functions (streamflow, TWS anomaly, evapotranspiration, soil moisture), see mhm.nml for details
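To make the main discharge measures concrete, the standard definitions of NSE, lnNSE and KGE can be sketched in Python. This is an illustration of the formulas only, not mHM's Fortran implementation, and the function names are my own:

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe Efficiency: 1 is a perfect fit."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def ln_nse(obs, sim):
    """NSE on log-transformed discharge, emphasising low flows."""
    return nse(np.log(obs), np.log(sim))

def kge(obs, sim):
    """Kling-Gupta Efficiency: correlation, bias and variability in one measure."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    r = np.corrcoef(obs, sim)[0, 1]        # linear correlation
    beta = sim.mean() / obs.mean()         # bias ratio
    alpha = sim.std() / obs.std()          # variability ratio
    return 1.0 - np.sqrt((r - 1) ** 2 + (beta - 1) ** 2 + (alpha - 1) ** 2)

# Toy discharge series (hypothetical values, for illustration only).
obs = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
sim = np.array([1.1, 1.9, 3.2, 3.8, 5.1])
combined = 0.5 * (nse(obs, sim) + ln_nse(obs, sim))  # weights high and low flows equally
```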

Calibration Settings for mHM

The following settings in mhm.nml are required for calibrating mHM:

  • optimize = .true.
  • opti_method = 1
    Choose methods: 0 (MCMC), 1 (DDS), 2 (SA), 3 (SCE)
  • opti_function = 1
    Choose objective functions: 1 (NSE), 2 (lnNSE), 3 (50% NSE+lnNSE), 4 (Likelihood)
  • nIterations = 40000
    Maximum number of iterations (mHM runs), optimisers will exit earlier if convergence criteria are reached.
  • seed = -9
    The default value -9 will take a seed number from the system clock. Be warned that simulations might not be random if you define a positive seed here.

More specific settings are offered at the very end of the file mhm.nml.

In mhm_parameters.nml you will find the initial values and ranges from which the optimiser samples parameters. Most ranges are very sensitive and have been determined by detailed sensitivity analysis, so we do not recommend changing them. With the FLAG column you can choose which parameters may be optimised. The parameter gain_loss_GWreservoir_karstic is not meant to be optimised.

Please note that optimization runs take a long time and may demand a large amount of hardware resources. We recommend submitting such jobs to a cluster computing system.

Final Calibration Results

During calibration, mHM does not write out fluxes, states or discharge, so you need to perform a final run with the calibrated parameters. When the simulations have finished, the optimal parameter set is written to


You can run mHM directly with the generated namelist (by renaming it) or incorporate the results from FinalParams.out as new initial values into mhm_parameters.nml. Two scripts help you with that:

  • pre-proc/create_multiple_mhm_parameter_nml.sh
    Useful for many optimisation runs (many lines in FinalParams.out)
  • post-proc/opt2nml-params.pl
    Useful for analysing where the optimisation arrived. Given two arguments, pathto/FinalParams.out and pathto/mhm_parameters.nml, it will (1) create a new mhm_parameters.nml.opt filled with the new values and (2) display the new parameters within their ranges, warning if any lie within 1% of a range boundary.

As soon as the new parameters are set, deactivate the optimize switch in mhm.nml and rerun mHM once in order to obtain the final optimised output.

You should also have a look at the parameter evolution (e.g. sce_results.out) and the final results. If any parameter sticks to the very end of its allowed range, the result is not optimal and you may run into serious trouble. Possible reasons include unsuitable parameter ranges (they were optimised mathematically, but for a basin or resolution not comparable to yours) or bad input data.