Modelling frameworks


In addition to spatial functions, a supporting framework is required to build spatio-temporal models. This section introduces two frameworks that ease the development of static and dynamic models.

The script below is the Python version of the hydrological runoff model shown in the demo of the PCRaster distribution. To run the script, change to the demo directory (demo/deterministic) and execute it with Python.

#!/usr/bin/env python
# -*- coding: utf-8 -*-
# model for simulation of runoff
# 28 timesteps of 6 hours => modelling time one week

from pcraster import *
from pcraster.framework import *

class RunoffModel(DynamicModel):
  def __init__(self, cloneMap):
    DynamicModel.__init__(self)
    setclone(cloneMap)

  def initial(self):
    # coverage of meteorological stations for the whole area
    self.rainZones = spreadzone("", scalar(0), scalar(1))

    # create an infiltration capacity map (mm/6 hours), based on the
    # soil map
    self.infiltrationCapacity = lookupscalar("infilcap.tbl", "")
    self.report(self.infiltrationCapacity, "infilcap")

    # generate the local drain direction map on basis of the elevation map
    self.ldd = lddcreate("", 1e31, 1e31, 1e31, 1e31)
    self.report(self.ldd, "ldd")

    # initialise timeoutput
    self.runoffTss = TimeoutputTimeseries("runoff", self, "", noHeader=False)

  def dynamic(self):
    # calculate and report maps with rainfall at each timestep (mm/6 hours)
    surfaceWater = timeinputscalar("rain.tss", self.rainZones)
    self.report(surfaceWater, "rainfall")

    # compute both runoff and actual infiltration
    runoff = accuthresholdflux(self.ldd, surfaceWater,
        self.infiltrationCapacity)
    infiltration = accuthresholdstate(self.ldd, surfaceWater,
        self.infiltrationCapacity)

    # output runoff, converted to m3/s, at each timestep
    logRunOff = runoff / scalar(216000)
    self.report(logRunOff, "logrunof")
    # sampling timeseries for given locations
    self.runoffTss.sample(logRunOff)

myModel = RunoffModel("")
dynModelFw = DynamicFramework(myModel, lastTimeStep=28, firstTimestep=1)

Deterministic Modelling

Static Modelling Framework

This section introduces the usage of the static modelling framework. A static model is described by:

\[Z = f(Z, I, P)\]

with \(Z\) the model state variables, \(I\) the inputs, \(P\) the parameters, and \(f\) defining the model structure. The static modelling framework is used to build models without temporal dependencies, like calculating the distances between a number of gauging stations.

Static model template

The following script shows the minimal user class that fulfills the requirements for the static framework:

from pcraster.framework import *

class UserModel(StaticModel):
  def __init__(self):
    StaticModel.__init__(self)

  def initial(self):
    pass
In the class of the user model the following method must be implemented:

initial()

This method contains the static section of the user model.

The model class can be executed with the static framework as follows:


import userModel
from pcraster.framework import *

myModel = userModel.UserModel()
staticModel = StaticFramework(myModel)

To run the model, execute the script with Python.

The script creates an instance of the user model which is passed to the static framework; calling executes the initial section of the user model.


The following example shows the static version of the demo script. PCRaster operations can be used in the same way as in scripts without the modelling framework:

#!/usr/bin/env python
# -*- coding: utf-8 -*-
# static model

from pcraster import *
from pcraster.framework import *

class RunoffModel(StaticModel):
  def __init__(self, cloneMap):
    StaticModel.__init__(self)
    setclone(cloneMap)

  def initial(self):
    # coverage of meteorological stations for the whole area
    self.rainZones = spreadzone("", scalar(0), scalar(1))

    # create an infiltration capacity map (mm/6 hours), based on the
    # soil map
    self.infiltrationCapacity = lookupscalar("infilcap.tbl", "")
    self.report(self.infiltrationCapacity, "infilcap")

    # generate the local drain direction map on basis of the elevation map
    self.ldd = lddcreate("", 1e31, 1e31, 1e31, 1e31)
    self.report(self.ldd, "ldd")

myModel = RunoffModel("")
stModelFw = StaticFramework(myModel)

Setting the map attributes (e.g. the number of rows and columns, and the cell size) is done by using setclone in the constructor of the model class.

PCRaster operations can take data from disk as input arguments, as is done in the spreadzone operation.

Note that the framework provides an additional report operation, self.report(), whose behaviour depends on the method in which it is used. It writes the data to disk with a filename conforming to the PCRaster conventions, generated from the second argument: in the static framework a ".map" suffix is appended (the name of the local drain direction map will become "ldd.map"), and in the dynamic framework a time step extension is appended. Storing data with a specific name or at a specific location is done using the ordinary report operation instead of self.report().
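The filename generation can be illustrated with a small helper. This is a simplified sketch, not part of PCRaster; it assumes base names of at most eight characters and fewer than 1000 time steps:

```python
def frameworkFilename(name, timestep=None):
    # static framework: append the ".map" suffix
    if timestep is None:
        return name + ".map"
    # dynamic framework: pad the base name to 8 characters and append
    # the time step as a 3-digit extension (DOS-style 8.3 name)
    return "%s.%03d" % (name.ljust(8, "0"), timestep)

print(frameworkFilename("ldd"))           # ldd.map
print(frameworkFilename("rain", 1))       # rain0000.001
print(frameworkFilename("logrunof", 28))  # logrunof.028
```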

Dynamic Modelling Framework

This section describes the usage of the dynamic modelling framework. In addition to spatial processes dynamic models include a temporal component. Simulating dynamic behaviour is done by iterating a dynamic section over a set of timesteps.

The state of a model variable at time \(t\) is defined by its state at \(t-1\) and a function \(f\) (Karssenberg2005a):

\[Z_{1..m}(t) = f(Z_{1..m}(t-1), I_{1..n}(t), P_{1..l})\]

The model state variables \(Z_{1..m}\) belong to coupled processes and have feedback in time. \(I_{1..n}\) denote the inputs to the model, \(P_{1..l}\) are model parameters, and \(f\) transfers the model state from time step \(t-1\) to \(t\).

The dynamic modelling framework executes \(f\) using the following scheme (in pseudo code):

run initial
for each timestep:
  run dynamic

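In plain Python this control flow boils down to roughly the following. This is a sketch of the scheme only, not the actual framework implementation, and the currentStep attribute is a hypothetical name:

```python
class DynamicFrameworkSketch:
    """Runs a model's initial section once, then its dynamic section per step."""
    def __init__(self, model, lastTimeStep, firstTimestep=1):
        self.model = model
        self.firstTimestep = firstTimestep
        self.lastTimeStep = lastTimeStep

    def run(self):
        self.model.initial()
        for step in range(self.firstTimestep, self.lastTimeStep + 1):
            self.model.currentStep = step  # hypothetical bookkeeping attribute
            self.model.dynamic()

class CounterModel:
    """Toy model: counts how often its dynamic section runs."""
    def __init__(self):
        self.value = 0
    def initial(self):
        self.value = 0
    def dynamic(self):
        self.value += 1

model = CounterModel()
DynamicFrameworkSketch(model, lastTimeStep=28).run()
print(model.value)  # 28
```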
Dynamic model template

The following script shows the minimal user class that fulfills the requirements for the dynamic framework:

from pcraster.framework import *

class UserModel(DynamicModel):
  def __init__(self):
    DynamicModel.__init__(self)

  def initial(self):
    pass

  def dynamic(self):
    pass
In the class of the user model the following methods must be implemented:

initial()

This method contains the code to initialise variables used in the model.

dynamic()

This method contains the implementation of the dynamic section of the user model.

Applying the model to the dynamic framework is done by:


import userModel
from pcraster.framework import *

myModel = userModel.UserModel()
dynModel = DynamicFramework(myModel, 50)

To run the model, execute the script with Python.

The script creates an instance of the user model which is passed to the dynamic framework; the number of time steps is given as second argument to the framework constructor, and executes the model.


A script for a dynamic model is given in the quick start section. The model contains two main sections:

The initial section contains operations to initialise the state of the model at time step 0. Operations included in this section are executed once.

The dynamic section contains the operations that are executed consecutively each time step. Results of a previous time step can be used as input for the current time step. The dynamic section is executed a specified number of timesteps: 28 times in the demo script.

The initial section of the demo script is the same as in the static version. The dynamic section holds the operations for in- and output with temporal dependencies and the model processes. For time series input data the timeinputscalar operation assigns precipitation data for each time step to the surfaceWater variable. In case rain0000.001 to rain0000.028 hold the rainfall for each timestep instead, you can replace the timeinputscalar operation by surfaceWater = self.readmap("rain").

Output data is now reported as a stack of maps to disk. The self.report function will store the runoff with filenames logrunof.001 up to logrunof.028.

For additional framework methods that can be used, for example, in conditional expressions, like self.currentTimeStep(), we refer to the code reference.

Stochastic Modelling and Data Assimilation

If a model includes probabilistic rules, or input variables and parameters that are given as spatial probability distributions, the model becomes stochastic (Karssenberg2005b). The aim of stochastic modelling is to derive these probability distributions, which is done in the framework by Monte Carlo simulation.

The framework provides three different methods to support stochastic modelling and data assimilation: Monte Carlo simulation (e.g. Doucet2000, Doucet2001), particle filter (e.g. Xiong2006, Weerts2006, Arulampalam2002) and the Ensemble Kalman filter (e.g. Evensen1994, Simon2006).

Monte Carlo simulations

Monte Carlo simulations solve the function \(f\) for a large number of samples and compute statistics on the ensemble results. The framework supports this scheme by executing the following methods (in pseudo code):

run premcloop
for each sample:
  run initial
  if dynamic model:
    for each timestep:
      run dynamic
run postmcloop

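As a plain-Python sketch of this control flow (assumed structure for illustration, not the framework's actual code):

```python
class MonteCarloFrameworkSketch:
    """Runs premcloop once, then initial (and dynamic) per sample, then postmcloop."""
    def __init__(self, model, nrSamples, nrTimeSteps=0):
        self.model = model
        self.nrSamples = nrSamples
        self.nrTimeSteps = nrTimeSteps  # 0 means a static model

    def run(self):
        self.model.premcloop()
        for sample in range(1, self.nrSamples + 1):
            self.model.initial()
            for step in range(1, self.nrTimeSteps + 1):
                self.model.dynamic()
        self.model.postmcloop()

class CountingModel:
    """Toy model: counts how often each section runs."""
    def __init__(self):
        self.initials = self.dynamics = 0
    def premcloop(self): pass
    def initial(self): self.initials += 1
    def dynamic(self): self.dynamics += 1
    def postmcloop(self): pass

m = CountingModel()
MonteCarloFrameworkSketch(m, nrSamples=100).run()
print(m.initials, m.dynamics)  # 100 0
```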

The following additional methods must be implemented to use the framework in Monte Carlo mode:

premcloop()

The premcloop can be used to calculate input parameters or variables that are both constant and deterministic. It is executed once at the beginning of the model run; the calculated variables can be used in all samples and time steps.

postmcloop()

The postmcloop is executed after the last sample run is finished. It is used to calculate statistics of the ensemble, like variance or quantiles.

The initial and dynamic sections (the latter, in the case of a dynamic model, for each time step) are executed for each Monte Carlo sample.

The framework generates sample directories named 1, 2, 3, …, N, with N the number of Monte Carlo samples. The methods self.readmap() and self.report() now read the data from and store it to the corresponding sample directory.

Static models

The Python script below shows a static model which is executed within the Monte Carlo framework. The model simulates vegetation growth; 100 realisations are executed (Karssenberg2005b).

from pcraster import *
from pcraster.framework import *

class VegetationGrowthModel(StaticModel, MonteCarloModel):
  def __init__(self):
    StaticModel.__init__(self)
    MonteCarloModel.__init__(self)
    setclone("")

  def premcloop(self):
    pass

  def initial(self):
    # spreading time for peat (years)
    peatYears = 0.1 + mapnormal() * 0.001
    # spreading time for other soil types (years)
    otherYears = 0.5 + mapnormal() * 0.02
    # number of years needed to move the vegetation front 1 m
    years = ifthenelse("", peatYears, otherYears)
    # time to colonization (yr)
    colTime = spread("", years)
    # colonized after 50 years?
    col = ifthen(colTime < 50, boolean(1))
    self.report(col, "col")

  def postmcloop(self):
    names = ["col"]
    mcaveragevariance(names, "", "")

myModel = VegetationGrowthModel()
staticModel = StaticFramework(myModel)
mcModel = MonteCarloFramework(staticModel, 100)

First, maps are created containing for each cell the time (in years) needed for the plant to spread 1 m. The value and the error associated with this input parameter depend on the soil type. By using the function mapnormal, each sample will generate an independent realisation of the input parameters peatYears and otherYears. This information is used to calculate a total spreading time map from the locations currently occupied by the plant. Finally, a Boolean map is generated containing all cells colonised within 50 years.

Dynamic models

The Python script below shows a dynamic model which is executed within the Monte Carlo framework. The model simulates snow thickness and discharge for 180 time steps (Karssenberg2009); 10 realisations are executed.

#!/usr/bin/env python
# -*- coding: utf-8 -*-

from pcraster import *
from pcraster.framework import *

class SnowModel(DynamicModel, MonteCarloModel):
  def __init__(self):
    DynamicModel.__init__(self)
    MonteCarloModel.__init__(self)
    setclone("")

  def premcloop(self):
    dem = self.readmap("dem")
    self.ldd = lddcreate(dem, 1e31, 1e31, 1e31, 1e31)
    elevationMeteoStation = scalar(2058.1)
    self.elevationAboveMeteoStation = dem - elevationMeteoStation
    self.degreeDayFactor = 0.01

  def initial(self):
    self.snow = scalar(0)
    self.temperatureLapseRate = 0.005 + (mapnormal() * 0.001)
    self.report(self.temperatureLapseRate, "lapse")
    self.temperatureCorrection = self.elevationAboveMeteoStation\
         * self.temperatureLapseRate

  def dynamic(self):
    temperatureObserved = self.readDeterministic("tavgo")
    precipitationObserved = self.readDeterministic("pr")
    precipitation = max(0, precipitationObserved * (mapnormal() * 0.2 + 1.0))
    temperature = temperatureObserved - self.temperatureCorrection
    snowFall = ifthenelse(temperature < 0, precipitation, 0)
    self.snow = self.snow + snowFall
    potentialMelt = ifthenelse(temperature > 0, temperature\
         * self.degreeDayFactor, 0)
    actualMelt = min(self.snow, potentialMelt)
    self.snow = max(0, self.snow - actualMelt)
    rain = ifthenelse(temperature >= 0, precipitation, 0)
    discharge = accuflux(self.ldd, actualMelt + rain)
    self.report(self.snow, "s")
    self.report(discharge, "q")

  def postmcloop(self):
    names = ["s", "q"]
    mcaveragevariance(names, self.sampleNumbers(), self.timeSteps())
    percentiles = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
    mcpercentiles(names, percentiles, self.sampleNumbers(), self.timeSteps())

myModel = SnowModel()
dynamicModel = DynamicFramework(myModel, lastTimeStep=180, firstTimestep=1)
mcModel = MonteCarloFramework(dynamicModel, nrSamples=10)

To run the model, execute the script with Python.


In the premcloop, the local drain direction map is created from the digital elevation map. As the local drain direction map is used in the initial and dynamic sections later on, it is defined as a member variable of the SnowModel class. If maps are reported in the premcloop they are written into the current working directory.

The initial section initialises for each realisation the state variable snow with an initial value of zero. Also, a realisation of the lapse rate is drawn from the probability distribution \(0.005 + norm(0, 0.001)\). Each lapse rate is stored with self.report in the corresponding sample subdirectory.

The dynamic section calculates the snow height and the discharge. The temperature and precipitation values are obtained from disk with the self.readDeterministic operation. Random noise is added to the deterministic precipitation values in order to create independent realisations. At temperatures below zero precipitation falls as snow, which builds up the snow pack. Snowmelt and rain compose the discharge in the catchment.

The postmcloop calculates statistics over all ensemble members. For snow and discharge, average and variance values are calculated. Furthermore the percentiles are calculated for both variables. The resulting maps of the postmcloop calculations are written to the current working directory.

Particle filter

The particle filter is a method to improve the model predictions. Observed values are hereby used at specific time steps to determine the best performing samples. The prediction performance of the ensemble is improved by continuing the better performing samples and omitting the badly performing ones.

The particle filter approximates the posterior probability density function from the weights of the Monte Carlo samples (Karssenberg2009):

\[p(x_t \mid Y_t) \approx \sum_{n=1}^N p_t^{(n)} \delta (x_t - x_t^{(n)})\]

with \(\delta\) the Dirac delta function, \(Y_t\) the past and current observations at time \(t\), and \(x_t\) a vector of model components for which observations are available.

For a Gaussian measurement error the weights are proportional to (Simon2006):

(1)\[a_t^{(n)} = \exp(-[y_t - h_t(x_t^{(n)})]^T R_t^{-1} [y_t - h_t(x_t^{(n)})] / 2)\]

with \(R_t\) the covariance matrix of the measurement error and \(h_t\) the measurement operator.

The weight of a sample is calculated by a normalisation of \(a_t^{(n)}\):

\[p_t^{(n)} = a_t^{(n)} / \sum_{j=1}^N a_t^{(j)}\]

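For a single observed value, so that \(R_t\) reduces to a scalar variance, the two formulas amount to the following numpy calculation (all numbers hypothetical):

```python
import numpy as np

y = 3.0                        # observation y_t
x = np.array([2.5, 3.1, 4.0])  # h_t(x_t) for three samples
R = 0.25                       # measurement error variance

a = np.exp(-(y - x) ** 2 / (2 * R))  # unnormalised weights a_t
p = a / a.sum()                      # normalised sample weights p_t
print(p.sum())  # 1.0
```

The sample whose prediction lies closest to the observation (here the second one) receives the largest weight.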
The particle filter framework executes the following methods using the scheme:

run premcloop
for each filter period:
  for each sample:
    if first period:
      run initial
    else:
      run resume
    for each timestep in filter period:
      run dynamic
    if not last filter period:
      run suspend
      run updateWeight
run postmcloop

The following additional methods must be implemented to use the framework:

suspend()

The suspend section is executed after the time step preceding a filter time step and is used to store the state variables. This can be achieved with the self.reportState() method.

resume()

The resume section is executed before the first time step of a filter period and is intended to re-initialise the model after a filter time step. State variables can be obtained with the self.readState() method.

updateWeight()

The updateWeight method is executed at the filter moment and is used to retrieve the weight of each sample. The method must return a single floating point value (i.e. the \(a_t^{(n)}\)).

As in the Monte Carlo framework, each sample output is stored in a corresponding sample directory. Each sample directory contains a stateVar subdirectory that is used to store the state variables of the model. State variables not holding PCRaster data types must be stored into this directory by the user.

Two different algorithms are implemented in the filter framework. Sequential Importance Resampling and Residual Resampling (see e.g. Weerts2006) can be chosen as selection scheme by using the appropriate framework class. In Sequential Importance Resampling, a cumulative distribution function is constructed from the sample weights \(p_t^{(n)}\). From this distribution N samples are drawn with replacement, using N draws from a uniform distribution between 0 and 1.
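A minimal numpy sketch of Sequential Importance Resampling (a hypothetical helper for illustration, not the framework class):

```python
import numpy as np

def sirResample(weights, rng):
    # normalise the weights and build the cumulative distribution function
    p = np.asarray(weights, dtype=float)
    p = p / p.sum()
    cdf = np.cumsum(p)
    # N uniform draws on [0, 1), each mapped onto a sample index
    u = rng.random(len(p))
    return np.searchsorted(cdf, u)  # indices of the samples to continue

rng = np.random.default_rng(1)
clones = sirResample([0.7, 0.1, 0.1, 0.1], rng)
print(clones)
```

Samples with large weights occupy a wide interval of the cumulative distribution and are therefore cloned more often; samples that are never drawn are not continued.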

In Residual Resampling, in the first step samples are cloned a number of times equal to \(k_t^{(n)} = floor(p_t^{(n)}N)\), with N the number of samples and \(floor\) an operation rounding down to the nearest integer. In a second step, the residual weights \(r_t^{(n)}\) are calculated according to:

\[r_t^{(n)} = {{p_t^{(n)}N - k_t^{(n)}} \over {N - \sum_{n=1}^N k_t^{(n)}}}\]

and used to construct a cumulative distribution function. From this distribution additional samples are drawn until the number of N samples is reached (Karssenberg2009).
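The two steps can be sketched in numpy as follows (a hypothetical helper, not the framework class):

```python
import numpy as np

def residualResample(weights, rng):
    p = np.asarray(weights, dtype=float)
    p = p / p.sum()
    n = len(p)
    # step 1: deterministic clones, floor(N * p) per sample
    k = np.floor(n * p).astype(int)
    idx = np.repeat(np.arange(n), k)
    # step 2: draw the remaining samples from the residual weights
    rest = n - k.sum()
    if rest > 0:
        r = (n * p - k) / rest
        u = rng.random(rest)
        idx = np.concatenate([idx, np.searchsorted(np.cumsum(r), u)])
    return idx

print(residualResample([0.5, 0.5], np.random.default_rng(0)))  # [0 1]
```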

For each filter time step a comma-separated file holding sample statistics is written to the current working directory. For each sample it contains its normalised weight, the cumulative weight up to that sample and the number of clones for that sample. A zero indicates that the sample is not continued. Furthermore, a Graphviz input file holding the sample choice is generated.


The script below shows a dynamic model which is executed within the particle filter framework. The overall runtime of the model still amounts to 180 time steps, and 10 realisations are executed. As three filter moments are chosen, at time steps 70, 100 and 150, four periods in total are executed: time steps 1-70, 71-100, 101-150 and 151-180.

#!/usr/bin/env python
# -*- coding: utf-8 -*-

from pcraster import *
from pcraster.framework import *

class SnowModel(DynamicModel, MonteCarloModel, ParticleFilterModel):
  def __init__(self):
    DynamicModel.__init__(self)
    MonteCarloModel.__init__(self)
    ParticleFilterModel.__init__(self)
    setclone("")

  def premcloop(self):
    dem = self.readmap("dem")
    self.ldd = lddcreate(dem, 1e31, 1e31, 1e31, 1e31)
    elevationMeteoStation = scalar(2058.1)
    self.elevationAboveMeteoStation = dem - elevationMeteoStation
    self.degreeDayFactor = 0.01

  def initial(self):
    self.snow = scalar(0)
    self.temperatureLapseRate = 0.005 + (mapnormal() * 0.001)
    self.report(self.temperatureLapseRate, "lapse")
    self.temperatureCorrection = self.elevationAboveMeteoStation\
         * self.temperatureLapseRate

  def dynamic(self):
    temperatureObserved = self.readDeterministic("tavgo")
    precipitationObserved = self.readDeterministic("pr")
    precipitation = max(0, precipitationObserved * (mapnormal() * 0.2 + 1.0))
    temperature = temperatureObserved - self.temperatureCorrection
    snowFall = ifthenelse(temperature < 0, precipitation, 0)
    self.snow = self.snow + snowFall
    potentialMelt = ifthenelse(temperature > 0, temperature\
         * self.degreeDayFactor, 0)
    actualMelt = min(self.snow, potentialMelt)
    self.snow = max(0, self.snow - actualMelt)
    self.report(self.snow, "s")

  def postmcloop(self):
    names = ["s"]
    mcaveragevariance(names, self.sampleNumbers(), self.timeSteps())
    percentiles = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
    mcpercentiles(names, percentiles, self.sampleNumbers(), self.timeSteps())

  def updateWeight(self):
    modelledData = self.readmap("s")
    modelledAverageMap = areaaverage(modelledData, "")
    observedAverageMap = self.readDeterministic("obsAv")
    observedStdDevMap = ifthenelse(observedAverageMap > 0, observedAverageMap\
         * 0.4, 0.01)
    total = maptotal(((observedAverageMap - modelledAverageMap) ** 2) / (-2.0\
         * (observedStdDevMap ** 2)))
    weight = exp(total)
    weightFloatingPoint, valid = cellvalue(weight, 1, 1)
    return weightFloatingPoint

  def suspend(self):
    self.reportState(self.temperatureLapseRate, "lapse")
    self.reportState(self.snow, "s")

  def resume(self):
    self.temperatureLapseRate = self.readState("lapse")
    self.temperatureCorrection = self.elevationAboveMeteoStation\
         * self.temperatureLapseRate
    self.snow = self.readState("s")

myModel = SnowModel()
dynamicModel = DynamicFramework(myModel, lastTimeStep=180, firstTimestep=1)
mcModel = MonteCarloFramework(dynamicModel, nrSamples=10)
pfModel = SequentialImportanceResamplingFramework(mcModel)
#pfModel = ResidualResamplingFramework(mcModel)
pfModel.setFilterTimesteps([70, 100, 150])

Compared to the Monte Carlo script, the three methods suspend, resume and updateWeight are added. The sections initial, dynamic, premcloop and postmcloop remain identical to the Monte Carlo version.

The state variables of the model are the snow height and the lapse rate. These variables are stored in the suspend() section with self.reportState() into the state variable directory. They are either cloned or replaced by the filter, or continued in the following filter period.

In the resume() method the lapse rate will now be set either to the same value as before the filter moment or to a new value cloned from another sample. The same procedure applies to the snow state variable. As the value of self.temperatureCorrection depends on the lapse rate, it has to be re-initialised too.

To calculate the weight of a sample the model implements equation (1) in updateWeight (Karssenberg2009).

For five meteorological stations in different elevation zones the average snow height values are compared. For the modelled data the zonal values are calculated with areaaverage; the observation values are read from disk. As the observedAverageMap contains missing values except at the measurement locations, the maptotal operation yields the sum over the five elevation zones for the exponent of the weight calculation. The sample weight is afterwards extracted as an individual floating point value.
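The same calculation can be written in numpy terms, with NaN playing the role of the missing values outside the station locations (all numbers hypothetical):

```python
import numpy as np

observedAverage = np.array([1.2, 0.8, np.nan, 0.0, 2.1])  # per-zone observations
modelledAverage = np.array([1.0, 1.0, 0.5, 0.1, 2.0])     # per-zone model values
# 40 % relative error, with a small floor where the observation is zero
observedStdDev = np.where(observedAverage > 0, observedAverage * 0.4, 0.01)
# maptotal-like sum over the zones; nansum skips the missing values
exponent = np.nansum(-(observedAverage - modelledAverage) ** 2
                     / (2.0 * observedStdDev ** 2))
weight = float(np.exp(exponent))
print(0.0 < weight < 1.0)  # True
```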

Ensemble Kalman filter

The Ensemble Kalman Filter is a Monte Carlo approximation of the Kalman filter (Evensen2003). Contrary to the cloning approach of the particle filter, the Ensemble Kalman filter modifies the state variables according to:

(2)\[x_t^{(n),+} = x_t^{(n),0} + P_t^0 H_t^T (H_t P_t^0 H_t^T + R_t)^{-1} (y_t^{(n)} - H_t x_t^{(n),0})\]

for each sample n, where \(x_t^{(n)}\) is a vector containing a realisation n at update moment \(t\) of model components for which observations are available. The superscript \(0\) indicates the prior state vector and superscript \(+\) indicates the posterior state vector calculated by the update. \(P_t^0\) is the ensemble covariance matrix. \(y_t^{(n)}\) is a realisation of the \(y_t\) vector holding the observations. \(R_t\) is the error covariance matrix and \(H_t\) the measurement operator (Evensen2003, Karssenberg2009).
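Equation (2) can be sketched for a toy ensemble with numpy. Directly observed components are assumed, i.e. \(H_t = I\), and all numbers are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(7)
m, N = 2, 10                        # observed components, ensemble size
X = rng.normal(5.0, 1.0, (m, N))    # prior ensemble, one column per sample
H = np.eye(m)                       # measurement operator H_t
R = 0.5 * np.eye(m)                 # measurement error covariance R_t
y = np.array([4.0, 6.0])            # observations y_t

P = np.cov(X)                                 # ensemble covariance P_t
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)  # gain term of equation (2)
# one perturbed observation realisation y_t^(n) per sample
Y = y[:, None] + rng.multivariate_normal(np.zeros(m), R, N).T
Xpost = X + K @ (Y - H @ X)                   # posterior ensemble
print(Xpost.shape)  # (2, 10)
```

Each sample is pulled towards its observation realisation, weighted by the ratio of ensemble spread to measurement error.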

The execution scheme is similar to the one of the particle filter:

run premcloop
for each filter period:
  for each sample:
    if first period:
      run initial
    else:
      run resume
    for each timestep in filter period:
      run dynamic
    run setState
  run setObservations
run postmcloop

The user has to implement the setState, setObservations and resume methods in the model class. As state variables (and possibly parameters) are modified, the setState method now needs to return a vector (i.e. the \(x_t^{(n),0}\) in equation (2)) holding the values instead of an individual value. In the resume section the updated values (i.e. the \(x_t^{(n),+}\) in equation (2)) can be obtained with the getStateVector method.

For each update moment the user needs to provide the observed values \(y_t\) to the Ensemble Kalman framework with the setObservations method. The associated measurement error covariance matrix is set with setObservedMatrices. The measurement operator \(H_t\) can be set with the setMeasurementOperator method.


The script below shows again the snow model, now executed within the Ensemble Kalman filter framework. The overall runtime of the model still amounts to 180 time steps, and 10 realisations are executed. Again three filter moments are chosen, at time steps 70, 100 and 150.

#!/usr/bin/env python
# -*- coding: utf-8 -*-

from pcraster import *
from pcraster.framework import *
import numpy

class SnowModel(DynamicModel, MonteCarloModel, EnKfModel):
  def __init__(self):
    DynamicModel.__init__(self)
    MonteCarloModel.__init__(self)
    EnKfModel.__init__(self)
    setclone("")

  def premcloop(self):
    dem = self.readmap("dem")
    self.ldd = lddcreate(dem, 1e31, 1e31, 1e31, 1e31)
    elevationMeteoStation = scalar(2058.1)
    self.elevationAboveMeteoStation = dem - elevationMeteoStation
    self.degreeDayFactor = 0.01

  def initial(self):
    self.snow = scalar(0)
    self.temperatureLapseRate = 0.005 + (mapnormal() * 0.001)
    self.report(self.temperatureLapseRate, "lapse")
    self.temperatureCorrection = self.elevationAboveMeteoStation\
         * self.temperatureLapseRate

  def dynamic(self):
    temperatureObserved = self.readDeterministic("tavgo")
    precipitationObserved = self.readDeterministic("pr")
    precipitation = max(0, precipitationObserved * (mapnormal() * 0.2 + 1.0))
    temperature = temperatureObserved - self.temperatureCorrection
    snowFall = ifthenelse(temperature < 0, precipitation, 0)
    self.snow = self.snow + snowFall
    potentialMelt = ifthenelse(temperature > 0, temperature\
         * self.degreeDayFactor, 0)
    actualMelt = min(self.snow, potentialMelt)
    self.snow = max(0, self.snow - actualMelt)
    self.report(self.snow, "s")

  def postmcloop(self):
    names = ["s"]
    mcaveragevariance(names, self.sampleNumbers(), self.timeSteps())
    percentiles = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
    mcpercentiles(names, percentiles, self.sampleNumbers(), self.timeSteps())

  def setState(self):
    modelledData = self.readmap("s")
    modelledAverageMap = areaaverage(modelledData, "")
    self.report(modelledAverageMap, "modAv")
    values = numpy.zeros(5)
    values[0] = cellvalue(modelledAverageMap, 5, 5)[0]
    values[1] = cellvalue(modelledAverageMap, 8, 14)[0]
    values[2] = cellvalue(modelledAverageMap, 23, 24)[0]
    values[3] = cellvalue(modelledAverageMap, 28, 12)[0]
    values[4] = cellvalue(modelledAverageMap, 34, 28)[0]
    return values

  def setObservations(self):
    timestep = self.currentTimeStep()
    observedData = readmap(generateNameT("obsAv", timestep))
    values = numpy.zeros(5)
    values[0] = cellvalue(observedData, 1, 1)[0]
    values[1] = cellvalue(observedData, 3, 1)[0]
    values[2] = cellvalue(observedData, 11, 1)[0]
    values[3] = cellvalue(observedData, 18, 1)[0]
    values[4] = cellvalue(observedData, 40, 4)[0]

    # creating the observation matrix (nrObservations x nrSamples)
    # here without added noise
    observations = numpy.array([values, ] * self.nrSamples()).transpose()

    # creating the covariance matrix (nrObservations x nrObservations)
    # here just random values
    covariance = numpy.random.random((5, 5))

    self.setObservedMatrices(observations, covariance)

  def resume(self):
    vec = self.getStateVector(self.currentSampleNumber())
    modelledAverageMap = self.readmap("modAv")
    modvalues = numpy.zeros(5)
    modvalues[0] = cellvalue(modelledAverageMap, 1, 1)[0]
    modvalues[1] = cellvalue(modelledAverageMap, 3, 1)[0]
    modvalues[2] = cellvalue(modelledAverageMap, 11, 1)[0]
    modvalues[3] = cellvalue(modelledAverageMap, 18, 1)[0]
    modvalues[4] = cellvalue(modelledAverageMap, 40, 4)[0]
    oldSnowMap = self.readmap("s")
    self.zones = readmap("")
    newSnowCells = scalar(0)
    for i in range(1, 6):
      snowPerZone = ifthenelse(self.zones == nominal(i), oldSnowMap, scalar(0))
      snowCellsPerZone = ifthenelse(snowPerZone > scalar(0), boolean(1),\
         boolean(0))
      corVal = vec[i - 1] - modvalues[i - 1]
      newSnowCells = ifthenelse(snowCellsPerZone == 1, max(0, snowPerZone\
         + scalar(corVal)), newSnowCells)
    self.snow = newSnowCells

myModel = SnowModel()
dynamicModel = DynamicFramework(myModel, lastTimeStep=180, firstTimestep=1)
mcModel = MonteCarloFramework(dynamicModel, nrSamples=10)
ekfModel = EnsKalmanFilterFramework(mcModel)
ekfModel.setFilterTimesteps([70, 100, 150])

In the setState section the average snow pack is calculated from the sample snow cover map. The map holding the average values is stored in the sample subdirectory in order to avoid recalculating the values in the resume section. The average value for each zone is extracted as an individual value and inserted into a numpy array. This array is returned to the framework.

In the resume section the array returned by getStateVector now holds the updated state variables. The correction factor for the snow values is then calculated as the difference between the average snow heights returned by the Ensemble Kalman filter and the modelled average snow heights. For each zone this correction factor is applied to the snow pack cell values in order to obtain the new snow pack map.
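The per-zone update can be illustrated on a flattened toy map (all values hypothetical):

```python
import numpy as np

zones = np.array([1, 1, 2, 2, 3])           # zone id per cell
snow = np.array([0.0, 4.0, 2.0, 6.0, 3.0])  # modelled snow per cell
# filter average minus modelled average, one correction per zone
correction = np.array([0.5, -1.0, 0.2])

newSnow = snow.copy()
for zone in range(1, 4):
    # only correct cells of this zone that actually carry snow,
    # and clamp the result at zero
    mask = (zones == zone) & (snow > 0)
    newSnow[mask] = np.maximum(0.0, snow[mask] + correction[zone - 1])
```

Cells without snow keep their value of zero, mirroring the ifthenelse construction in the resume section above.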