Tutorial 02 - Approximation of n-D functions with Polynomial Chaos

This tutorial extends Tutorial 01 - Approximation of Functions with Polynomial Chaos to target functions with an arbitrary number of stochastic input parameters.

For simplicity, we consider the real-valued function

\[f(x) = -\sin(4\pi x_1)\sin(3\pi x_2) + 2\]

with \(x=(x_1,x_2)\) as the target function throughout this tutorial, i.e.,

import numpy as np
import pythia as pt

def target_function(x: np.ndarray) -> np.ndarray:
    val = -np.sin(4 * np.pi * x[:, 0]) * np.sin(3 * np.pi * x[:, 1]) + 2
    return val.reshape(-1, 1)

First, we define the stochastic input parameter \(x=(x_1,x_2)\) with \(x\sim\mathcal{U}([0,1]^2)\).

param1 = pt.parameter.Parameter(name="x_1", domain=[0, 1], distribution="uniform")
param2 = pt.parameter.Parameter(name="x_2", domain=[0, 1], distribution="uniform")
params = [param1, param2]

Next, we need to specify which terms the PC expansion should include. For this, we use the IndexSet class of PyThia. Here, we simply include all expansion terms from degree zero up to a maximal polynomial degree, allowing a different maximal degree for each component.

sdim = [13, 11]  # stochastic dimensions (tensor)
indices = pt.index.tensor_set(sdim)
index_set = pt.index.IndexSet(indices)
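
To get a feeling for which multiindices this produces, the sketch below builds a much smaller tensor index set; the expected count of 3 * 2 = 6 multiindices is an assumption based on the tensor-product construction (every combination of the per-component degrees 0-2 and 0-1).

small_indices = pt.index.tensor_set([3, 2])  # degrees 0-2 in x_1, 0-1 in x_2
small_index_set = pt.index.IndexSet(small_indices)
print(small_index_set.shape)  # expected: (6, 2), i.e. 6 multiindices in 2 dimensions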

What remains is to generate training data for the PC expansion. Here, we use a specific strategy to generate samples that are optimal for training; for details on the optimality of this sampling strategy, see the work of Cohen & Migliorati (2017). Try it out and see how the surrogate changes if you use a different number of samples or a different sampling strategy; a sketch of one alternative is shown after the training-data snippet below.

N = 1000
s = pt.sampler.WLSTensorSampler(params, sdim)
x_train = s.sample(N)
w_train = s.weight(x_train)
y_train = target_function(x_train)
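
For comparison, here is a minimal sketch of the naive alternative: drawing samples directly from the parameter distribution with the ParameterSampler used for the test data below. The unit weights are an assumption that is valid when the samples follow the parameter distribution itself, in which case the weighted least-squares fit reduces to an ordinary least-squares fit.

alt_sampler = pt.sampler.ParameterSampler(params)  # samples according to the parameter distribution
x_alt = alt_sampler.sample(N)
w_alt = np.ones(x_alt.shape[0])  # unit weights (assumption: no reweighting needed for these samples)
y_alt = target_function(x_alt)
alt_surrogate = pt.chaos.PolynomialChaos(params, index_set, x_alt, w_alt, y_alt)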

We have assembled all the data we need to compute our surrogate and can finally use the PolynomialChaos class from PyThia's chaos module.

surrogate = pt.chaos.PolynomialChaos(params, index_set, x_train, w_train, y_train)

The PolynomialChaos object we just created can do a lot of things, but for the moment we are only interested in the approximation of our function. Let us generate some testing data to see how good our approximation is.

N = 1000
test_sampler = pt.sampler.ParameterSampler(params)
x_test = test_sampler.sample(N)
y_test = target_function(x_test)
y_approx = surrogate.eval(x_test)

Note

For testing, we choose sample realizations drawn according to the distribution of the parameters, not with respect to the weighted least-squares distribution we used to generate the training data.

This concludes the second tutorial. Below you find the complete script, which you can run on your own system. The script also computes the absolute and relative approximation errors of the PC surrogate on the test set.
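
Concretely, the reported errors are the empirical (absolute and relative) \(L^2\) errors over the \(N\) test samples \(x^{(n)}\),

\[e_{L^2} = \sqrt{\frac{1}{N}\sum_{n=1}^{N}\bigl(f(x^{(n)})-f_{\mathrm{PC}}(x^{(n)})\bigr)^2}, \qquad e_{L^2}^{\mathrm{rel}} = \frac{e_{L^2}}{\sqrt{\frac{1}{N}\sum_{n=1}^{N} f(x^{(n)})^2}},\]

where \(f_{\mathrm{PC}}\) denotes the PC surrogate, together with the corresponding (relative) maximum errors over the test set.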

Complete Script

The complete source code listed below is located in the tutorials/ subdirectory of the repository root.

 1"""
 2File: tutorials/tutorial_02.py
 3Author: Nando Hegemann
 4Gitlab: https://gitlab.com/Nando-Hegemann
 5Description: Tutorial 02 - Approximation of n-D functions with Polynomial Chaos.
 6SPDX-License-Identifier: LGPL-3.0-or-later OR Hippocratic-3.0-ECO-MEDIA-MIL
 7"""
 8import numpy as np
 9import pythia as pt
10
11
12def target_function(x: np.ndarray) -> np.ndarray:
13    """Target function.
14
15    Parameters
16    ----------
17    x : np.ndarray
18    """
19    val = -np.sin(4 * np.pi * x[:, 0]) * np.sin(3 * np.pi * x[:, 1]) + 2
20    return val.reshape(-1, 1)
21
22
23print("Tutorial 02 - Approximation of n-D functions with Polynomial Chaos")
24
25print("set parameters")
26param1 = pt.parameter.Parameter(name="x_1", domain=[0, 1], distribution="uniform")
27param2 = pt.parameter.Parameter(name="x_2", domain=[0, 1], distribution="uniform")
28params = [param1, param2]
29
30sdim = [13, 11]  # stochastic dimensions (tensor)
31indices = pt.index.tensor_set(sdim)
32index_set = pt.index.IndexSet(indices)
33print("multiindex information:")
34print(f"    number of indices: {index_set.shape[0]}")
35print(f"    dimension: {index_set.shape[1]}")
36print(f"    maximum dimension: {index_set.max}")
37print(f"    number of sobol indices: {len(index_set.sobol_tuples)}")
38
39N = 1000
40print(f"generate training data ({N})")
41s = pt.sampler.WLSTensorSampler(params, sdim)
42x_train = s.sample(N)
43w_train = s.weight(x_train)
44y_train = target_function(x_train)
45
46print("compute pc expansion")
47surrogate = pt.chaos.PolynomialChaos(params, index_set, x_train, w_train, y_train)
48
49N = 1000
50print(f"generate test data ({N})")
51test_sampler = pt.sampler.ParameterSampler(params)
52x_test = test_sampler.sample(N)
53y_test = target_function(x_test)
54y_approx = surrogate.eval(x_test)
55
56# error computation
57e_L2 = np.sqrt(np.sum((y_test - y_approx) ** 2) / x_test.shape[0])
58e_L2_rel = e_L2 / np.sqrt(np.sum((y_test) ** 2) / x_test.shape[0])
59e_max = np.max(np.abs(y_test - y_approx), axis=0)
60e_max_rel = np.max(np.abs(y_test - y_approx) / np.abs(y_test), axis=0)
61print(f"error L2 (abs/rel):  {e_L2:4.2e}/{e_L2_rel:4.2e}")
62print(f"error max (abs/rel): {e_max[0]:4.2e}/{e_max_rel[0]:4.2e}")