Tutorial 02 - Approximation of n-D functions with Polynomial Chaos

This tutorial extends Tutorial 01 - Approximation of Functions with Polynomial Chaos to target functions with an arbitrary number of stochastic input parameters.

For simplicity, we consider the real-valued function

\[f(x) = -\sin(4\pi x_1)\sin(3\pi x_2) + 2\]

as the target function throughout this tutorial, i.e.,

import numpy as np
import pythia as pt


def target_function(x):
    val = -np.sin(4*np.pi*x[:, 0])*np.sin(3*np.pi*x[:, 1]) + 2
    return val.reshape(-1, 1)
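
As a quick sanity check, both sine factors vanish at the corners of the domain, so the target function reduces to the constant offset there:

print(target_function(np.array([[0.0, 0.0], [1.0, 1.0]])))
# [[2.] [2.]], since sin(0) = sin(4*pi) = sin(3*pi) = 0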

First, we define the stochastic input parameter \(x=(x_1,x_2)\) with \(x\sim\mathcal{U}([0,1]^2)\).

param1 = pt.parameter.Parameter(
    index=1, name="x_1", domain=[0, 1], distribution='uniform'
)
param2 = pt.parameter.Parameter(
    index=2, name="x_2", domain=[0, 1], distribution='uniform'
)
params = [param1, param2]

Next, we need to specify which terms the PC expansion should include. For this, we use the IndexSet class of PyThia. Here, we simply take all expansion terms from degree zero up to a maximal polynomial degree, allowing a different maximal degree in each component.

sdim = [13, 11]  # stochastic dimensions (tensor)
indices = pt.index.tensor(sdim)
index_set = pt.index.IndexSet(indices)
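
To see which multi-indices such a tensor set contains, we can enumerate them by hand. Assuming pt.index.tensor includes the degrees 0, ..., sdim[j]-1 in component j, an equivalent construction is

import itertools

manual = np.array(list(itertools.product(range(13), range(11))))
print(manual.shape)  # (143, 2), one row per multi-index
print(manual[:3])    # first rows: [0 0], [0 1], [0 2]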

What remains is to generate training data for the PC expansion. Here, we use a specific sampling strategy that is optimal for weighted least-squares training; for details on its optimality, see Cohen & Migliorati (2017). Try different numbers of samples or a different sampling strategy and see how the surrogate changes.

s = pt.sampler.WLSTensorSampler(params, sdim)
x_train = s.sample(1000)
w_train = s.weight(x_train)
y_train = target_function(x_train)
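
The weights compensate for drawing the samples from the optimized distribution rather than the parameter distribution itself. As a rough one-dimensional sketch of how such weights enter a fit (plain numpy, not the PyThia API; PolynomialChaos handles this internally):

import numpy as np

# weighted least squares: minimize sum_i w_i*(y_i - sum_j c_j*P_j(x_i))**2,
# which leads to the normal equations (P^T W P) c = P^T W y
rng = np.random.default_rng(42)
x = rng.uniform(0, 1, size=200)
y = -np.sin(4*np.pi*x) + 2
w = np.ones_like(x)  # placeholder weights; s.weight supplies the real ones
P = np.polynomial.legendre.legvander(2*x - 1, 12)  # Legendre basis on [0, 1]
c = np.linalg.solve(P.T @ (w[:, None]*P), P.T @ (w*y))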

We have now assembled all the data we need to compute our surrogate and can finally use the PolynomialChaos class from PyThia's chaos module.

surrogate = pt.chaos.PolynomialChaos(params, index_set, x_train, w_train, y_train)

The PolynomialChaos object we just created can do a lot more than that, but for the moment we are only interested in the approximation of our target function. Let us generate some test data to see how good our approximation is.

test_sampler = pt.sampler.ParameterSampler(params)
x_test = test_sampler.sample(1000)
y_test = target_function(x_test)
y_approx = surrogate.eval(x_test)
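
A quick empirical error check on the test set could look like this (the complete script below computes additional error measures):

e_rmse = np.sqrt(np.mean((y_test - y_approx)**2))
print(f"empirical L2 (RMSE) error: {e_rmse:4.2e}")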

Note

For testing, we draw sample realizations according to the distribution of the parameters, not according to the weighted least-squares distribution used to generate the training data.

This concludes the second tutorial. Below you find the complete script, which you can run on your own system. The script also computes the approximation error of the PC surrogate and plots the approximation against the target function.

Complete Script

import os
import matplotlib.pyplot as plt
import numpy as np
import pythia as pt


def target_function(x):
    val = -np.sin(4*np.pi*x[:, 0])*np.sin(3*np.pi*x[:, 1]) + 2
    return val.reshape(-1, 1)


print("[{}] run TUTORIAL 02 - 2D approximation with pc".format(pt.misc.now()))

print(f"[{pt.misc.now()}] set parameters")
param1 = pt.parameter.Parameter(
    index=1, name="x_1", domain=[0, 1], distribution='uniform'
)
param2 = pt.parameter.Parameter(
    index=2, name="x_2", domain=[0, 1], distribution='uniform'
)
params = [param1, param2]

sdim = [13, 11]
indices = pt.index.tensor(sdim)
index_set = pt.index.IndexSet(indices)
print(f"[{pt.misc.now()}] multiindex information:")
print(f"    number of indices: {index_set.shape[0]}")
print(f"    dimension: {index_set.shape[1]}")
print(f"    maximum dimension: {index_set.max}")
print(f"    number of sobol indices: {len(index_set.sobol_tuples)}")

N = 1000
print(f"[{pt.misc.now()}] generate training data ({N})")
s = pt.sampler.WLSTensorSampler(params, sdim)
x_train = s.sample(N)
w_train = s.weight(x_train)
y_train = target_function(x_train)

print(f"[{pt.misc.now()}] compute pc expansion")
surrogate = pt.chaos.PolynomialChaos(
    params, index_set, x_train, w_train, y_train)

N = 1000
print(f"[{pt.misc.now()}] generate test data ({N})")
test_sampler = pt.sampler.ParameterSampler(params)
x_test = test_sampler.sample(N)
y_test = target_function(x_test)
y_approx = surrogate.eval(x_test)

# ERROR COMPUTATION
e_L2 = np.sqrt(np.sum((y_test-y_approx)**2)/x_test.shape[0])
e_L2_rel = e_L2 / np.sqrt(np.sum((y_test)**2)/x_test.shape[0])
e_max = np.max(np.abs(y_test-y_approx), axis=0)
e_max_rel = np.max(np.abs(y_test-y_approx)/np.abs(y_test), axis=0)
print(f"[{pt.misc.now()}] error L2 (abs/rel):  {e_L2:4.2e}/{e_L2_rel:4.2e}")
print(f"[{pt.misc.now()}] error max (abs/rel): {e_max[0]:4.2e}/{e_max_rel[0]:4.2e}")

# PLOT APPROXIMATION
path = "./img/"
os.makedirs(path, exist_ok=True)

x1 = np.linspace(0, 1, 200).reshape(-1, 1)
x2 = np.linspace(0, 1, 200).reshape(-1, 1)
xx = pt.misc.cartProd([x1, x2])  # cartesian product of two vectors
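# reshape the flat evaluations onto the 200 x 200 grid and transpose so that
# x_1 runs along the horizontal plot axis (assuming cartProd varies the
# second factor fastest)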
y_target = target_function(xx).reshape(x1.size, -1).T
y_approx = surrogate.eval(xx).reshape(x1.size, -1).T

fig, ax = plt.subplots(figsize=(10, 4), nrows=1, ncols=3)
fig.suptitle("Approximation of 2D function with PyThia")
# plot on the parameter domain [0, 1]^2
im0 = ax[0].contourf(y_target, 15, extent=[0, 1, 0, 1])
ax[0].set_title("target function")
im1 = ax[1].contourf(y_approx, 15, extent=[0, 1, 0, 1])
ax[1].set_title("surrogate")
im2 = ax[2].contourf(y_target-y_approx, 15, extent=[0, 1, 0, 1])
ax[2].set_title("target - surrogate")
fig.subplots_adjust(right=0.8)
cbar_ax = fig.add_axes([0.85, 0.15, 0.05, 0.7])
fig.colorbar(im2, cax=cbar_ax)
plt.savefig(path+"tutorial_02.png")

print(f"[{pt.misc.now()}] save plot to: {path}")