2. Using a trained reflectorch model#

In the first subsection, we show how to use the inference class (which abstracts away the implementation details) on experimental data. The second subsection shows how to filter the configuration files of the pretrained models available on Huggingface.

In the last subsection, we illustrate some of the implementation details involved in using a trained model, which resembles the steps performed during the training process, including the generation of synthetic data.

2.1. Simplified use of a trained reflectorch model on experimental data#

import torch
import numpy as np
import matplotlib.pyplot as plt

from reflectorch import EasyInferenceModel, interp_reflectivity, get_param_labels

torch.manual_seed(0); # set seed for reproducibility

The EasyInferenceModel class simplifies the inference step for single reflectivity curves. We initialize the inference model with the following arguments:

- config_name - the name of the configuration file of a pretrained model, either with or without the '.yaml' extension.
- model_name - the file name of the saved model weights. By default, the weights file for a specific configuration is named 'model_' + config_name plus the extension (either '.pt' or '.safetensors'), but a different file name can be specified via this argument.
- root_dir - a project directory containing the configs and saved_models subdirectories (holding the configurations and model weights, respectively), if different from the default package directory.
- weights_format - the format (and extension) of the weights file, either 'pt' (the default PyTorch weights format) or 'safetensors' (a format which avoids the security risks of the default PyTorch format and is recommended for sharing via Huggingface).
- repo_id - the Huggingface repository from which the configuration and weights files are downloaded automatically if they are not found locally.
- device - either 'cuda' (for inference on the GPU) or 'cpu' (for machines without a CUDA-capable GPU).

We create an instance of the inference class, which also prints some details about the model, such as the parameter ranges.

inference_model = EasyInferenceModel(config_name='mc25',
                                     model_name=None,
                                     root_dir=None,
                                     weights_format='safetensors',
                                     repo_id='valentinsingularity/reflectivity',
                                     device='cuda',
                                     )
Configuration file `D:\Github Projects\reflectorch\reflectorch\configs\mc25.yaml` found locally.
Weights file `D:\Github Projects\reflectorch\reflectorch\saved_models\model_mc25.safetensors` found locally.
Model mc25 loaded. Number of parameters: 13.37 M
The model corresponds to a `standard_model` parameterization with 2 layers (8 predicted parameters)
Parameter types and total ranges:
- thicknesses: [0.0, 500.0]
- roughnesses: [0.0, 20.0]
- slds: [0.0, 50.0]
Allowed widths of the prior bound intervals (max-min):
- thicknesses: [0.01, 500.0]
- roughnesses: [0.01, 20.0]
- slds: [0.01, 5.0]
The model was trained on curves discretized at 128 uniform points between q_min=0.02 and q_max=0.15

We consider the following reflectivity curve (already preprocessed by standard procedures):

data = np.loadtxt('../exp_data/data_PTCDI-C3.txt', delimiter='\t', skiprows=1)
q_exp = data[..., 0]
curve_exp = data[..., 1]

print(curve_exp.shape, q_exp.shape, q_exp.min(), q_exp.max())
(141,) (141,) 0.0142368058 0.213456644

We interpolate the reflectivity curve to the q points the model was trained on:

q_model = inference_model.trainer.loader.q_generator.q.cpu().numpy()
exp_curve_interp = interp_reflectivity(q_model, q_exp, curve_exp)

print(exp_curve_interp.shape)
(128,)
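Interpolation of reflectivity curves is usually performed on a logarithmic intensity scale; a rough manual equivalent of the call above, assuming log-scale interpolation (the actual implementation of interp_reflectivity may differ), would be:

# rough equivalent, assuming interpolation of log10(R) onto the model q grid
exp_curve_interp_manual = 10 ** np.interp(q_model, q_exp, np.log10(curve_exp))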

This is a measurement of a two-layer film for which we have some prior knowledge about the investigated system. We know the substrate is silicon, on top of which sits a thin silicon oxide layer, followed by a perylene diimide (PTCDI-C3) layer. Thus, we can set narrow prior bounds for the scattering length densities (around the known values for these materials) and for the thickness of the silicon oxide layer, but we set wide prior bounds for the roughnesses and for the thickness of the perylene diimide layer. We specify the prior bounds for the parameters as a list of tuples (min_prior_bound, max_prior_bound).

prior_bounds = [(1., 400.), (1., 10.), #layer thicknesses (top to bottom)
                (0., 20.), (0., 15.), (0., 15.), #interlayer roughnesses (top to bottom)
                (10., 13.), (20.,21.), (20., 21.)] #real layer slds (top to bottom)

Now, we can call the predict method of the inference model, providing the reflectivity curve and the prior bounds as input, in order to obtain the neural network predictions. Its main arguments are:

- clip_prediction - if True, the predictions are clipped so that they do not fall outside the interval defined by the prior bounds.
- polish_prediction - if True, the predictions are further polished using a conventional least mean squares (LMS) fit (currently only available for the standard box-model parameterization).
- fit_growth - if True (together with polish_prediction), an additional parameter is introduced during the LMS polishing to account for the change in the thickness of the top layer during the in-situ measurement of the reflectivity curve during film deposition (necessary when the acquisition rate is slow compared to the growth rate); the maximum possible change is given by the max_d_change argument.
- use_q_shift - if True, the prediction is performed for a batch of slightly shifted versions of the input curve and the best result is returned, which mitigates the influence of imperfect sample alignment, as introduced in Greco et al. (applicable only to models with a fixed q-discretization).
- calc_pred_curve - if True, the reflectivity curve corresponding to the predicted parameters is also computed and added to the returned dictionary.

The method returns a dictionary containing the predicted parameters (both as a BasicParams object and as a Numpy array).

prediction_dict = inference_model.predict(reflectivity_curve=exp_curve_interp,
                                          prior_bounds=prior_bounds,
                                          clip_prediction=False,
                                          polish_prediction=True,
                                          use_q_shift=False,
                                          calc_pred_curve=True,
                                          )
print(prediction_dict.keys())
dict_keys(['predicted_params_object', 'predicted_params_array', 'param_names', 'predicted_curve', 'polished_params_array', 'polished_curve'])

pred_params = prediction_dict['predicted_params_array']
pred_curve = prediction_dict['predicted_curve']
for param_name, pred_param_val, polished_param_val in zip(prediction_dict["param_names"], pred_params, prediction_dict["polished_params_array"]):
    print(f'{param_name.ljust(14)} -> Predicted: {pred_param_val:.2f}       Polished: {polished_param_val:.2f}')
Thickness L2   -> Predicted: 194.75       Polished: 188.19
Thickness L1   -> Predicted: 13.27       Polished: 10.00
Roughness L2   -> Predicted: 19.60       Polished: 20.00
Roughness L1   -> Predicted: 4.74       Polished: 4.55
Roughness sub  -> Predicted: 8.10       Polished: 5.51
SLD L2         -> Predicted: 10.87       Polished: 11.66
SLD L1         -> Predicted: 20.43       Polished: 21.00
SLD sub        -> Predicted: 20.37       Polished: 20.51
fig, ax = plt.subplots(1,1,figsize=(6,6))
ax.set_yscale('log')
ax.set_ylim(0.5e-10, 5)
ax.set_xlabel('q [$Å^{-1}$]', fontsize=20)
ax.set_ylabel('R(q)', fontsize=20)
ax.tick_params(axis='both', which='major', labelsize=15)
ax.tick_params(axis='both', which='minor', labelsize=15)
y_tick_locations = [10**(-2*i) for i in range(6)]
ax.yaxis.set_major_locator(plt.FixedLocator(y_tick_locations))
    
ax.scatter(q_model, exp_curve_interp, c='g', s=2, label='interp exp. curve')
ax.plot(q_model, pred_curve, c='r', lw=1, label='pred. curve')
ax.plot(q_model, prediction_dict['polished_curve'], c='orange', lw=1, label='polished pred. curve')

ax.legend(loc='upper right', fontsize=14);
[Figure: log-scale R(q) plot showing the interpolated experimental curve (green points) with the predicted (red) and polished (orange) curves]
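For reflectivity curves measured in situ during film deposition, the growth correction described above can be enabled during polishing; a sketch of such a call, in which the value of max_d_change is purely illustrative:

prediction_dict_growth = inference_model.predict(reflectivity_curve=exp_curve_interp,
                                                 prior_bounds=prior_bounds,
                                                 polish_prediction=True,
                                                 fit_growth=True,  # fit the change in top layer thickness during acquisition
                                                 max_d_change=5.,  # illustrative upper bound (in Å) for that change
                                                 calc_pred_curve=True,
                                                 )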

Now we make a prediction for a similar structure in which the top layer is fullerene (C60) instead; thus, we only change the prior bounds for the SLD of this layer.

data = np.loadtxt('../exp_data/data_C60.txt', delimiter='\t', skiprows=1)
q_exp = data[..., 0]
curve_exp = data[..., 1]

q_model = inference_model.trainer.loader.q_generator.q.cpu().numpy()
exp_curve_interp = interp_reflectivity(q_model, q_exp, curve_exp)

#print(curve_exp.shape, q_exp.shape, q_exp.min(), q_exp.max())

prior_bounds = [(1., 400.), (1., 10.), #layer thicknesses (top to bottom)
                (0., 20.), (0., 15.), (0., 15.), #interlayer roughnesses (top to bottom)
                (13., 18.), (20.,21.), (20., 21.)] #real layer slds (top to bottom)

prediction_dict = inference_model.predict(reflectivity_curve=exp_curve_interp,
                                          prior_bounds=prior_bounds,
                                          clip_prediction=False,
                                          polish_prediction=True,
                                          use_q_shift=False,
                                          calc_pred_curve=True,
                                          )


pred_params = prediction_dict['predicted_params_array']
pred_curve = prediction_dict['predicted_curve']

for param_name, param_val in zip(prediction_dict["param_names"], pred_params):
    print(f'{param_name.ljust(14)} : {param_val:.2f}')

fig, ax = plt.subplots(1,1,figsize=(6,6))
ax.set_yscale('log')
ax.set_ylim(0.5e-10, 5)
ax.set_xlabel('q [$Å^{-1}$]', fontsize=20)
ax.set_ylabel('R(q)', fontsize=20)
ax.tick_params(axis='both', which='major', labelsize=15)
ax.tick_params(axis='both', which='minor', labelsize=15)
y_tick_locations = [10**(-2*i) for i in range(6)]
ax.yaxis.set_major_locator(plt.FixedLocator(y_tick_locations))
    
ax.scatter(q_model, exp_curve_interp, c='g', s=2, label='interp exp. curve')
ax.plot(q_model, pred_curve, c='r', lw=1, label='pred. curve')
ax.plot(q_model, prediction_dict['polished_curve'], c='orange', lw=1, label='polished pred. curve')

ax.legend(loc='upper right', fontsize=14);
Thickness L2   : 159.75
Thickness L1   : 12.36
Roughness L2   : 8.89
Roughness L1   : 3.97
Roughness sub  : 5.77
SLD L2         : 15.09
SLD L1         : 20.80
SLD sub        : 20.06
[Figure: log-scale R(q) plot of the interpolated experimental curve with the predicted and polished curves for the C60 sample]

Above, we used a model trained with a fixed q-discretization (i.e. a 1D CNN embedding network) and interpolated the experimental reflectivity curves to that specific discretization. Now, we predict using a model trained with a variable q-discretization (i.e. an FNO embedding network), without interpolating the reflectivity curves. In this scenario, we must also provide the q values of the reflectivity curve when calling the predict method.

inference_model = EasyInferenceModel(config_name='mc-o4')
Configuration file `D:\Github Projects\reflectorch\reflectorch\configs\mc-o4.yaml` found locally.
Weights file `D:\Github Projects\reflectorch\reflectorch\saved_models\model_mc-o4.safetensors` found locally.
Model mc-o4 loaded. Number of parameters: 14.61 M
The model corresponds to a `standard_model` parameterization with 2 layers (8 predicted parameters)
Parameter types and total ranges:
- thicknesses: [0.0, 500.0]
- roughnesses: [0.0, 20.0]
- slds: [0.0, 50.0]
Allowed widths of the prior bound intervals (max-min):
- thicknesses: [0.01, 500.0]
- roughnesses: [0.01, 20.0]
- slds: [0.01, 5.0]
The model was trained on curves discretized at between 128 and 256 uniform points, with q_min in [0.01, 0.05] and q_max in [0.15, 0.4]
data = np.loadtxt('../exp_data/data_PTCDI-C3.txt', delimiter='\t', skiprows=1)
q_exp = data[..., 0]
curve_exp = data[..., 1]

#print(curve_exp.shape, q_exp.shape, q_exp.min(), q_exp.max())

prior_bounds = [(1., 400.), (1., 10.), #layer thicknesses (top to bottom)
                (0., 20.), (0., 15.), (0., 15.), #interlayer roughnesses (top to bottom)
                (10., 13.), (20.,21.), (20., 21.)] #real layer slds (top to bottom)

prediction_dict = inference_model.predict(reflectivity_curve=curve_exp,
                                          q_values=q_exp,
                                          prior_bounds=prior_bounds,
                                          clip_prediction=False,
                                          polish_prediction=False,
                                          calc_pred_curve=True,
                                          )

pred_params = prediction_dict['predicted_params_array']
pred_curve = prediction_dict['predicted_curve']

for param_name, param_val in zip(prediction_dict["param_names"], pred_params):
    print(f'{param_name.ljust(14)} : {param_val:.2f}')

fig, ax = plt.subplots(1,1,figsize=(6,6))
ax.set_yscale('log')
ax.set_ylim(0.5e-10, 5)
ax.set_xlabel('q [$Å^{-1}$]', fontsize=20)
ax.set_ylabel('R(q)', fontsize=20)
ax.tick_params(axis='both', which='major', labelsize=15)
ax.tick_params(axis='both', which='minor', labelsize=15)
y_tick_locations = [10**(-2*i) for i in range(6)]
ax.yaxis.set_major_locator(plt.FixedLocator(y_tick_locations))
    
ax.scatter(q_exp, curve_exp, c='b', s=2, label='exp. curve')
ax.plot(q_exp, pred_curve, c='r', lw=1, label='pred. curve')
#ax.plot(q_exp, prediction_dict['polished_curve'], c='orange', lw=1, label='polished pred. curve')

ax.legend(loc='upper right', fontsize=14);
Thickness L2   : 186.60
Thickness L1   : 8.87
Roughness L2   : 19.92
Roughness L1   : 3.64
Roughness sub  : 5.71
SLD L2         : 11.17
SLD L1         : 20.33
SLD sub        : 20.53
[Figure: log-scale R(q) plot of the experimental curve (blue points) with the predicted curve (red)]
data = np.loadtxt('../exp_data/data_C60.txt', delimiter='\t', skiprows=1)
q_exp = data[..., 0]
curve_exp = data[..., 1]

#print(curve_exp.shape, q_exp.shape, q_exp.min(), q_exp.max())

prior_bounds = [(1., 400.), (1., 10.), #layer thicknesses (top to bottom)
                (0., 20.), (0., 15.), (0., 15.), #interlayer roughnesses (top to bottom)
                (13., 18.), (20.,21.), (20., 21.)] #real layer slds (top to bottom)

prediction_dict = inference_model.predict(reflectivity_curve=curve_exp,
                                          q_values=q_exp,
                                          prior_bounds=prior_bounds,
                                          clip_prediction=False,
                                          polish_prediction=True,
                                          calc_pred_curve=True,
                                          )


pred_params = prediction_dict['predicted_params_array']
pred_curve = prediction_dict['predicted_curve']

fig, ax = plt.subplots(1,1,figsize=(6,6))
ax.set_yscale('log')
ax.set_ylim(0.5e-10, 5)
ax.set_xlabel('q [$Å^{-1}$]', fontsize=20)
ax.set_ylabel('R(q)', fontsize=20)
ax.tick_params(axis='both', which='major', labelsize=15)
ax.tick_params(axis='both', which='minor', labelsize=15)
y_tick_locations = [10**(-2*i) for i in range(6)]
ax.yaxis.set_major_locator(plt.FixedLocator(y_tick_locations))
    
ax.scatter(q_exp, curve_exp, c='b', s=2, label='exp. curve')
ax.plot(q_exp, pred_curve, c='r', lw=1, label='pred. curve')
ax.plot(q_exp, prediction_dict['polished_curve'], c='orange', lw=1, label='polished pred. curve')

ax.legend(loc='upper right', fontsize=14);
[Figure: log-scale R(q) plot of the experimental curve with the predicted and polished curves for the C60 sample]

2.2. Filtering available configuration files#

The configuration files available on Huggingface can be filtered based on their properties using the HuggingfaceQueryMatcher class. We initialize an instance of the query matcher, which downloads the configuration files to a temporary directory (this can take around one minute).

from reflectorch import HuggingfaceQueryMatcher

hf_query_matcher = HuggingfaceQueryMatcher(repo_id='valentinsingularity/reflectivity')

Then, we can provide a query (i.e. a dictionary of key-value pairs) as an argument to its get_matching_configs method in order to get the list of configurations matching that query. The query should be formatted according to the hierarchical structure of the YAML configuration files (which are described in detail in the following sections of the documentation).

For keys containing the param_ranges subkey, a configuration is selected if the queried range (i.e. the desired parameter range) is a subrange of the parameter range in the configuration; in all other cases, the values must match exactly.

query = {
    'dset.prior_sampler.kwargs.max_num_layers': 3,
    'dset.prior_sampler.kwargs.param_ranges.slds': [0., 100.],
}

matching_configs = hf_query_matcher.get_matching_configs(query)
for config in matching_configs:
    print(config)
mc42.yaml
mc43.yaml
mc44.yaml
mc45.yaml
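A matching configuration name can then be passed directly to the inference class, for instance:

# load the first matching configuration (the '.yaml' extension is accepted as part of the name)
inference_model = EasyInferenceModel(config_name=matching_configs[0],
                                     repo_id='valentinsingularity/reflectivity')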
query = {
    'dset.prior_sampler.kwargs.max_num_layers': 2,
    'model.network.cls': 'NetworkWithPriorsFnoEmb',
    'model.network.kwargs.mlp_activation': 'gelu',
}

matching_configs = hf_query_matcher.get_matching_configs(query)
for config in matching_configs:
    print(config)
mc-o1.yaml
mc-o10.yaml
mc-o11.yaml
mc-o12.yaml
mc-o2.yaml
mc-o3.yaml
mc-o4.yaml
mc-o5.yaml
mc-o7.yaml
mc-o8.yaml
mc-o9.yaml

2.3. Detailed use of a reflectorch model on synthetic data#

import torch
import matplotlib.pyplot as plt

from ipywidgets import interact
from reflectorch import get_trainer_by_name

torch.manual_seed(0); # set seed for reproducibility

To load a trained reflectorch model, we first have to specify the name of the model we wish to load. This name should match the name of the YAML configuration file used for training that model. Here, we load a model for a 2-layer box (or slab) parameterization of the thin-film SLD profile.

trained_model_name = 'mc1'

Next, we initialize an instance of the PointEstimatorTrainer class using the get_trainer_by_name function.

Note

The manner in which get_trainer_by_name is used here assumes that the configuration and weights files are available locally (e.g. after cloning the entire repository). If the package was installed in editable mode, the configuration files are read from the configs directory located inside the repository directory; otherwise, the path to the directory containing the configuration file should also be specified using the config_dir argument. The load_weights argument must be set to True in order for the saved weights of the neural network to be loaded; otherwise, the network weights are randomly initialized.

trainer = get_trainer_by_name(config_name=trained_model_name, load_weights=True)
Model mc1 loaded. Number of parameters: 13.37 M
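For illustration, a hypothetical call for a project whose configuration files live outside the package might look as follows (the paths below are placeholders, to be adapted to your own setup):

trainer = get_trainer_by_name(config_name=trained_model_name,
                              config_dir='/path/to/my_project/configs',  # placeholder: directory containing the YAML configuration file
                              load_weights=True,  # load the saved weights instead of a random initialization
                              )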

2.3.1. Generating synthetic data#

We can generate a batch of synthetic data using the get_batch method of the data loader:

batch_size = 64

trainer.loader.calc_denoised_curves = True
simulated_data = trainer.loader.get_batch(batch_size=batch_size)

This method returns a dictionary with the following entries (a quick way to inspect them is sketched after the list):

  1. params - an instance of the BasicParams class containing the physical (unscaled) values of the generated parameters, together with the generated minimum and maximum prior bound for each parameter (see the paper for more details about the generation process)

  2. scaled_params - a PyTorch tensor containing the parameters, minimum bounds and maximum bounds, all scaled to the ML-friendly range [-1, 1]

  3. q_values - a PyTorch tensor containing the reciprocal space (q) positions of the points in the reflectivity curve, in units of Å⁻¹

  4. scaled_noisy_curves - a PyTorch tensor containing the simulated reflectivity curves (including added noise) scaled to the ML-friendly range [-1, 1]

  5. curves - a PyTorch tensor containing the (unscaled) theoretical reflectivity curves without added noise; it is only computed if trainer.loader.calc_denoised_curves is set to True (the default is False)
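As announced above, a minimal sketch that prints the shape or type of each entry of the returned dictionary:

for key, value in simulated_data.items():
    # tensor entries are printed with their shape, other entries with their type
    if torch.is_tensor(value):
        print(f'{key}: tensor of shape {tuple(value.shape)}')
    else:
        print(f'{key}: {type(value).__name__}')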

We can inspect one of the simulated curves:

q = simulated_data['q_values']
scaled_noisy_curves = simulated_data['scaled_noisy_curves']
unscaled_noisy_curves = trainer.loader.curves_scaler.restore(scaled_noisy_curves)
unscaled_denoised_curve = simulated_data['curves']

def plot_refl_curve(i=0):
    fig, ax = plt.subplots(1,1,figsize=(6,6))

    ax.set_yscale('log')
    ax.set_ylim(0.5e-10, 5)

    ax.set_xlabel('q [$Å^{-1}$]', fontsize=20)
    ax.set_ylabel('R(q)', fontsize=20)

    ax.tick_params(axis='both', which='major', labelsize=15)
    ax.tick_params(axis='both', which='minor', labelsize=15)
    
    y_tick_locations = [10**(-2*i) for i in range(6)]
    ax.yaxis.set_major_locator(plt.FixedLocator(y_tick_locations))
        
    ax.scatter(q[i].cpu().numpy(), unscaled_noisy_curves[i].cpu().numpy() + 1e-10, c='b', s=2, label='simulated curve')
    ax.plot(q[i].cpu().numpy(), unscaled_denoised_curve[i].cpu().numpy() + 1e-10, c='g', lw=1, label='theoretical curve')

    ax.legend(loc='upper right', fontsize=14)

plot_refl_curve(i=0)
[Figure: log-scale R(q) plot of the simulated noisy curve (blue points) and the theoretical curve (green)]

This trained model uses a 2-layer parameterization of the SLD profile (in addition to the substrate), which amounts to 8 predicted film parameters:

n_layers = simulated_data['params'].max_layer_num
n_params = simulated_data['params'].num_params

print(f'Number of layers: {n_layers},  Number of film parameters: {n_params}')
Number of layers: 2,  Number of film parameters: 8

2.3.2. Applying the model to synthetic data#

The input to the neural network consists of the batch of reflectivity curves together with the prior bounds (minimum and maximum) for each film parameter. For experimental data, the prior bounds can be set according to the prior knowledge about the investigated thin film. In this example on simulated data, we use the prior bounds already sampled during the data generation process (i.e. those meant for training the model), which ensures reasonable values for the prior bounds.

In the scaled_params tensor, the first 8 columns correspond to the scaled ground-truth values of the film parameters, the next 8 columns to the scaled minimum bounds, and the last 8 to the scaled maximum bounds. Thus, we select the last 16 columns as our input prior bounds:

scaled_bounds = simulated_data['scaled_params'][..., n_params:]

print(scaled_bounds.shape)
torch.Size([64, 16])

The neural network can be accessed as the model attribute of the trainer. By providing the scaled reflectivity curves and the scaled prior bounds as inputs to the network, we obtain the predictions for the parameters. We should also make sure that the inputs are of the float data type.

Note

The neural network must first be set to evaluation mode, as this influences the behavior of some components such as the batch normalization layers.

with torch.no_grad():
    trainer.model.eval()
    
    scaled_predicted_params = trainer.model(scaled_noisy_curves.float(), scaled_bounds.float())
    
print(scaled_predicted_params.shape)
torch.Size([64, 8])

Now we need to restore the scaled predicted parameters to their unscaled (physical) values. Since the predicted parameters are scaled with respect to the input prior bounds, these are also required for the rescaling. We can concatenate the scaled_predicted_params and scaled_bounds tensors along the last tensor axis and provide them as input to the restore_params method of the prior sampler object (which can be accessed as trainer.loader.prior_sampler), the output being an instance of the BasicParams class.

restored_predictions = trainer.loader.prior_sampler.restore_params(torch.cat([scaled_predicted_params, scaled_bounds], dim=-1))
print(restored_predictions)
BasicParams(batch_size=64, max_layer_num=2, device=cuda:0)

The physical predictions can then be accessed using the corresponding attribute for each parameter type.

pred_idx = 0

print(f'Predicted thicknesses: {restored_predictions.thicknesses[pred_idx]}')
print(f'Predicted roughnesses: {restored_predictions.roughnesses[pred_idx]}')
print(f'Predicted layer SLDs: {restored_predictions.slds[pred_idx]}')
Predicted thicknesses: tensor([267.9554, 386.5484], device='cuda:0', dtype=torch.float64)
Predicted roughnesses: tensor([30.8834, 17.2048, 17.0454], device='cuda:0', dtype=torch.float64)
Predicted layer SLDs: tensor([ 7.2722, 23.1493, 36.2304], device='cuda:0', dtype=torch.float64)

Based on the predictions, we can easily simulate the corresponding reflectivity curves by using the reflectivity method of the previously obtained BasicParams object, which takes the q values as input:

predicted_curves = restored_predictions.reflectivity(q)

We can observe the input reflectivity curves alongside the curves corresponding to the neural network prediction. Additionally, we can print the prediction for each parameter alongside its ground truth value and its prior bounds.
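The cell producing the output below is collapsed in the rendered page; a minimal sketch that reproduces it is given here (we assume the layer ordering of the BasicParams tensor attributes matches the printed labels, and we omit the prior-bound columns of the printed output for brevity):

true_params = simulated_data['params']

param_names = ([f'Thickness L{n_layers - i}' for i in range(n_layers)]
               + [f'Roughness L{n_layers - i}' for i in range(n_layers)] + ['Roughness sub']
               + [f'SLD L{n_layers - i}' for i in range(n_layers)] + ['SLD sub'])

def compare_and_plot(i=0):
    # concatenate the parameter tensors of the i-th sample in the batch
    true_vals = torch.cat([true_params.thicknesses[i], true_params.roughnesses[i], true_params.slds[i]])
    pred_vals = torch.cat([restored_predictions.thicknesses[i], restored_predictions.roughnesses[i], restored_predictions.slds[i]])
    for name, true_val, pred_val in zip(param_names, true_vals, pred_vals):
        print(f'{name.ljust(14)} --> True: {true_val:.2f} Predicted: {pred_val:.2f}')

    fig, ax = plt.subplots(1, 1, figsize=(6, 6))
    ax.set_yscale('log')
    ax.set_xlabel('q [$Å^{-1}$]', fontsize=20)
    ax.set_ylabel('R(q)', fontsize=20)
    ax.scatter(q[i].cpu().numpy(), unscaled_noisy_curves[i].cpu().numpy() + 1e-10, c='b', s=2, label='simulated curve')
    ax.plot(q[i].cpu().numpy(), predicted_curves[i].cpu().numpy() + 1e-10, c='r', lw=1, label='predicted curve')
    ax.legend(loc='upper right', fontsize=14)

compare_and_plot(i=0)
# interactive browsing of the batch: interact(compare_and_plot, i=(0, batch_size - 1))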

Thickness L2   --> True: 189.28 Predicted: 200.57  Input prior bounds: (126.42, 230.66)
Thickness L1   --> True: 436.63 Predicted: 439.41  Input prior bounds: (124.63, 483.08)
Roughness L2   --> True: 18.21 Predicted: 18.66  Input prior bounds: (5.05, 33.49)
Roughness L1   --> True: 13.98 Predicted: 12.88  Input prior bounds: (3.72, 58.31)
Roughness sub  --> True: 20.10 Predicted: 21.09  Input prior bounds: (2.49, 50.91)
SLD L2         --> True: 5.09 Predicted: 4.57  Input prior bounds: (0.74, 5.45)
SLD L1         --> True: 9.08 Predicted: 8.52  Input prior bounds: (7.92, 10.62)
SLD sub        --> True: 34.77 Predicted: 34.72  Input prior bounds: (32.72, 35.74)
[Figure: simulated curve and the reflectivity curve of the predicted parameters for this sample]
Thickness L2   --> True: 206.48 Predicted: 204.81  Input prior bounds: (54.91, 405.15)
Thickness L1   --> True: 77.36 Predicted: 77.76  Input prior bounds: (72.33, 98.73)
Roughness L2   --> True: 40.31 Predicted: 40.89  Input prior bounds: (16.38, 54.46)
Roughness L1   --> True: 26.28 Predicted: 25.56  Input prior bounds: (13.28, 31.51)
Roughness sub  --> True: 17.46 Predicted: 17.19  Input prior bounds: (9.69, 34.16)
SLD L2         --> True: 22.81 Predicted: 22.75  Input prior bounds: (20.52, 24.03)
SLD L1         --> True: 19.75 Predicted: 20.14  Input prior bounds: (19.57, 21.90)
SLD sub        --> True: 7.71 Predicted: 8.86  Input prior bounds: (7.11, 10.68)
[Figure: simulated curve and the reflectivity curve of the predicted parameters for this sample]
Thickness L2   --> True: 181.37 Predicted: 209.03  Input prior bounds: (12.05, 284.52)
Thickness L1   --> True: 347.88 Predicted: 348.13  Input prior bounds: (347.46, 348.64)
Roughness L2   --> True: 53.28 Predicted: 51.27  Input prior bounds: (42.37, 58.23)
Roughness L1   --> True: 4.53 Predicted: 4.20  Input prior bounds: (2.63, 9.84)
Roughness sub  --> True: 21.87 Predicted: 21.86  Input prior bounds: (17.86, 22.75)
SLD L2         --> True: 22.55 Predicted: 22.40  Input prior bounds: (21.77, 26.07)
SLD L1         --> True: 44.41 Predicted: 44.66  Input prior bounds: (44.14, 44.89)
SLD sub        --> True: 12.72 Predicted: 13.99  Input prior bounds: (12.72, 15.42)
[Figure: simulated curve and the reflectivity curve of the predicted parameters for this sample]