API

coeff_to_real Convert the coefficients to real space
generate_checkerboard(size[, square_shape]) Generate a 2-phase checkerboard microstructure
generate_delta Generate a delta microstructure
generate_multiphase Construct microstructures for an arbitrary number of phases given the size of the domain and the relative grain size
plot_microstructures(*arrs[, titles, cmap, …]) Plot a set of microstructures
solve_cahn_hilliard(x_data[, n_steps, …]) Generate response for Cahn-Hilliard.
solve_fe Solve the elasticity problem
FlattenTransformer Reshape data ready for a PCA.
LegendreTransformer([n_state, min_, max_, …]) Legendre transformer for Sklearn pipelines
LocalizationRegressor([redundancy_func]) Perform the localization in Sklearn pipelines
PrimitiveTransformer([n_state, min_, max_, …]) Primitive transformer for Sklearn pipelines
ReshapeTransformer(shape) Reshape data ready for the LocalizationRegressor
TwoPointCorrelation([periodic_boundary, …]) Calculate the 2-point stats for two arrays
pymks.coeff_to_real()

Convert the coefficients to real space

Parameters:
  • coeff – the coefficients in Fourier space
  • new_shape – the shape of the coefficients in real space
Returns:

the coefficients in real space
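Conceptually, this conversion is an inverse discrete Fourier transform over the spatial axes. A minimal NumPy sketch of the idea (not the pymks implementation, which operates on dask arrays):

```python
import numpy as np

# Sketch of the idea behind coeff_to_real: influence coefficients stored
# in Fourier space come back to real space via an inverse FFT over the
# spatial axes (here axes 0 and 1).
coeff_fourier = np.fft.fftn(np.arange(16.0).reshape(4, 4), axes=(0, 1))
coeff_real = np.fft.ifftn(coeff_fourier, axes=(0, 1)).real

# The round trip recovers the original real-space coefficients
print(np.allclose(coeff_real, np.arange(16.0).reshape(4, 4)))
```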

pymks.generate_checkerboard(size, square_shape=(1, ))

Generate a 2-phase checkerboard microstructure

Parameters:
  • size – the size of the domain
  • square_shape – the shape of each subdomain
Returns:

a microstructure of shape (1,) + size

>>> print(generate_checkerboard((4,)).compute())
[[0 1 0 1]]
>>> print(generate_checkerboard((3, 3)).compute())
[[[0 1 0]
  [1 0 1]
  [0 1 0]]]
>>> print(generate_checkerboard((3, 3), (2,)).compute())
[[[0 0 1]
  [0 0 1]
  [1 1 0]]]
>>> print(generate_checkerboard((5, 8), (2, 3)).compute())
[[[0 0 0 1 1 1 0 0]
  [0 0 0 1 1 1 0 0]
  [1 1 1 0 0 0 1 1]
  [1 1 1 0 0 0 1 1]
  [0 0 0 1 1 1 0 0]]]
pymks.generate_delta()

Generate a delta microstructure

Parameters:
  • n_phases – number of phases
  • shape – the shape of the microstructure
  • chunks – how to chunk the sample axis
Returns:

a dask array of delta microstructures

>>> a = generate_delta(5, (3, 4), chunks=(5,))
>>> a.shape
(20, 3, 4)
>>> a.chunks
((5, 5, 5, 5), (3,), (4,))
pymks.generate_multiphase()

Constructs microstructures for an arbitrary number of phases given the size of the domain and the relative grain size.

Parameters:
  • shape (tuple) – (n_sample, n_x, n_y, n_z)
  • grain_size (tuple) – the size of the grains in the microstructure
  • volume_fraction (tuple) – the volume fraction for each phase
  • chunks (int) – the chunk size along the sample axis
  • percent_variance (float) – the percent variance for each value of volume_fraction
Returns:

A dask array of random multiphase microstructures for the system, with the shape given by shape.

Example:

>>> x_tru = np.array([[[0, 0, 0],
...                    [0, 1, 0],
...                    [1, 1, 1]]])
>>> da.random.seed(10)
>>> x = generate_multiphase(
...     shape=(1, 3, 3),
...     grain_size=(1, 1),
...     volume_fraction=(0.5, 0.5)
... )
>>> print(x.shape)
(1, 3, 3)
>>> print(x.chunks)
((1,), (3,), (3,))
>>> assert np.allclose(x, x_tru)
pymks.plot_microstructures(*arrs, titles=(), cmap=None, colorbar=True)

Plot a set of microstructures

Parameters:
  • *arrs – any number of 2D arrays to plot
  • titles – a sequence of titles, one for each array
  • cmap – any matplotlib colormap
  • colorbar – whether to display a colorbar
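For reference, the same panels can be produced with plain matplotlib; show_microstructures below is an illustrative stand-in, not part of pymks:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this runs headless
import matplotlib.pyplot as plt

def show_microstructures(*arrs, titles=(), cmap=None, colorbar=True):
    # One imshow panel per 2D array, mirroring plot_microstructures
    fig, axes = plt.subplots(1, len(arrs), squeeze=False)
    for i, arr in enumerate(arrs):
        image = axes[0, i].imshow(arr, cmap=cmap)
        if i < len(titles):
            axes[0, i].set_title(titles[i])
        if colorbar:
            fig.colorbar(image, ax=axes[0, i])
    return fig

fig = show_microstructures(np.zeros((4, 4)), np.eye(4), titles=("empty", "diagonal"))
```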
pymks.solve_cahn_hilliard(x_data, n_steps=1, delta_x=0.25, delta_t=0.001, gamma=1.0)

Generate response for Cahn-Hilliard.

Parameters:
  • x_data – dask array chunked along the sample axis
  • n_steps – number of time steps used
  • delta_x – the grid spacing
  • delta_t – the time step size
  • gamma – Cahn-Hilliard parameter
>>> import dask.array as da
>>> x_data = 2 * da.random.random((1, 6, 6), chunks=(1, 6, 6)) - 1
>>> y_data = solve_cahn_hilliard(x_data)
>>> y_data.chunks
((1,), (6,), (6,))
pymks.solve_fe()

Solve the elasticity problem

Parameters:
  • x_data – microstructure with shape (n_samples, n_x, …)
  • elastic_modulus – the elastic modulus in each phase
  • poissons_ratio – the Poisson's ratio for each phase
  • macro_strain – the macro strain
  • delta_x – the grid spacing
Returns:

a dictionary of strain, displacement and stress fields, where stress and strain have shape (n_samples, n_x, …, 3) and displacement has shape (n_samples, n_x + 1, …, 2)

class pymks.FlattenTransformer

Reshape data ready for a PCA.

Two-point correlation data needs to be flattened before performing PCA. This class flattens the two-point correlation data for use in a Sklearn pipeline.

>>> data = np.arange(50).reshape((2, 5, 5))
>>> FlattenTransformer().transform(data).shape
(2, 25)
fit(*_)

Only necessary to make pipelines work

static transform(x_data)

Transform the X data

Parameters:x_data – the data to be transformed
class pymks.LegendreTransformer(n_state=2, min_=0.0, max_=1.0, chunks=None)

Legendre transformer for Sklearn pipelines

>>> from toolz import pipe
>>> data = da.from_array(np.array([[0, 0.5, 1]]), chunks=(1, 3))
>>> pipe(
...     LegendreTransformer(),
...     lambda x: x.fit(None, None),
...     lambda x: x.transform(data).compute(),
... )
array([[[ 0.5, -1.5],
        [ 0.5,  0. ],
        [ 0.5,  1.5]]])

Instantiate a LegendreTransformer

Parameters:
  • n_state – the number of local states
  • min_ – the minimum local state
  • max_ – the maximum local state
  • chunks – chunks size for state axis
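The doctest output above can be reproduced with plain NumPy: each state column n holds (2n + 1)/2 · P_n(x) after rescaling the data onto the Legendre domain [-1, 1]. A sketch (legendre_states is an illustrative helper, not part of pymks):

```python
import numpy as np
from numpy.polynomial.legendre import legval

def legendre_states(data, n_state=2, min_=0.0, max_=1.0):
    # Rescale the data to [-1, 1], the natural Legendre domain
    x = 2.0 * (data - min_) / (max_ - min_) - 1.0
    # Column n holds (2n + 1) / 2 * P_n(x)
    return np.stack(
        [(2 * n + 1) / 2.0 * legval(x, [0.0] * n + [1.0]) for n in range(n_state)],
        axis=-1,
    )

print(legendre_states(np.array([[0.0, 0.5, 1.0]])))
# [[[ 0.5 -1.5]
#   [ 0.5  0. ]
#   [ 0.5  1.5]]]
```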
class pymks.LocalizationRegressor(redundancy_func=<function LocalizationRegressor.<lambda>>)

Perform the localization in Sklearn pipelines

Allows the localization to be part of a Sklearn pipeline

>>> make_data = lambda s, c: da.from_array(
...     np.arange(np.prod(s),
...               dtype=float).reshape(s),
...     chunks=c
... )
>>> X = make_data((6, 4, 4, 3), (2, 4, 4, 1))
>>> y = make_data((6, 4, 4), (2, 4, 4))
>>> y_out = LocalizationRegressor().fit(X, y).predict(X)
>>> assert np.allclose(y, y_out)
>>> print(
...     pipe(
...         LocalizationRegressor(),
...         lambda x: x.fit(X, y.reshape(6, 16)).predict(X).shape
...     )
... )
(6, 16)

Instantiate a LocalizationRegressor

Parameters:redundancy_func – function to remove redundant elements from the coefficient matrix
coeff_resize(shape)

Generate new model with larger coefficients

Parameters:shape – the shape of the new coefficients
Returns:a new model with larger influence coefficients
fit(x_data, y_data)

Fit the data

Parameters:
  • x_data – the X data to fit
  • y_data – the y data to fit
Returns:

the fitted LocalizationRegressor

predict(x_data)

Predict the data

Parameters:x_data – the X data to predict
Returns:The predicted y data
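The calibration behind this regressor can be sketched with plain NumPy under the usual MKS assumptions: fit influence coefficients frequency-by-frequency with least squares, then predict by a product in Fourier space. fit_coeff and predict_mks below are illustrative names, not the pymks API:

```python
import numpy as np

def fit_coeff(x_states, y):
    # x_states: (n_sample, n_x, n_state), y: (n_sample, n_x)
    # Solve an independent least-squares problem at each frequency k
    fx = np.fft.fft(x_states, axis=1)
    fy = np.fft.fft(y, axis=1)
    coeff = np.empty(x_states.shape[1:], dtype=complex)
    for k in range(x_states.shape[1]):
        coeff[k] = np.linalg.lstsq(fx[:, k, :], fy[:, k], rcond=None)[0]
    return coeff

def predict_mks(x_states, coeff):
    # Convolution with the influence coefficients, done in Fourier space
    fx = np.fft.fft(x_states, axis=1)
    return np.fft.ifft(np.einsum("skh,kh->sk", fx, coeff), axis=1).real

# Self-consistency check: data generated by the model is recovered
rng = np.random.default_rng(3)
x = rng.random((6, 8, 2))
true_coeff = np.fft.fft(rng.random((8, 2)), axis=0)
y = predict_mks(x, true_coeff)
assert np.allclose(predict_mks(x, fit_coeff(x, y)), y)
```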
class pymks.PrimitiveTransformer(n_state=2, min_=0.0, max_=1.0, chunks=None)

Primitive transformer for Sklearn pipelines

>>> from toolz import pipe
>>> assert pipe(
...     PrimitiveTransformer(),
...     lambda x: x.fit(None, None),
...     lambda x: x.transform(np.array([[0, 0.5, 1]])).compute(),
...     lambda x: np.allclose(x,
...         [[[1. , 0. ],
...           [0.5, 0.5],
...           [0. , 1. ]]])
... )

Instantiate a PrimitiveTransformer

Parameters:
  • n_state – the number of local states
  • min_ – the minimum local state
  • max_ – the maximum local state
  • chunks – chunks size for state axis
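The primitive (hat) basis in the doctest above can be reproduced with NumPy: each value is linearly interpolated onto evenly spaced local states. primitive_states is an illustrative helper, not part of pymks:

```python
import numpy as np

def primitive_states(data, n_state=2, min_=0.0, max_=1.0):
    # Evenly spaced local states where each hat function peaks
    states = np.linspace(min_, max_, n_state)
    dh = (max_ - min_) / (n_state - 1)
    # Triangular (hat) weights: each value splits between its two
    # neighboring states, and the weights sum to one
    return np.maximum(1.0 - np.abs(data[..., None] - states) / dh, 0.0)

print(primitive_states(np.array([[0.0, 0.5, 1.0]])))
# values: [[1, 0], [0.5, 0.5], [0, 1]]
```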
class pymks.ReshapeTransformer(shape)

Reshape data ready for the LocalizationRegressor

Sklearn likes flat image data, but MKS expects shaped data. This class transforms the shape of flat data into shaped image data for MKS.

>>> data = np.arange(18).reshape((2, 9))
>>> ReshapeTransformer((None, 3, 3)).fit(None, None).transform(data).shape
(2, 3, 3)

Instantiate a ReshapeTransformer

Parameters:shape – the shape of the reshaped data (ignoring the first axis)
fit(*_)

Only necessary to make pipelines work

transform(x_data)

Transform the X data

Parameters:x_data – the data to be transformed
class pymks.TwoPointCorrelation(periodic_boundary=True, cutoff=None, correlations=None)

Calculate the 2-point stats for two arrays

Instantiate a TwoPointCorrelation

Parameters:
  • periodic_boundary – whether the boundary conditions are periodic
  • cutoff – cutoff radius of interest for the 2PtStatistics field
  • correlations – the correlation pairs to compute, given as pairs of phase indices, e.g. [(0, 0), (0, 1)]
fit(*_)

Only necessary to make pipelines work

transform(data)

Transform the data

Parameters:data – the data to be transformed
Returns:the 2-point stats array
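For periodic boundaries, the 2-point statistics can be sketched with the FFT convolution theorem; two_point_stats below is an illustrative function, not the pymks implementation:

```python
import numpy as np

def two_point_stats(x1, x2):
    # Periodic cross-correlation <m1(s) m2(s + r)> via the convolution
    # theorem, with the zero vector r = 0 shifted to the array center
    f1, f2 = np.fft.fftn(x1), np.fft.fftn(x2)
    return np.fft.fftshift(np.fft.ifftn(np.conj(f1) * f2).real / x1.size)

# Autocorrelation of phase 1 in a 2x2 checkerboard: at r = 0 the value
# is the probability of finding phase 1, i.e. its volume fraction
checker = np.array([[0, 1], [1, 0]])
stats = two_point_stats(checker, checker)
print(stats[1, 1])  # value at r = 0
```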