repo_name (stringlengths 6-77) | path (stringlengths 8-215) | license (stringclasses, 15 values) | content (stringlengths 335-154k)
---|---|---|---|
asydorchuk/ml | classes/cs231n/assignment2/ConvolutionalNetworks.ipynb | mit | # As usual, a bit of setup
import numpy as np
import matplotlib.pyplot as plt
from cs231n.classifiers.cnn import *
from cs231n.data_utils import get_CIFAR10_data
from cs231n.gradient_check import eval_numerical_gradient_array, eval_numerical_gradient
from cs231n.layers import *
from cs231n.fast_layers import *
from cs231n.solver import Solver
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
""" returns relative error """
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
# Load the (preprocessed) CIFAR10 data.
data = get_CIFAR10_data()
for k, v in data.iteritems():
print '%s: ' % k, v.shape
"""
Explanation: Convolutional Networks
So far we have worked with deep fully-connected networks, using them to explore different optimization strategies and network architectures. Fully-connected networks are a good testbed for experimentation because they are very computationally efficient, but in practice all state-of-the-art results use convolutional networks instead.
First you will implement several layer types that are used in convolutional networks. You will then use these layers to train a convolutional network on the CIFAR-10 dataset.
End of explanation
"""
x_shape = (2, 3, 4, 4)
w_shape = (3, 3, 4, 4)
x = np.linspace(-0.1, 0.5, num=np.prod(x_shape)).reshape(x_shape)
w = np.linspace(-0.2, 0.3, num=np.prod(w_shape)).reshape(w_shape)
b = np.linspace(-0.1, 0.2, num=3)
conv_param = {'stride': 2, 'pad': 1}
out, _ = conv_forward_naive(x, w, b, conv_param)
correct_out = np.array([[[[[-0.08759809, -0.10987781],
[-0.18387192, -0.2109216 ]],
[[ 0.21027089, 0.21661097],
[ 0.22847626, 0.23004637]],
[[ 0.50813986, 0.54309974],
[ 0.64082444, 0.67101435]]],
[[[-0.98053589, -1.03143541],
[-1.19128892, -1.24695841]],
[[ 0.69108355, 0.66880383],
[ 0.59480972, 0.56776003]],
[[ 2.36270298, 2.36904306],
[ 2.38090835, 2.38247847]]]]])
# Compare your output to ours; difference should be around 1e-8
print 'Testing conv_forward_naive'
print 'difference: ', rel_error(out, correct_out)
"""
Explanation: Convolution: Naive forward pass
The core of a convolutional network is the convolution operation. In the file cs231n/layers.py, implement the forward pass for the convolution layer in the function conv_forward_naive.
You don't have to worry too much about efficiency at this point; just write the code in whatever way you find most clear.
You can test your implementation by running the following:
End of explanation
"""
from scipy.misc import imread, imresize
kitten, puppy = imread('kitten.jpg'), imread('puppy.jpg')
# kitten is wide, and puppy is already square
d = kitten.shape[1] - kitten.shape[0]
kitten_cropped = kitten[:, d/2:-d/2, :]
img_size = 200 # Make this smaller if it runs too slow
x = np.zeros((2, 3, img_size, img_size))
x[0, :, :, :] = imresize(puppy, (img_size, img_size)).transpose((2, 0, 1))
x[1, :, :, :] = imresize(kitten_cropped, (img_size, img_size)).transpose((2, 0, 1))
# Set up a convolutional weights holding 2 filters, each 3x3
w = np.zeros((2, 3, 3, 3))
# The first filter converts the image to grayscale.
# Set up the red, green, and blue channels of the filter.
w[0, 0, :, :] = [[0, 0, 0], [0, 0.3, 0], [0, 0, 0]]
w[0, 1, :, :] = [[0, 0, 0], [0, 0.6, 0], [0, 0, 0]]
w[0, 2, :, :] = [[0, 0, 0], [0, 0.1, 0], [0, 0, 0]]
# Second filter detects horizontal edges in the blue channel.
w[1, 2, :, :] = [[1, 2, 1], [0, 0, 0], [-1, -2, -1]]
# Vector of biases. We don't need any bias for the grayscale
# filter, but for the edge detection filter we want to add 128
# to each output so that nothing is negative.
b = np.array([0, 128])
# Compute the result of convolving each input in x with each filter in w,
# offsetting by b, and storing the results in out.
out, _ = conv_forward_naive(x, w, b, {'stride': 1, 'pad': 1})
def imshow_noax(img, normalize=True):
""" Tiny helper to show images as uint8 and remove axis labels """
if normalize:
img_max, img_min = np.max(img), np.min(img)
img = 255.0 * (img - img_min) / (img_max - img_min)
plt.imshow(img.astype('uint8'))
plt.gca().axis('off')
# Show the original images and the results of the conv operation
plt.subplot(2, 3, 1)
imshow_noax(puppy, normalize=False)
plt.title('Original image')
plt.subplot(2, 3, 2)
imshow_noax(out[0, 0])
plt.title('Grayscale')
plt.subplot(2, 3, 3)
imshow_noax(out[0, 1])
plt.title('Edges')
plt.subplot(2, 3, 4)
imshow_noax(kitten_cropped, normalize=False)
plt.subplot(2, 3, 5)
imshow_noax(out[1, 0])
plt.subplot(2, 3, 6)
imshow_noax(out[1, 1])
plt.show()
"""
Explanation: Aside: Image processing via convolutions
As a fun way to both check your implementation and gain a better understanding of the type of operation that convolutional layers can perform, we will set up an input containing two images and manually set up filters that perform common image processing operations (grayscale conversion and edge detection). The convolution forward pass will apply these operations to each of the input images. We can then visualize the results as a sanity check.
End of explanation
"""
x = np.random.randn(4, 3, 5, 5)
w = np.random.randn(2, 3, 3, 3)
b = np.random.randn(2,)
dout = np.random.randn(4, 2, 5, 5)
conv_param = {'stride': 1, 'pad': 1}
dx_num = eval_numerical_gradient_array(lambda x: conv_forward_naive(x, w, b, conv_param)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: conv_forward_naive(x, w, b, conv_param)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: conv_forward_naive(x, w, b, conv_param)[0], b, dout)
out, cache = conv_forward_naive(x, w, b, conv_param)
dx, dw, db = conv_backward_naive(dout, cache)
# Your errors should be around 1e-9
print 'Testing conv_backward_naive function'
print 'dx error: ', rel_error(dx, dx_num)
print 'dw error: ', rel_error(dw, dw_num)
print 'db error: ', rel_error(db, db_num)
"""
Explanation: Convolution: Naive backward pass
Implement the backward pass for the convolution operation in the function conv_backward_naive in the file cs231n/layers.py. Again, you don't need to worry too much about computational efficiency.
When you are done, run the following to check your backward pass with a numeric gradient check.
End of explanation
"""
x_shape = (2, 3, 4, 4)
x = np.linspace(-0.3, 0.4, num=np.prod(x_shape)).reshape(x_shape)
pool_param = {'pool_width': 2, 'pool_height': 2, 'stride': 2}
out, _ = max_pool_forward_naive(x, pool_param)
correct_out = np.array([[[[-0.26315789, -0.24842105],
[-0.20421053, -0.18947368]],
[[-0.14526316, -0.13052632],
[-0.08631579, -0.07157895]],
[[-0.02736842, -0.01263158],
[ 0.03157895, 0.04631579]]],
[[[ 0.09052632, 0.10526316],
[ 0.14947368, 0.16421053]],
[[ 0.20842105, 0.22315789],
[ 0.26736842, 0.28210526]],
[[ 0.32631579, 0.34105263],
[ 0.38526316, 0.4 ]]]])
# Compare your output with ours. Difference should be around 1e-8.
print 'Testing max_pool_forward_naive function:'
print 'difference: ', rel_error(out, correct_out)
"""
Explanation: Max pooling: Naive forward
Implement the forward pass for the max-pooling operation in the function max_pool_forward_naive in the file cs231n/layers.py. Again, don't worry too much about computational efficiency.
Check your implementation by running the following:
End of explanation
"""
x = np.random.randn(3, 2, 8, 8)
dout = np.random.randn(3, 2, 4, 4)
pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2}
dx_num = eval_numerical_gradient_array(lambda x: max_pool_forward_naive(x, pool_param)[0], x, dout)
out, cache = max_pool_forward_naive(x, pool_param)
dx = max_pool_backward_naive(dout, cache)
# Your error should be around 1e-12
print 'Testing max_pool_backward_naive function:'
print 'dx error: ', rel_error(dx, dx_num)
"""
Explanation: Max pooling: Naive backward
Implement the backward pass for the max-pooling operation in the function max_pool_backward_naive in the file cs231n/layers.py. You don't need to worry about computational efficiency.
Check your implementation with numeric gradient checking by running the following:
End of explanation
"""
from cs231n.fast_layers import conv_forward_fast, conv_backward_fast
from time import time
x = np.random.randn(100, 3, 31, 31)
w = np.random.randn(25, 3, 3, 3)
b = np.random.randn(25,)
dout = np.random.randn(100, 25, 16, 16)
conv_param = {'stride': 2, 'pad': 1}
t0 = time()
out_naive, cache_naive = conv_forward_naive(x, w, b, conv_param)
t1 = time()
out_fast, cache_fast = conv_forward_fast(x, w, b, conv_param)
t2 = time()
print 'Testing conv_forward_fast:'
print 'Naive: %fs' % (t1 - t0)
print 'Fast: %fs' % (t2 - t1)
print 'Speedup: %fx' % ((t1 - t0) / (t2 - t1))
print 'Difference: ', rel_error(out_naive, out_fast)
t0 = time()
dx_naive, dw_naive, db_naive = conv_backward_naive(dout, cache_naive)
t1 = time()
dx_fast, dw_fast, db_fast = conv_backward_fast(dout, cache_fast)
t2 = time()
print '\nTesting conv_backward_fast:'
print 'Naive: %fs' % (t1 - t0)
print 'Fast: %fs' % (t2 - t1)
print 'Speedup: %fx' % ((t1 - t0) / (t2 - t1))
print 'dx difference: ', rel_error(dx_naive, dx_fast)
print 'dw difference: ', rel_error(dw_naive, dw_fast)
print 'db difference: ', rel_error(db_naive, db_fast)
from cs231n.fast_layers import max_pool_forward_fast, max_pool_backward_fast
x = np.random.randn(100, 3, 32, 32)
dout = np.random.randn(100, 3, 16, 16)
pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2}
t0 = time()
out_naive, cache_naive = max_pool_forward_naive(x, pool_param)
t1 = time()
out_fast, cache_fast = max_pool_forward_fast(x, pool_param)
t2 = time()
print 'Testing pool_forward_fast:'
print 'Naive: %fs' % (t1 - t0)
print 'fast: %fs' % (t2 - t1)
print 'speedup: %fx' % ((t1 - t0) / (t2 - t1))
print 'difference: ', rel_error(out_naive, out_fast)
t0 = time()
dx_naive = max_pool_backward_naive(dout, cache_naive)
t1 = time()
dx_fast = max_pool_backward_fast(dout, cache_fast)
t2 = time()
print '\nTesting pool_backward_fast:'
print 'Naive: %fs' % (t1 - t0)
print 'speedup: %fx' % ((t1 - t0) / (t2 - t1))
print 'dx difference: ', rel_error(dx_naive, dx_fast)
"""
Explanation: Fast layers
Making convolution and pooling layers fast can be challenging. To spare you the pain, we've provided fast implementations of the forward and backward passes for convolution and pooling layers in the file cs231n/fast_layers.py.
The fast convolution implementation depends on a Cython extension; to compile it you need to run the following from the cs231n directory:
bash
python setup.py build_ext --inplace
The API for the fast versions of the convolution and pooling layers is exactly the same as the naive versions that you implemented above: the forward pass receives data, weights, and parameters and produces outputs and a cache object; the backward pass receives upstream derivatives and the cache object and produces gradients with respect to the data and weights.
NOTE: The fast implementation for pooling will only perform optimally if the pooling regions are non-overlapping and tile the input. If these conditions are not met then the fast pooling implementation will not be much faster than the naive implementation.
You can compare the performance of the naive and fast versions of these layers by running the following:
End of explanation
"""
from cs231n.layer_utils import conv_relu_pool_forward, conv_relu_pool_backward
x = np.random.randn(2, 3, 16, 16)
w = np.random.randn(3, 3, 3, 3)
b = np.random.randn(3,)
dout = np.random.randn(2, 3, 8, 8)
conv_param = {'stride': 1, 'pad': 1}
pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2}
out, cache = conv_relu_pool_forward(x, w, b, conv_param, pool_param)
dx, dw, db = conv_relu_pool_backward(dout, cache)
dx_num = eval_numerical_gradient_array(lambda x: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], b, dout)
print 'Testing conv_relu_pool'
print 'dx error: ', rel_error(dx_num, dx)
print 'dw error: ', rel_error(dw_num, dw)
print 'db error: ', rel_error(db_num, db)
from cs231n.layer_utils import conv_relu_forward, conv_relu_backward
x = np.random.randn(2, 3, 8, 8)
w = np.random.randn(3, 3, 3, 3)
b = np.random.randn(3,)
dout = np.random.randn(2, 3, 8, 8)
conv_param = {'stride': 1, 'pad': 1}
out, cache = conv_relu_forward(x, w, b, conv_param)
dx, dw, db = conv_relu_backward(dout, cache)
dx_num = eval_numerical_gradient_array(lambda x: conv_relu_forward(x, w, b, conv_param)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: conv_relu_forward(x, w, b, conv_param)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: conv_relu_forward(x, w, b, conv_param)[0], b, dout)
print 'Testing conv_relu:'
print 'dx error: ', rel_error(dx_num, dx)
print 'dw error: ', rel_error(dw_num, dw)
print 'db error: ', rel_error(db_num, db)
"""
Explanation: Convolutional "sandwich" layers
Previously we introduced the concept of "sandwich" layers that combine multiple operations into commonly used patterns. In the file cs231n/layer_utils.py you will find sandwich layers that implement a few commonly used patterns for convolutional networks.
End of explanation
"""
model = ThreeLayerConvNet()
N = 50
X = np.random.randn(N, 3, 32, 32)
y = np.random.randint(10, size=N)
loss, grads = model.loss(X, y)
print 'Initial loss (no regularization): ', loss
model.reg = 0.5
loss, grads = model.loss(X, y)
print 'Initial loss (with regularization): ', loss
"""
Explanation: Three-layer ConvNet
Now that you have implemented all the necessary layers, we can put them together into a simple convolutional network.
Open the file cs231n/cnn.py and complete the implementation of the ThreeLayerConvNet class. Run the following cells to help you debug:
Sanity check loss
After you build a new network, one of the first things you should do is sanity check the loss. When we use the softmax loss, we expect the loss for random weights (and no regularization) to be about log(C) for C classes. When we add regularization this should go up.
End of explanation
"""
num_inputs = 2
input_dim = (3, 16, 16)
reg = 0.0
num_classes = 10
X = np.random.randn(num_inputs, *input_dim)
y = np.random.randint(num_classes, size=num_inputs)
model = ThreeLayerConvNet(num_filters=3, filter_size=3,
input_dim=input_dim, hidden_dim=7,
dtype=np.float64)
loss, grads = model.loss(X, y)
for param_name in sorted(grads):
f = lambda _: model.loss(X, y)[0]
param_grad_num = eval_numerical_gradient(f, model.params[param_name], verbose=False, h=1e-6)
e = rel_error(param_grad_num, grads[param_name])
print '%s max relative error: %e' % (param_name, rel_error(param_grad_num, grads[param_name]))
"""
Explanation: Gradient check
After the loss looks reasonable, use numeric gradient checking to make sure that your backward pass is correct. When you use numeric gradient checking you should use a small amount of artificial data and a small number of neurons at each layer.
End of explanation
"""
np.random.seed(27)
num_train = 100
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
model = ThreeLayerConvNet(weight_scale=1e-2)
solver = Solver(model, small_data,
num_epochs=10, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=True, print_every=1)
solver.train()
"""
Explanation: Overfit small data
A nice trick is to train your model with just a few training samples. You should be able to overfit small datasets, which will result in very high training accuracy and comparatively low validation accuracy.
End of explanation
"""
plt.subplot(2, 1, 1)
plt.plot(solver.loss_history, 'o')
plt.xlabel('iteration')
plt.ylabel('loss')
plt.subplot(2, 1, 2)
plt.plot(solver.train_acc_history, '-o')
plt.plot(solver.val_acc_history, '-o')
plt.legend(['train', 'val'], loc='upper left')
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.show()
"""
Explanation: Plotting the loss, training accuracy, and validation accuracy should show clear overfitting:
End of explanation
"""
np.random.seed(27)
model = ThreeLayerConvNet(weight_scale=0.001, hidden_dim=500, reg=0.001)
solver = Solver(model, data,
num_epochs=1, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=True, print_every=20)
solver.train()
"""
Explanation: Train the net
By training the three-layer convolutional network for one epoch, you should achieve greater than 40% accuracy on the training set:
End of explanation
"""
from cs231n.vis_utils import visualize_grid
grid = visualize_grid(model.params['W1'].transpose(0, 2, 3, 1))
plt.imshow(grid.astype('uint8'))
plt.axis('off')
plt.gcf().set_size_inches(5, 5)
plt.show()
"""
Explanation: Visualize Filters
You can visualize the first-layer convolutional filters from the trained network by running the following:
End of explanation
"""
# Check the training-time forward pass by checking means and variances
# of features both before and after spatial batch normalization
N, C, H, W = 2, 3, 4, 5
x = 4 * np.random.randn(N, C, H, W) + 10
print 'Before spatial batch normalization:'
print ' Shape: ', x.shape
print ' Means: ', x.mean(axis=(0, 2, 3))
print ' Stds: ', x.std(axis=(0, 2, 3))
# Means should be close to zero and stds close to one
gamma, beta = np.ones(C), np.zeros(C)
bn_param = {'mode': 'train'}
out, _ = spatial_batchnorm_forward(x, gamma, beta, bn_param)
print 'After spatial batch normalization:'
print ' Shape: ', out.shape
print ' Means: ', out.mean(axis=(0, 2, 3))
print ' Stds: ', out.std(axis=(0, 2, 3))
# Means should be close to beta and stds close to gamma
gamma, beta = np.asarray([3, 4, 5]), np.asarray([6, 7, 8])
out, _ = spatial_batchnorm_forward(x, gamma, beta, bn_param)
print 'After spatial batch normalization (nontrivial gamma, beta):'
print ' Shape: ', out.shape
print ' Means: ', out.mean(axis=(0, 2, 3))
print ' Stds: ', out.std(axis=(0, 2, 3))
# Check the test-time forward pass by running the training-time
# forward pass many times to warm up the running averages, and then
# checking the means and variances of activations after a test-time
# forward pass.
N, C, H, W = 10, 4, 11, 12
bn_param = {'mode': 'train'}
gamma = np.ones(C)
beta = np.zeros(C)
for t in xrange(50):
x = 2.3 * np.random.randn(N, C, H, W) + 13
spatial_batchnorm_forward(x, gamma, beta, bn_param)
bn_param['mode'] = 'test'
x = 2.3 * np.random.randn(N, C, H, W) + 13
a_norm, _ = spatial_batchnorm_forward(x, gamma, beta, bn_param)
# Means should be close to zero and stds close to one, but will be
# noisier than training-time forward passes.
print 'After spatial batch normalization (test-time):'
print ' means: ', a_norm.mean(axis=(0, 2, 3))
print ' stds: ', a_norm.std(axis=(0, 2, 3))
"""
Explanation: Spatial Batch Normalization
We already saw that batch normalization is a very useful technique for training deep fully-connected networks. Batch normalization can also be used for convolutional networks, but we need to tweak it a bit; the modification will be called "spatial batch normalization."
Normally batch-normalization accepts inputs of shape (N, D) and produces outputs of shape (N, D), where we normalize across the minibatch dimension N. For data coming from convolutional layers, batch normalization needs to accept inputs of shape (N, C, H, W) and produce outputs of shape (N, C, H, W) where the N dimension gives the minibatch size and the (H, W) dimensions give the spatial size of the feature map.
If the feature map was produced using convolutions, then we expect the statistics of each feature channel to be relatively consistent both between different images and different locations within the same image. Therefore spatial batch normalization computes a mean and variance for each of the C feature channels by computing statistics over both the minibatch dimension N and the spatial dimensions H and W.
Spatial batch normalization: forward
In the file cs231n/layers.py, implement the forward pass for spatial batch normalization in the function spatial_batchnorm_forward. Check your implementation by running the following:
End of explanation
"""
N, C, H, W = 2, 3, 4, 5
x = 5 * np.random.randn(N, C, H, W) + 12
gamma = np.random.randn(C)
beta = np.random.randn(C)
dout = np.random.randn(N, C, H, W)
bn_param = {'mode': 'train'}
fx = lambda x: spatial_batchnorm_forward(x, gamma, beta, bn_param)[0]
fg = lambda a: spatial_batchnorm_forward(x, gamma, beta, bn_param)[0]
fb = lambda b: spatial_batchnorm_forward(x, gamma, beta, bn_param)[0]
dx_num = eval_numerical_gradient_array(fx, x, dout)
da_num = eval_numerical_gradient_array(fg, gamma, dout)
db_num = eval_numerical_gradient_array(fb, beta, dout)
_, cache = spatial_batchnorm_forward(x, gamma, beta, bn_param)
dx, dgamma, dbeta = spatial_batchnorm_backward(dout, cache)
print 'dx error: ', rel_error(dx_num, dx)
print 'dgamma error: ', rel_error(da_num, dgamma)
print 'dbeta error: ', rel_error(db_num, dbeta)
"""
Explanation: Spatial batch normalization: backward
In the file cs231n/layers.py, implement the backward pass for spatial batch normalization in the function spatial_batchnorm_backward. Run the following to check your implementation using a numeric gradient check:
End of explanation
"""
# Train a really good model on CIFAR-10
"""
Explanation: Experiment!
Experiment and try to get the best performance that you can on CIFAR-10 using a ConvNet. Here are some ideas to get you started:
Things you should try:
Filter size: Above we used 7x7; this makes pretty pictures but smaller filters may be more efficient
Number of filters: Above we used 32 filters. Do more or fewer do better?
Batch normalization: Try adding spatial batch normalization after convolution layers and vanilla batch normalization after affine layers. Do your networks train faster?
Network architecture: The network above has two layers of trainable parameters. Can you do better with a deeper network? You can implement alternative architectures in the file cs231n/classifiers/convnet.py. Some good architectures to try include:
[conv-relu-pool]xN - conv - relu - [affine]xM - [softmax or SVM]
[conv-relu-pool]xN - [affine]xM - [softmax or SVM]
[conv-relu-conv-relu-pool]xN - [affine]xM - [softmax or SVM]
Tips for training
For each network architecture that you try, you should tune the learning rate and regularization strength. When doing this there are a couple important things to keep in mind:
If the parameters are working well, you should see improvement within a few hundred iterations
Remember the coarse-to-fine approach for hyperparameter tuning: start by testing a large range of hyperparameters for just a few training iterations to find the combinations of parameters that are working at all.
Once you have found some sets of parameters that seem to work, search more finely around these parameters. You may need to train for more epochs.
Going above and beyond
If you are feeling adventurous there are many other features you can implement to try and improve your performance. You are not required to implement any of these; however they would be good things to try for extra credit.
Alternative update steps: For the assignment we implemented SGD+momentum, RMSprop, and Adam; you could try alternatives like AdaGrad or AdaDelta.
Alternative activation functions such as leaky ReLU, parametric ReLU, or MaxOut.
Model ensembles
Data augmentation
If you do decide to implement something extra, clearly describe it in the "Extra Credit Description" cell below.
What we expect
At the very least, you should be able to train a ConvNet that gets at least 65% accuracy on the validation set. This is just a lower bound - if you are careful it should be possible to get accuracies much higher than that! Extra credit points will be awarded for particularly high-scoring models or unique approaches.
You should use the space below to experiment and train your network. The final cell in this notebook should contain the training, validation, and test set accuracies for your final trained network. In this notebook you should also write an explanation of what you did, any additional features that you implemented, and any visualizations or graphs that you make in the process of training and evaluating your network.
Have fun and happy training!
End of explanation
"""
|
tpin3694/tpin3694.github.io | python/filter_dataframes.ipynb | mit | import pandas as pd
"""
Explanation: Title: Filter pandas Dataframes
Slug: filter_dataframes
Summary: Filter pandas Dataframes
Date: 2016-05-01 12:00
Category: Python
Tags: Data Wrangling
Authors: Chris Albon
Import modules
End of explanation
"""
data = {'name': ['Jason', 'Molly', 'Tina', 'Jake', 'Amy'],
'year': [2012, 2012, 2013, 2014, 2014],
'reports': [4, 24, 31, 2, 3],
'coverage': [25, 94, 57, 62, 70]}
df = pd.DataFrame(data, index = ['Cochice', 'Pima', 'Santa Cruz', 'Maricopa', 'Yuma'])
df
"""
Explanation: Create Dataframe
End of explanation
"""
df['name']
"""
Explanation: View Column
End of explanation
"""
df[['name', 'reports']]
"""
Explanation: View Two Columns
End of explanation
"""
df[:2]
"""
Explanation: View First Two Rows
End of explanation
"""
df[df['coverage'] > 50]
"""
Explanation: View Rows Where Coverage Is Greater Than 50
End of explanation
"""
df[(df['coverage'] > 50) & (df['reports'] < 4)]
"""
Explanation: View Rows Where Coverage Is Greater Than 50 And Reports Less Than 4
End of explanation
"""
|
mathnathan/notebooks | dissertation/Single Gaussian.ipynb | mit | a = -0.7
j_vals = []
kl_vals = []
mus = np.linspace(0,1,100)
for mu in mus:
j_vals.append(J(mu,p_sig,a)[0])
kl_vals.append(KL(mu,p_sig)[0])
fig = plt.figure(figsize=(15,5))
p_vals = p(mus)
plt.plot(mus, p_vals/p_vals.max(), label="$p(x)$")
#plt.plot(mus, j_vals/np.max(np.abs(j_vals)), label='$J$')
plt.plot(mus, j_vals, label='$J$')
plt.plot(mus, kl_vals/np.max(np.abs(kl_vals)), label='$KL$')
plt.title("Divergences with alpha = {}".format(a))
plt.xlabel('$\mu$')
plt.legend()
plt.show()
"""
Explanation: Divergences as a Function of $\mu_q$
Let us start by simply varying $\mu_q$ and seeing the result. We will hold $\sigma_q$ fixed to $\sigma_p$ and $\alpha = -0.7$, matching the value set in the code above.
End of explanation
"""
dj_vals = []
dkl_vals = []
mus = np.linspace(0,1.0,100)
for mu in mus:
dj_vals.append(dJ_dmu(mu,p_sig,a)[0])
dkl_vals.append(dKL_dmu(mu,p_sig)[0])
fig = plt.figure(figsize=(15,5))
p_vals = p(mus)
plt.plot(mus, p_vals/p_vals.max(), label="$p(x)$")
#plt.plot(mus, dj_vals/np.max(np.abs(dj_vals)), label='$\partial J/\partial \mu_q$')
plt.plot(mus, dj_vals, label='$\partial J/\partial \mu_q$')
plt.plot(mus, dkl_vals/np.max(np.abs(dkl_vals)), label='$\partial KL/\partial \mu_q$')
plt.title("Derivative of Divergences with alpha = {}".format(a))
plt.xlabel('$\mu$')
plt.legend()
plt.show()
"""
Explanation: Derivative of Divergences as a function of $\mu_q$
As before we will vary $\mu_q$ but this time we are evaluating the derivatives of the divergence. We will hold $\sigma_q$ fixed to $\sigma_p$ and $\alpha = -0.5$.
End of explanation
"""
a = -0.7
j_optims = []
j_maxErrs = []
kl_optims = []
kl_maxErrs = []
tot_mus_list = [1000,2000,3000,4000,5000]
for tot_mus in tot_mus_list:
print("Operating on {} mus...".format(tot_mus))
dj_vals = []
dj_errs = []
dkl_vals = []
dkl_errs = []
mus = np.linspace(0.4,0.6,tot_mus)
for mu in mus:
j_quad, j_err = dJ_dmu(mu,p_sig,a)
dj_vals.append(j_quad)
dj_errs.append(j_err)
kl_quad, kl_err = dKL_dmu(mu,p_sig)
dkl_vals.append(kl_quad)
dkl_errs.append(kl_err)
j_optims.append(mus[np.argmin(np.abs(dj_vals))])
j_maxErrs.append(np.max(dj_errs))
kl_optims.append(mus[np.argmin(np.abs(dkl_vals))])
kl_maxErrs.append(np.max(dkl_errs))
fig = plt.figure(figsize=(15,5))
ax1 = fig.add_subplot(121)
ax1.set_title("Error")
ax1.set_xlabel("Number of $\mu$s in Calcuation")
ax1.plot(tot_mus_list, j_maxErrs, label="max $J$-err")
ax1.plot(tot_mus_list, kl_maxErrs, label="max $KL$-err")
ax1.legend()
ax2 = fig.add_subplot(122)
ax2.set_title("$|\max{J}-\min{KL}|$")
ax2.set_xlabel("Number of $\mu$s in Calcuation")
ax2.plot(tot_mus_list, np.abs(np.array(j_optims)-np.array(kl_optims)), label="error")
ax2.legend()
"""
Explanation: Finding the Zeros of the Derivatives
This is a little complicated because every step of the optimization algorithm requires the calculation of a quadrature, so error propagation could be an issue. To build confidence in the results we will find the location of the extrema as we increase the resolution around them. If the value becomes more and more precise, then we might believe that it is converging.
End of explanation
"""
j_vals = []
kl_vals = []
alphas = np.linspace(-3,0.999,1000)
for a in alphas:
j_vals.append(J(p_mean,p_sig,a)[0])
kl_vals.append(KL(p_mean,p_sig)[0])
fig = plt.figure(figsize=(15,5))
plt.plot(alphas, j_vals, label='$J$')
plt.plot(alphas, kl_vals, label='$KL$')
plt.title("Divergences vs alpha")
plt.xlabel('alpha')
plt.legend()
plt.show()
"""
Explanation: Examining the Divergences as a function of $\alpha$
This time we will fix both the means $\mu_q = \mu_p$ and variances $\sigma_q = \sigma_p$.
End of explanation
"""
dj_vals = []
dkl_vals = []
alphas = np.linspace(-3,0.999,1000)
for a in alphas:
dj_vals.append(dJ_dmu(p_mean,p_sig,a)[0])
dkl_vals.append(dKL_dmu(p_mean,p_sig)[0])
fig = plt.figure(figsize=(15,5))
plt.plot(alphas, dj_vals, label='$\partial J/\partial \mu_q$')
plt.plot(alphas, dkl_vals, label='$\partial KL/\partial \mu_q$')
plt.title("Derivative of Divergences vs alpha")
plt.xlabel('alpha')
plt.legend()
plt.show()
"""
Explanation: Derivatives of Divergences vs alpha
This is interesting, but what we really care about is whether $\alpha$ changes the extrema in any way.
End of explanation
"""
from mpl_toolkits.mplot3d import axes3d
from matplotlib import cm
mu_min = 0.4
mu_max = 0.6
num_mus = 1000
mus = np.linspace(mu_min, mu_max, num_mus)
sig_min = 0.0001
sig_max = 0.01
num_sigs = 1000
sigmas = np.linspace(sig_min, sig_max, num_sigs)
mu,sigma = np.meshgrid(mus,sigmas)
vals = np.array((mu,sigma))
z = np.ndarray(mu.shape)
for i in range(len(mu[0])):
for j in range(len(sigma[0])):
m,s = vals[:,i,j]
z[i,j] = J(m,s,-0.7)[0]
fig = plt.figure(figsize=(16,12))
ax = fig.gca(projection='3d')
ax.plot_surface(mu, sigma, z, rstride=5, cstride=5, alpha=0.3)
cset = ax.contour(mu, sigma, z, zdir='z', offset=-0.1, cmap=cm.coolwarm)
cset = ax.contour(mu, sigma, z, zdir='x', offset=mu_min, cmap=cm.coolwarm)
cset = ax.contour(mu, sigma, z, zdir='y', offset=sig_max, cmap=cm.coolwarm)
ax.set_xlabel('$\mu$')
ax.set_xlim(mu_min, mu_max)
ax.set_ylabel('$\sigma$')
ax.set_ylim(sig_min, sig_max)
ax.set_zlabel('$f$')
ax.set_zlim(-0.1,100)
plt.show()
z[0,0]
len(np.where(np.isnan(z))[0])
"""
Explanation: Cost Surface for the Convolution
Let us now examine the full function $f(\mu, \Sigma)$ for our given toy problem. We can explore how it changes as we modify the convolution with and without the entropy term.
End of explanation
"""
|
martinjrobins/hobo | examples/interfaces/automatic-differentiation-using-autograd.ipynb | bsd-3-clause | import matplotlib.pyplot as plt
import numpy as np
import warnings
from timeit import repeat
import pints
import pints.toy as toy
import autograd.numpy as np
from autograd.scipy.integrate import odeint
from autograd.builtins import tuple
from autograd import grad
"""
Explanation: Using autograd to calculate the gradient of a log-likelihood
It is straightforward to use the automatic differentiation library autograd to take the derivative of log-likelihoods defined in pints. Below is an example of how to do this.
WARNING: We currently find this method of calculating model sensitivities to be quite slow for most time-series models, and so do not recommend it for use.
End of explanation
"""
class AutoGradFitzhughNagumoModel(pints.ForwardModel):
def simulate(self, parameters, times):
y0 = np.array([-1, 1], dtype=float)
def rhs(y, t, p):
V, R = y
a, b, c = p
dV_dt = (V - V**3 / 3 + R) * c
dR_dt = (V - a + b * R) / -c
return np.array([dV_dt, dR_dt])
return odeint(rhs, y0, times, tuple((parameters,)))
def n_parameters(self):
return 3
def n_outputs(self):
return 2
"""
Explanation: We begin be defining a model, identical to the Fitzhugh Nagumo toy model implemented in pints. The corresponding toy model in pints has its evaluateS1() method defined, so we can compare the results using automatic differentiation.
End of explanation
"""
class AutoGradLogLikelihood(pints.ProblemLogLikelihood):
def __init__(self, likelihood):
self.likelihood = likelihood
f = lambda x: self.likelihood(x)
self.likelihood_grad = grad(f)
def __call__(self, x):
return self.likelihood(x)
def evaluateS1(self, x):
values = self.likelihood(x)
gradient = self.likelihood_grad(x)
return values, gradient
def n_parameters(self):
return self.likelihood.n_parameters()
autograd_model = AutoGradFitzhughNagumoModel()
pints_model = pints.toy.FitzhughNagumoModel()
"""
Explanation: Now we wrap an existing pints likelihood class, and use the autograd.grad function to calculate the gradient of the given log-likelihood
End of explanation
"""
# Create some toy data
real_parameters = np.array(pints_model.suggested_parameters(), dtype='float64')
times = pints_model.suggested_times()
pints_values = pints_model.simulate(real_parameters, times)
autograd_values = autograd_model.simulate(real_parameters, times)
plt.figure()
plt.plot(times, autograd_values)
plt.plot(times, pints_values)
plt.show()
"""
Explanation: Now create some toy data and ensure that the new model gives the same output as the toy model in pints
End of explanation
"""
noise = 0.1
values = pints_values + np.random.normal(0, noise, pints_values.shape)
# Create an object with links to the model and time series
autograd_problem = pints.MultiOutputProblem(autograd_model, times, values)
pints_problem = pints.MultiOutputProblem(pints_model, times, values)
# Create a log-likelihood function
autograd_log_likelihood = pints.GaussianKnownSigmaLogLikelihood(autograd_problem, noise)
autograd_likelihood = AutoGradLogLikelihood(autograd_log_likelihood)
pints_log_likelihood = pints.GaussianKnownSigmaLogLikelihood(pints_problem, noise)
"""
Explanation: Add some noise to the values, and then create log-likelihoods using both the new model, and the pints model
End of explanation
"""
autograd_likelihood.evaluateS1(real_parameters)
pints_log_likelihood.evaluateS1(real_parameters)
"""
Explanation: We can calculate the gradients of both likelihood functions at the given parameters to make sure that they are the same
End of explanation
"""
statement = 'autograd_likelihood.evaluateS1(real_parameters)'
setup = 'from __main__ import autograd_likelihood, real_parameters'
time_taken = min(repeat(stmt=statement, setup=setup, number=1, repeat=5))
'Elapsed time: {:.0f} ms'.format(1000. * time_taken)
statement = 'pints_log_likelihood.evaluateS1(real_parameters)'
setup = 'from __main__ import pints_log_likelihood, real_parameters'
time_taken = min(repeat(stmt=statement, setup=setup, number=1, repeat=5))
'Elapsed time: {:.0f} ms'.format(1000. * time_taken)
"""
Explanation: Now we'll time both functions. You can see that the function using autgrad is significantly slower than the in-built evaluateS1 function for the PINTS model, which calculates the sensitivities analytically.
End of explanation
"""
|
phoebe-project/phoebe2-docs | 2.0/examples/sun.ipynb | gpl-3.0 | !pip install -I "phoebe>=2.0,<2.1"
"""
Explanation: Sun (single rotating star)
Setup
Let's first make sure we have the latest version of PHOEBE 2.0 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
End of explanation
"""
%matplotlib inline
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_star(starA='sun')
"""
Explanation: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details.
End of explanation
"""
print b['sun']
"""
Explanation: Setting Parameters
End of explanation
"""
b.set_value('teff', 1.0*u.solTeff)
b.set_value('rpole', 1.0*u.solRad)
b.set_value('mass', 1.0*u.solMass)
b.set_value('period', 24.47*u.d)
"""
Explanation: Let's set all the values of the sun based on the nominal solar values provided in the units package.
End of explanation
"""
b.set_value('incl', 23.5*u.deg)
b.set_value('distance', 1.0*u.AU)
"""
Explanation: And so that we can compare with measured/expected values, we'll observe the sun from the earth - with an inclination of 23.5 degrees and at a distance of 1 AU.
End of explanation
"""
print b.get_quantity('teff')
print b.get_quantity('rpole')
print b.get_quantity('mass')
print b.get_quantity('period')
print b.get_quantity('incl')
print b.get_quantity('distance')
"""
Explanation: Checking on the set values, we can see the values were converted correctly to PHOEBE's internal units.
End of explanation
"""
b.add_dataset('lc', pblum=1*u.solLum)
"""
Explanation: Running Compute
Let's add a light curve so that we can compute the flux at a single time and compare it to the expected value. We'll set the passband luminosity to be the nominal value for the sun.
End of explanation
"""
b.run_compute(protomesh=True, pbmesh=True, irrad_method='none', distortion_method='rotstar')
"""
Explanation: Now we run our model and store the mesh so that we can plot the temperature distributions and test the size of the sun verse known values.
End of explanation
"""
axs, artists = b['protomesh'].plot(facecolor='teffs')
axs, artists = b['pbmesh'].plot(facecolor='teffs')
print "teff: {} ({})".format(b.get_value('teffs', dataset='pbmesh').mean(),
b.get_value('teff', context='component'))
print "rpole: {} ({})".format(b.get_value('rpole', dataset='pbmesh'),
b.get_value('rpole', context='component'))
"""
Explanation: Comparing to Expected Values
End of explanation
"""
print "rmin (pole): {} ({})".format(b.get_value('rs', dataset='pbmesh').min(),
b.get_value('rpole', context='component'))
print "rmax (equator): {} (>{})".format(b.get_value('rs', dataset='pbmesh').max(),
b.get_value('rpole', context='component'))
print "logg: {}".format(b.get_value('loggs', dataset='pbmesh').mean())
print "flux: {}".format(b.get_quantity('fluxes@model')[0])
"""
Explanation: For a rotating star, the minimum radius should occur at the pole and the maximum should occur at the equator.
End of explanation
"""
|
yashdeeph709/Algorithms | PythonBootCamp/Complete-Python-Bootcamp-master/.ipynb_checkpoints/Collections Module-checkpoint.ipynb | apache-2.0 | from collections import Counter
"""
Explanation: Collections Module
The collections module is a built-in module that implements specialized container datatypes providing alternatives to Python’s general purpose built-in containers. We've already gone over the basics: dict, list, set, and tuple.
Now we'll learn about the alternatives that the collections module provides.
Counter
Counter is a dict subclass which helps count hashable objects. Inside of it elements are stored as dictionary keys and the counts of the objects are stored as the value.
Lets see how it can be used:
End of explanation
"""
l = [1,2,2,2,2,3,3,3,1,2,1,12,3,2,32,1,21,1,223,1]
Counter(l)
"""
Explanation: Counter() with lists
End of explanation
"""
Counter('aabsbsbsbhshhbbsbs')
"""
Explanation: Counter with strings
End of explanation
"""
s = 'How many times does each word show up in this sentence word times each each word'
words = s.split()
Counter(words)
# Methods with Counter()
c = Counter(words)
c.most_common(2)
"""
Explanation: Counter with words in a sentence
End of explanation
"""
sum(c.values()) # total of all counts
c.clear() # reset all counts
list(c) # list unique elements
set(c) # convert to a set
dict(c) # convert to a regular dictionary
c.items() # convert to a list of (elem, cnt) pairs
list_of_pairs = c.items() # example list of (elem, cnt) pairs
Counter(dict(list_of_pairs)) # convert from a list of (elem, cnt) pairs
n = 2 # example value so the next pattern runs
c.most_common()[:-n-1:-1] # n least common elements
c += Counter() # remove zero and negative counts
"""
Explanation: Common patterns when using the Counter() object
End of explanation
"""
import collections
from collections import defaultdict
d = {}
d['one']
d = defaultdict(object)
d['one']
for item in d:
print item
"""
Explanation: defaultdict
defaultdict is a dictionary-like object which provides all the methods a regular dictionary provides, but takes a first argument (default_factory) that supplies a default value for missing keys. Using defaultdict is faster than doing the same thing with the dict.setdefault method.
A defaultdict will never raise a KeyError. Any key that does not exist gets the value returned by the default factory.
End of explanation
"""
d = defaultdict(lambda: 0)
d['one']
"""
Explanation: Can also initialize with default values:
End of explanation
"""
print 'Normal dictionary:'
d = {}
d['a'] = 'A'
d['b'] = 'B'
d['c'] = 'C'
d['d'] = 'D'
d['e'] = 'E'
for k, v in d.items():
print k, v
"""
Explanation: OrderedDict
An OrderedDict is a dictionary subclass that remembers the order in which its contents are added.
For example, a normal dictionary:
End of explanation
"""
print 'OrderedDict:'
d = collections.OrderedDict()
d['a'] = 'A'
d['b'] = 'B'
d['c'] = 'C'
d['d'] = 'D'
d['e'] = 'E'
for k, v in d.items():
print k, v
"""
Explanation: An Ordered Dictionary:
End of explanation
"""
print 'Dictionaries are equal? '
d1 = {}
d1['a'] = 'A'
d1['b'] = 'B'
d2 = {}
d2['b'] = 'B'
d2['a'] = 'A'
print d1 == d2
"""
Explanation: Equality with an Ordered Dictionary
A regular dict looks at its contents when testing for equality. An OrderedDict also considers the order the items were added.
A normal Dictionary:
End of explanation
"""
print 'Dictionaries are equal? '
d1 = collections.OrderedDict()
d1['a'] = 'A'
d1['b'] = 'B'
d2 = collections.OrderedDict()
d2['b'] = 'B'
d2['a'] = 'A'
print d1 == d2
"""
Explanation: An Ordered Dictionary:
End of explanation
"""
t = (12,13,14)
t[0]
"""
Explanation: namedtuple
The standard tuple uses numerical indexes to access its members, for example:
End of explanation
"""
from collections import namedtuple
Dog = namedtuple('Dog','age breed name')
sam = Dog(age=2,breed='Lab',name='Sammy')
frank = Dog(age=2,breed='Shepard',name="Frankie")
"""
Explanation: For simple use cases, this is usually enough. On the other hand, remembering which index should be used for each value can lead to errors, especially if the tuple has a lot of fields and is constructed far from where it is used. A namedtuple assigns names, as well as the numerical index, to each member.
Each kind of namedtuple is represented by its own class, created by using the namedtuple() factory function. The arguments are the name of the new class and a string containing the names of the elements.
You can basically think of namedtuples as a very quick way of creating a new object/class type with some attribute fields.
For example:
End of explanation
"""
sam
sam.age
sam.breed
sam[0]
"""
Explanation: We construct the namedtuple by first passing the object type name (Dog) and then passing a string with the variety of fields as a string with spaces between the field names. We can then call on the various attributes:
End of explanation
"""
|
sdpython/ensae_teaching_cs | _doc/notebooks/td1a_home/2020_regex.ipynb | mit | from jyquickhelper import add_notebook_menu
add_notebook_menu()
"""
Explanation: Tech - expressions régulières
Les expressions régulières sont utilisées pour rechercher des motifs dans un texte tel que des mots, des dates, des nombres...
End of explanation
"""
poeme = """
A noir, E blanc, I rouge, U vert, O bleu, voyelles,
Je dirai quelque jour vos naissances latentes.
A, noir corset velu des mouches éclatantes
Qui bombillent autour des puanteurs cruelles,
Golfe d'ombre; E, candeur des vapeurs et des tentes,
Lance des glaciers fiers, rois blancs, frissons d'ombelles;
I, pourpres, sang craché, rire des lèvres belles
Dans la colère ou les ivresses pénitentes;
U, cycles, vibrements divins des mers virides,
Paix des pâtis semés d'animaux, paix des rides
Que l'alchimie imprime aux grands fronts studieux;
O, suprême clairon plein de strideurs étranges,
Silences traversés des Mondes et des Anges:
—O l'Oméga, rayon violet de Ses Yeux!
"""
"""
Explanation: Problem statement
The following text is a poem by Arthur Rimbaud, Les Voyelles. We want to extract all of its words.
End of explanation
"""
def extract_words(text):
# use regular expressions here
pass
extract_words(poeme)
"""
Explanation: Exercise 1: use regular expressions to extract all the words
In Python, you need to use the re module. You should read the paragraph on Regular Expression Syntax. Further reading: Expressions régulières.
End of explanation
"""
import unicodedata
def strip_accents(s):
return ''.join(c for c in unicodedata.normalize('NFD', s)
if unicodedata.category(c) != 'Mn')
strip_accents('têtu')
import re
def extract_words(text):
text_sans_accent = strip_accents(text)
return re.findall('[A-Za-z]+', text_sans_accent)
mots = extract_words(poeme)
mots[:5]
"""
Explanation: Exercise 2: use regular expressions to extract all the words ending with the letter s
Exercise 3: use regular expressions to replace every "de" with 2
The finditer or sub functions could be useful.
Exercise 4: use regular expressions to extract the lines whose rhymes end in elle, elles, aile or ailes
The finditer function could be useful.
Answers
Exercise 1: use regular expressions to extract all the words
Accents are treated as different letters by regular expressions. We can either keep them or replace them. To do so, one can read What is the best way to remove accents (normalize) in a Python unicode string?.
End of explanation
"""
def extract_words_lettre(text, lettre='s'):
text_sans_accent = strip_accents(text)
return re.findall('[A-Za-z]+[' + lettre + ']\\b',
text_sans_accent)
mots = extract_words_lettre(poeme, 'se')
mots[:5]
"""
Explanation: Exercise 2: use regular expressions to extract all the words ending with the letter s
We modify the pattern so that it ends with the letter s. The \b character is used to require that this letter can only occur at the end of a word.
End of explanation
"""
re.sub("de\\b", "2", poeme)
"""
Explanation: Exercise 3: use regular expressions to replace every "de" with 2
End of explanation
"""
re.findall("(((aile)|(elle))s?\\b)", poeme)
"""
Explanation: Exercise 4: use regular expressions to extract the lines whose rhymes end in elle, elles, aile or ailes
A small test before the solution.
End of explanation
"""
for m in re.finditer("(((aile)|(elle))s?\\b)", poeme):
print('%02d-%02d: %s' % (
m.start(), m.end(), m.group(0)))
"""
Explanation: Another one to convince ourselves...
End of explanation
"""
for i, ligne in enumerate(poeme.split('\n')):
for m in re.finditer("(((aile)|(elle))s?\\b)", ligne):
print('% 2d: %02d-%02d/%02d: %s' % (
i + 1, m.start(), m.end(), len(ligne), ligne))
"""
Explanation: Now we put it together. We first split the text into lines, then apply the same treatment to each line.
End of explanation
"""
|
southpaw94/MachineLearning | TextExamples/3547_02_Code.ipynb | gpl-2.0 | %load_ext watermark
%watermark -a 'Sebastian Raschka' -u -d -v -p numpy,pandas,matplotlib
# to install watermark just uncomment the following line:
#%install_ext https://raw.githubusercontent.com/rasbt/watermark/master/watermark.py
"""
Explanation: Sebastian Raschka, 2015
Python Machine Learning Essentials
Chapter 2 - Training Machine Learning Algorithms for Classification
Note that the optional watermark extension is a small IPython notebook plugin that I developed to make the code reproducible. You can just skip the following line(s).
End of explanation
"""
import numpy as np
class Perceptron(object):
"""Perceptron classifier.
Parameters
------------
eta : float
Learning rate (between 0.0 and 1.0)
n_iter : int
Passes over the training dataset.
Attributes
-----------
w_ : 1d-array
Weights after fitting.
errors_ : list
Number of misclassifications in every epoch.
"""
def __init__(self, eta=0.01, n_iter=10):
self.eta = eta
self.n_iter = n_iter
def fit(self, X, y):
"""Fit training data.
Parameters
----------
X : {array-like}, shape = [n_samples, n_features]
Training vectors, where n_samples is the number of samples and
n_features is the number of features.
y : array-like, shape = [n_samples]
Target values.
Returns
-------
self : object
"""
self.w_ = np.zeros(1 + X.shape[1])
self.errors_ = []
for _ in range(self.n_iter):
errors = 0
for xi, target in zip(X, y):
update = self.eta * (target - self.predict(xi))
self.w_[1:] += update * xi
self.w_[0] += update
errors += int(update != 0.0)
self.errors_.append(errors)
return self
def net_input(self, X):
"""Calculate net input"""
return np.dot(X, self.w_[1:]) + self.w_[0]
def predict(self, X):
"""Return class label after unit step"""
return np.where(self.net_input(X) >= 0.0, 1, -1)
"""
Explanation: Sections
Implementing a perceptron learning algorithm in Python
Training a perceptron model on the Iris dataset
Adaptive linear neurons and the convergence of learning
Implementing an adaptive linear neuron in Python
<br>
<br>
Implementing a perceptron learning algorithm in Python
[back to top]
End of explanation
"""
import pandas as pd
df = pd.read_csv('https://archive.ics.uci.edu/ml/'
'machine-learning-databases/iris/iris.data', header=None)
df.tail()
"""
Explanation: <br>
<br>
Training a perceptron model on the Iris dataset
[back to top]
Reading-in the Iris data
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
# select setosa and versicolor
y = df.iloc[0:100, 4].values
y = np.where(y == 'Iris-setosa', -1, 1)
# extract sepal length and petal length
X = df.iloc[0:100, [0, 2]].values
# plot data
plt.scatter(X[:50, 0], X[:50, 1],
color='red', marker='o', label='setosa')
plt.scatter(X[50:100, 0], X[50:100, 1],
color='blue', marker='x', label='versicolor')
plt.xlabel('sepal length [cm]')
plt.ylabel('petal length [cm]')
plt.legend(loc='upper left')
plt.tight_layout()
# plt.savefig('./iris_1.png', dpi=300)
plt.show()
"""
Explanation: <br>
<br>
Plotting the Iris data
End of explanation
"""
ppn = Perceptron(eta=0.1, n_iter=10)
ppn.fit(X, y)
plt.plot(range(1, len(ppn.errors_) + 1), ppn.errors_, marker='o')
plt.xlabel('Epochs')
plt.ylabel('Number of misclassifications')
plt.tight_layout()
# plt.savefig('./perceptron_1.png', dpi=300)
plt.show()
"""
Explanation: <br>
<br>
Training the perceptron model
End of explanation
"""
from matplotlib.colors import ListedColormap
def plot_decision_regions(X, y, classifier, resolution=0.02):
# setup marker generator and color map
markers = ('s', 'x', 'o', '^', 'v')
colors = ('red', 'blue', 'lightgreen', 'gray', 'cyan')
cmap = ListedColormap(colors[:len(np.unique(y))])
# plot the decision surface
x1_min, x1_max = X[:, 0].min() - 1, X[:, 0].max() + 1
x2_min, x2_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx1, xx2 = np.meshgrid(np.arange(x1_min, x1_max, resolution),
np.arange(x2_min, x2_max, resolution))
Z = classifier.predict(np.array([xx1.ravel(), xx2.ravel()]).T)
Z = Z.reshape(xx1.shape)
plt.contourf(xx1, xx2, Z, alpha=0.4, cmap=cmap)
plt.xlim(xx1.min(), xx1.max())
plt.ylim(xx2.min(), xx2.max())
# plot class samples
for idx, cl in enumerate(np.unique(y)):
plt.scatter(x=X[y == cl, 0], y=X[y == cl, 1],
alpha=0.8, c=cmap(idx),
marker=markers[idx], label=cl)
plot_decision_regions(X, y, classifier=ppn)
plt.xlabel('sepal length [cm]')
plt.ylabel('petal length [cm]')
plt.legend(loc='upper left')
plt.tight_layout()
# plt.savefig('./perceptron_2.png', dpi=300)
plt.show()
"""
Explanation: <br>
<br>
A function for plotting decision regions
End of explanation
"""
class AdalineGD(object):
"""ADAptive LInear NEuron classifier.
Parameters
------------
eta : float
Learning rate (between 0.0 and 1.0)
n_iter : int
Passes over the training dataset.
Attributes
-----------
w_ : 1d-array
Weights after fitting.
errors_ : list
Number of misclassifications in every epoch.
"""
def __init__(self, eta=0.01, n_iter=50):
self.eta = eta
self.n_iter = n_iter
def fit(self, X, y):
""" Fit training data.
Parameters
----------
X : {array-like}, shape = [n_samples, n_features]
Training vectors, where n_samples is the number of samples and
n_features is the number of features.
y : array-like, shape = [n_samples]
Target values.
Returns
-------
self : object
"""
self.w_ = np.zeros(1 + X.shape[1])
self.cost_ = []
for i in range(self.n_iter):
output = self.net_input(X)
errors = (y - output)
self.w_[1:] += self.eta * X.T.dot(errors)
self.w_[0] += self.eta * errors.sum()
cost = (errors**2).sum() / 2.0
self.cost_.append(cost)
return self
def net_input(self, X):
"""Calculate net input"""
return np.dot(X, self.w_[1:]) + self.w_[0]
def activation(self, X):
"""Compute linear activation"""
return self.net_input(X)
def predict(self, X):
"""Return class label after unit step"""
return np.where(self.activation(X) >= 0.0, 1, -1)
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(8, 4))
ada1 = AdalineGD(n_iter=10, eta=0.01).fit(X, y)
ax[0].plot(range(1, len(ada1.cost_) + 1), np.log10(ada1.cost_), marker='o')
ax[0].set_xlabel('Epochs')
ax[0].set_ylabel('log(Sum-squared-error)')
ax[0].set_title('Adaline - Learning rate 0.01')
ada2 = AdalineGD(n_iter=10, eta=0.0001).fit(X, y)
ax[1].plot(range(1, len(ada2.cost_) + 1), ada2.cost_, marker='o')
ax[1].set_xlabel('Epochs')
ax[1].set_ylabel('Sum-squared-error')
ax[1].set_title('Adaline - Learning rate 0.0001')
plt.tight_layout()
# plt.savefig('./adaline_1.png', dpi=300)
plt.show()
"""
Explanation: <br>
<br>
Adaptive linear neurons and the convergence of learning
[back to top]
Implementing an adaptive linear neuron in Python
End of explanation
"""
# standardize features
X_std = np.copy(X)
X_std[:,0] = (X[:,0] - X[:,0].mean()) / X[:,0].std()
X_std[:,1] = (X[:,1] - X[:,1].mean()) / X[:,1].std()
ada = AdalineGD(n_iter=15, eta=0.01)
ada.fit(X_std, y)
plot_decision_regions(X_std, y, classifier=ada)
plt.title('Adaline - Gradient Descent')
plt.xlabel('sepal length [standardized]')
plt.ylabel('petal length [standardized]')
plt.legend(loc='upper left')
plt.tight_layout()
# plt.savefig('./adaline_2.png', dpi=300)
plt.show()
plt.plot(range(1, len(ada.cost_) + 1), ada.cost_, marker='o')
plt.xlabel('Epochs')
plt.ylabel('Sum-squared-error')
plt.tight_layout()
# plt.savefig('./adaline_3.png', dpi=300)
plt.show()
"""
Explanation: <br>
<br>
Standardizing features and re-training adaline
End of explanation
"""
from numpy.random import seed
class AdalineSGD(object):
"""ADAptive LInear NEuron classifier.
Parameters
------------
eta : float
Learning rate (between 0.0 and 1.0)
n_iter : int
Passes over the training dataset.
Attributes
-----------
w_ : 1d-array
Weights after fitting.
errors_ : list
Number of misclassifications in every epoch.
shuffle : bool (default: True)
Shuffles training data every epoch if True to prevent cycles.
random_state : int (default: None)
Set random state for shuffling and initializing the weights.
"""
def __init__(self, eta=0.01, n_iter=10, shuffle=True, random_state=None):
self.eta = eta
self.n_iter = n_iter
self.w_initialized = False
self.shuffle = shuffle
if random_state:
seed(random_state)
def fit(self, X, y):
""" Fit training data.
Parameters
----------
X : {array-like}, shape = [n_samples, n_features]
Training vectors, where n_samples is the number of samples and
n_features is the number of features.
y : array-like, shape = [n_samples]
Target values.
Returns
-------
self : object
"""
self._initialize_weights(X.shape[1])
self.cost_ = []
for i in range(self.n_iter):
if self.shuffle:
X, y = self._shuffle(X, y)
cost = []
for xi, target in zip(X, y):
cost.append(self._update_weights(xi, target))
avg_cost = sum(cost)/len(y)
self.cost_.append(avg_cost)
return self
def partial_fit(self, X, y):
"""Fit training data without reinitializing the weights"""
if not self.w_initialized:
self._initialize_weights(X.shape[1])
if y.ravel().shape[0] > 1:
for xi, target in zip(X, y):
self._update_weights(xi, target)
else:
self._update_weights(X, y)
return self
def _shuffle(self, X, y):
"""Shuffle training data"""
r = np.random.permutation(len(y))
return X[r], y[r]
def _initialize_weights(self, m):
"""Initialize weights to zeros"""
self.w_ = np.zeros(1 + m)
self.w_initialized = True
def _update_weights(self, xi, target):
"""Apply Adaline learning rule to update the weights"""
output = self.net_input(xi)
error = (target - output)
self.w_[1:] += self.eta * xi.dot(error)
self.w_[0] += self.eta * error
cost = 0.5 * error**2
return cost
def net_input(self, X):
"""Calculate net input"""
return np.dot(X, self.w_[1:]) + self.w_[0]
def activation(self, X):
"""Compute linear activation"""
return self.net_input(X)
def predict(self, X):
"""Return class label after unit step"""
return np.where(self.activation(X) >= 0.0, 1, -1)
ada = AdalineSGD(n_iter=15, eta=0.01, random_state=1)
ada.fit(X_std, y)
plot_decision_regions(X_std, y, classifier=ada)
plt.title('Adaline - Stochastic Gradient Descent')
plt.xlabel('sepal length [standardized]')
plt.ylabel('petal length [standardized]')
plt.legend(loc='upper left')
plt.tight_layout()
#plt.savefig('./adaline_4.png', dpi=300)
plt.show()
plt.plot(range(1, len(ada.cost_) + 1), ada.cost_, marker='o')
plt.xlabel('Epochs')
plt.ylabel('Average Cost')
plt.tight_layout()
# plt.savefig('./adaline_5.png', dpi=300)
plt.show()
ada.partial_fit(X_std[0, :], y[0])
"""
Explanation: <br>
<br>
Large scale machine learning and stochastic gradient descent
[back to top]
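For reference, the per-sample rule implemented in _update_weights above is
$$\mathbf{w}_{1:} := \mathbf{w}_{1:} + \eta\,\big(y^{(i)} - \phi(z^{(i)})\big)\,\mathbf{x}^{(i)}, \qquad w_0 := w_0 + \eta\,\big(y^{(i)} - \phi(z^{(i)})\big),$$
i.e. the weights are updated incrementally after every training sample instead of once per epoch, and cost_ stores the average per-sample cost $\tfrac{1}{2}\big(y^{(i)} - \phi(z^{(i)})\big)^2$ for each epoch.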
End of explanation
"""
|
jrbourbeau/cr-composition | notebooks/legacy/parameter-tuning/RF-parameter-tuning.ipynb | mit | import sys
sys.path.append('/home/jbourbeau/cr-composition')
print('Added to PYTHONPATH')
from __future__ import division, print_function
import argparse
from collections import defaultdict
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
import seaborn.apionly as sns
import scipy.stats as stats
from sklearn.metrics import accuracy_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import validation_curve, GridSearchCV, cross_val_score, ParameterGrid, KFold
import composition as comp
sns.set_palette('muted')
sns.set_color_codes()
%matplotlib inline
"""
Explanation: Random forest parameter-tuning
Table of contents
Data preprocessing
Validation curves
KS-test tuning
End of explanation
"""
df, cut_dict = comp.load_sim(return_cut_dict=True)
selection_mask = np.array([True] * len(df))
standard_cut_keys = ['lap_reco_success', 'lap_zenith', 'num_hits_1_30', 'IT_signal',
'StationDensity', 'max_qfrac_1_30', 'lap_containment', 'energy_range_lap']
for key in standard_cut_keys:
selection_mask *= cut_dict[key]
df = df[selection_mask]
feature_list, feature_labels = comp.get_training_features()
print('training features = {}'.format(feature_list))
X_train, X_test, y_train, y_test, le = comp.get_train_test_sets(
df, feature_list, train_he=True, test_he=True)
print('number training events = ' + str(y_train.shape[0]))
print('number testing events = ' + str(y_test.shape[0]))
"""
Explanation: Data preprocessing
Load simulation dataframe and apply specified quality cuts
Extract desired features from dataframe
Get separate testing and training datasets
End of explanation
"""
pipeline = comp.get_pipeline('RF')
param_range = np.arange(1, 16)
train_scores, test_scores = validation_curve(
estimator=pipeline,
X=X_train,
y=y_train,
param_name='classifier__max_depth',
param_range=param_range,
cv=10,
verbose=2,
n_jobs=20)
train_mean = np.mean(train_scores, axis=1)
train_std = np.std(train_scores, axis=1)
test_mean = np.mean(test_scores, axis=1)
test_std = np.std(test_scores, axis=1)
plt.plot(param_range, train_mean,
color='b', marker='o',
markersize=5, label='training accuracy')
plt.fill_between(param_range, train_mean + train_std,
train_mean - train_std, alpha=0.15,
color='b')
plt.plot(param_range, test_mean,
color='g', linestyle='None',
marker='s', markersize=5,
label='validation accuracy')
plt.fill_between(param_range,
test_mean + test_std,
test_mean - test_std,
alpha=0.15, color='g')
plt.grid()
# plt.xscale('log')
plt.legend(loc='lower right')
plt.xlabel('Maximum depth')
plt.ylabel('Accuracy')
# plt.ylim([0.8, 1.0])
# plt.savefig('/home/jbourbeau/public_html/figures/composition/parameter-tuning/RF-validation_curve_min_samples_leaf.png', dpi=300)
plt.show()
"""
Explanation: Validation curves
(10-fold CV)
Maximum depth
End of explanation
"""
pipeline = comp.get_pipeline('RF')
param_range = np.arange(1, 400, 25)
train_scores, test_scores = validation_curve(
estimator=pipeline,
X=X_train,
y=y_train,
param_name='classifier__min_samples_leaf',
param_range=param_range,
cv=10,
verbose=2,
n_jobs=20)
train_mean = np.mean(train_scores, axis=1)
train_std = np.std(train_scores, axis=1)
test_mean = np.mean(test_scores, axis=1)
test_std = np.std(test_scores, axis=1)
plt.plot(param_range, train_mean,
color='b', marker='o',
markersize=5, label='training accuracy')
plt.fill_between(param_range, train_mean + train_std,
train_mean - train_std, alpha=0.15,
color='b')
plt.plot(param_range, test_mean,
color='g', linestyle='None',
marker='s', markersize=5,
label='validation accuracy')
plt.fill_between(param_range,
test_mean + test_std,
test_mean - test_std,
alpha=0.15, color='g')
plt.grid()
# plt.xscale('log')
plt.legend()
plt.xlabel('Minimum samples in leaf node')
plt.ylabel('Accuracy')
# plt.ylim([0.8, 1.0])
# plt.savefig('/home/jbourbeau/public_html/figures/composition/parameter-tuning/RF-validation_curve_min_samples_leaf.png', dpi=300)
plt.show()
max_depth_list = [2, 8, 10, 20]
fig, axarr = plt.subplots(2,2)
for depth, ax in zip(max_depth_list, axarr.flatten()):
pipeline = comp.get_pipeline('RF')
pipeline.named_steps['classifier'].set_params(max_depth=depth)
pipeline.fit(X_train, y_train)
scaler = pipeline.named_steps['scaler']
clf = pipeline.named_steps['classifier']
X_test_std = scaler.transform(X_test)
plot_decision_regions(X_test_std, y_test, clf, scatter_fraction=None, ax=ax)
ax.set_xlabel('Scaled energy')
ax.set_ylabel('Scaled charge')
ax.set_title('Max depth = {}'.format(depth))
ax.legend()
plt.tight_layout()
plt.savefig('/home/jbourbeau/public_html/figures/composition/parameter-tuning/RF-decision-regions.png')
pipeline = comp.get_pipeline('RF')
param_range = np.arange(1, 20)
param_grid = {'classifier__max_depth': param_range}
gs = GridSearchCV(estimator=pipeline,
param_grid=param_grid,
scoring='accuracy',
cv=10,
n_jobs=10)
gs = gs.fit(X_train, y_train)
print(gs.best_score_)
print(gs.best_params_)
"""
Explanation: Minimum samples in leaf node
End of explanation
"""
comp_list = ['P', 'He', 'O', 'Fe']
max_depth_list = np.arange(1, 16)
pval_comp = defaultdict(list)
ks_stat = defaultdict(list)
kf = KFold(n_splits=10)
fold_num = 0
for train_index, test_index in kf.split(X_train):
fold_num += 1
print('\r')
print('Fold {}: '.format(fold_num), end='')
X_train_fold, X_test_fold = X_train[train_index], X_train[test_index]
y_train_fold, y_test_fold = y_train[train_index], y_train[test_index]
pval_maxdepth = defaultdict(list)
print('max_depth = ', end='')
for max_depth in max_depth_list:
print('{}...'.format(max_depth), end='')
pipeline = comp.get_pipeline('RF')
pipeline.named_steps['classifier'].set_params(max_depth=max_depth)
pipeline.fit(X_train_fold, y_train_fold)
test_probs = pipeline.predict_proba(X_test_fold)
train_probs = pipeline.predict_proba(X_train_fold)
for class_ in pipeline.classes_:
pval_maxdepth[le.inverse_transform(class_)].append(stats.ks_2samp(test_probs[:, class_], train_probs[:, class_])[1])
for composition in comp_list:
pval_comp[composition].append(pval_maxdepth[composition])
pval_sys_err = {key: np.std(pval_comp[key], axis=0) for key in pval_comp}
pval = {key: np.mean(pval_comp[key], axis=0) for key in pval_comp}
fig, ax = plt.subplots()
for composition in comp_list:
upper_err = np.copy(pval_sys_err[composition])
upper_err = [val if ((pval[composition][i] + val) < 1) else 1-pval[composition][i] for i, val in enumerate(upper_err)]
lower_err = np.copy(pval_sys_err[composition])
lower_err = [val if ((pval[composition][i] - val) > 0) else pval[composition][i] for i, val in enumerate(lower_err)]
ax.errorbar(max_depth_list, pval[composition],
yerr=[lower_err, upper_err],
marker='.', linestyle=':',
label=composition, alpha=0.75)
plt.ylabel('KS-test p-value')
plt.xlabel('Maximum depth')
plt.ylim([-0.1, 1.1])
leg = plt.legend()
# # set the linewidth of each legend object
# for legobj in leg.legendHandles:
# legobj.set_linestyle('-')
# legobj.set_linewidth(3.0)
# # legobj.set_dashes('None')
plt.grid()
plt.show()
"""
Explanation: KS-test tuning
Maximum depth
End of explanation
"""
comp_list = np.unique(df['MC_comp'])
min_samples_list = np.arange(1, 400, 25)
pval = defaultdict(list)
ks_stat = defaultdict(list)
print('min_samples_leaf = ', end='')
for min_samples_leaf in min_samples_list:
print('{}...'.format(min_samples_leaf), end='')
pipeline = comp.get_pipeline('RF')
params = {'max_depth': 4, 'min_samples_leaf': min_samples_leaf}
pipeline.named_steps['classifier'].set_params(**params)
pipeline.fit(X_train, y_train)
test_probs = pipeline.predict_proba(X_test)
train_probs = pipeline.predict_proba(X_train)
for class_ in pipeline.classes_:
pval[le.inverse_transform(class_)].append(stats.ks_2samp(test_probs[:, class_], train_probs[:, class_])[1])
fig, ax = plt.subplots()
for composition in pval:
ax.plot(min_samples_list, pval[composition], linestyle='-.', label=composition)
plt.ylabel('KS-test p-value')
plt.xlabel('Minimum samples leaf node')
plt.legend()
plt.grid()
plt.show()
"""
Explanation: Minimum samples in leaf node
End of explanation
"""
comp_list = np.unique(df['MC_comp'])
min_samples_list = [1, 25, 50, 75]
# min_samples_list = [1, 100, 200, 300]
fig, axarr = plt.subplots(2, 2, sharex=True, sharey=True)
print('min_samples_leaf = ', end='')
for min_samples_leaf, ax in zip(min_samples_list, axarr.flatten()):
print('{}...'.format(min_samples_leaf), end='')
max_depth_list = np.arange(1, 16)
pval = defaultdict(list)
ks_stat = defaultdict(list)
for max_depth in max_depth_list:
pipeline = comp.get_pipeline('RF')
params = {'max_depth': max_depth, 'min_samples_leaf': min_samples_leaf}
pipeline.named_steps['classifier'].set_params(**params)
pipeline.fit(X_train, y_train)
test_probs = pipeline.predict_proba(X_test)
train_probs = pipeline.predict_proba(X_train)
for class_ in pipeline.classes_:
pval[le.inverse_transform(class_)].append(stats.ks_2samp(test_probs[:, class_], train_probs[:, class_])[1])
for composition in pval:
ax.plot(max_depth_list, pval[composition], linestyle='-.', label=composition)
ax.set_ylabel('KS-test p-value')
ax.set_xlabel('Maximum depth')
ax.set_title('min samples = {}'.format(min_samples_leaf))
ax.set_ylim([0, 0.5])
ax.legend()
ax.grid()
plt.tight_layout()
plt.show()
"""
Explanation: Maximum depth for various minimum samples in leaf node
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.16/_downloads/plot_stats_cluster_time_frequency_repeated_measures_anova.ipynb | bsd-3-clause | # Authors: Denis Engemann <denis.engemann@gmail.com>
# Eric Larson <larson.eric.d@gmail.com>
# Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
#
# License: BSD (3-clause)
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne.time_frequency import tfr_morlet
from mne.stats import f_threshold_mway_rm, f_mway_rm, fdr_correction
from mne.datasets import sample
print(__doc__)
"""
Explanation: Mass-univariate twoway repeated measures ANOVA on single trial power
This script shows how to conduct a mass-univariate repeated measures
ANOVA. As the model to be fitted assumes two fully crossed factors,
we will study the interplay between perceptual modality
(auditory VS visual) and the location of stimulus presentation
(left VS right). Here we use single trials as replications
(subjects) while iterating over time slices plus frequency bands
to fit our mass-univariate model. For the sake of simplicity we
will confine this analysis to one single channel of which we know
that it exposes a strong induced response. We will then visualize
each effect by creating a corresponding mass-univariate effect
image. We conclude with accounting for multiple comparisons by
performing a permutation clustering test using the ANOVA as
clustering function. The final results will be compared to
multiple comparisons using False Discovery Rate correction.
End of explanation
"""
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_raw-eve.fif'
tmin, tmax = -0.2, 0.5
# Setup for reading the raw data
raw = mne.io.read_raw_fif(raw_fname)
events = mne.read_events(event_fname)
include = []
raw.info['bads'] += ['MEG 2443'] # bads
# picks MEG gradiometers
picks = mne.pick_types(raw.info, meg='grad', eeg=False, eog=True,
stim=False, include=include, exclude='bads')
ch_name = 'MEG 1332'
# Load conditions
reject = dict(grad=4000e-13, eog=150e-6)
event_id = dict(aud_l=1, aud_r=2, vis_l=3, vis_r=4)
epochs = mne.Epochs(raw, events, event_id, tmin, tmax,
picks=picks, baseline=(None, 0), preload=True,
reject=reject)
epochs.pick_channels([ch_name]) # restrict example to one channel
"""
Explanation: Set parameters
End of explanation
"""
epochs.equalize_event_counts(event_id)
# Factor to down-sample the temporal dimension of the TFR computed by
# tfr_morlet.
decim = 2
freqs = np.arange(7, 30, 3) # define frequencies of interest
n_cycles = freqs / freqs[0]
zero_mean = False # don't correct morlet wavelet to be of mean zero
# To have a true wavelet zero_mean should be True but here for illustration
# purposes it helps to spot the evoked response.
"""
Explanation: We have to make sure all conditions have the same counts, as the ANOVA
expects a fully balanced data matrix and does not forgive imbalances that
generously (risk of type-I error).
End of explanation
"""
epochs_power = list()
for condition in [epochs[k] for k in event_id]:
this_tfr = tfr_morlet(condition, freqs, n_cycles=n_cycles,
decim=decim, average=False, zero_mean=zero_mean,
return_itc=False)
this_tfr.apply_baseline(mode='ratio', baseline=(None, 0))
this_power = this_tfr.data[:, 0, :, :] # we only have one channel.
epochs_power.append(this_power)
"""
Explanation: Create TFR representations for all conditions
End of explanation
"""
n_conditions = len(epochs.event_id)
n_replications = epochs.events.shape[0] // n_conditions
factor_levels = [2, 2] # number of levels in each factor
effects = 'A*B' # this is the default signature for computing all effects
# Other possible options are 'A' or 'B' for the corresponding main effects
# or 'A:B' for the interaction effect only (this notation is borrowed from the
# R formula language)
n_freqs = len(freqs)
times = 1e3 * epochs.times[::decim]
n_times = len(times)
"""
Explanation: Setup repeated measures ANOVA
We will tell the ANOVA how to interpret the data matrix in terms of factors.
This is done via the factor levels argument, which is a list of the number of
factor levels for each factor.
End of explanation
"""
data = np.swapaxes(np.asarray(epochs_power), 1, 0)
# reshape last two dimensions in one mass-univariate observation-vector
data = data.reshape(n_replications, n_conditions, n_freqs * n_times)
# so we have replications * conditions * observations:
print(data.shape)
"""
Explanation: Now we'll assemble the data matrix and swap axes so the trial replications
are the first dimension and the conditions are the second dimension.
End of explanation
"""
fvals, pvals = f_mway_rm(data, factor_levels, effects=effects)
effect_labels = ['modality', 'location', 'modality by location']
# let's visualize our effects by computing f-images
for effect, sig, effect_label in zip(fvals, pvals, effect_labels):
plt.figure()
# show naive F-values in gray
plt.imshow(effect.reshape(8, 211), cmap=plt.cm.gray, extent=[times[0],
times[-1], freqs[0], freqs[-1]], aspect='auto',
origin='lower')
# create mask for significant Time-frequency locations
effect = np.ma.masked_array(effect, [sig > .05])
plt.imshow(effect.reshape(8, 211), cmap='RdBu_r', extent=[times[0],
times[-1], freqs[0], freqs[-1]], aspect='auto',
origin='lower')
plt.colorbar()
plt.xlabel('Time (ms)')
plt.ylabel('Frequency (Hz)')
plt.title(r"Time-locked response for '%s' (%s)" % (effect_label, ch_name))
plt.show()
"""
Explanation: While the iteration scheme used above for assembling the data matrix
makes sure the first two dimensions are organized as expected (with A =
modality and B = location):
.. table:: Sample data layout
===== ==== ==== ==== ====
trial A1B1 A1B2 A2B1 A2B2
===== ==== ==== ==== ====
1 1.34 2.53 0.97 1.74
... ... ... ... ...
56 2.45 7.90 3.09 4.76
===== ==== ==== ==== ====
Now we're ready to run our repeated measures ANOVA.
Note. As we treat trials as subjects, the test only accounts for
time locked responses despite the 'induced' approach.
For an analysis of induced power at the group level, averaged TFRs
are required.
End of explanation
"""
effects = 'A:B'
"""
Explanation: Account for multiple comparisons using FDR versus permutation clustering test
First we need to slightly modify the ANOVA function to be suitable for
the clustering procedure. We also want to set some defaults.
Let's first override effects to confine the analysis to the interaction
End of explanation
"""
def stat_fun(*args):
return f_mway_rm(np.swapaxes(args, 1, 0), factor_levels=factor_levels,
effects=effects, return_pvals=False)[0]
# The ANOVA returns a tuple f-values and p-values, we will pick the former.
pthresh = 0.001 # set threshold rather high to save some time
f_thresh = f_threshold_mway_rm(n_replications, factor_levels, effects,
pthresh)
tail = 1 # f-test, so tail > 0
n_permutations = 256 # Save some time (the test won't be too sensitive ...)
T_obs, clusters, cluster_p_values, h0 = mne.stats.permutation_cluster_test(
epochs_power, stat_fun=stat_fun, threshold=f_thresh, tail=tail, n_jobs=1,
n_permutations=n_permutations, buffer_size=None)
"""
Explanation: A stat_fun must deal with a variable number of input arguments.
Inside the clustering function each condition will be passed as a flattened
array, as required by the clustering procedure. The ANOVA, however, expects an
input array of dimensions: subjects X conditions X observations (optional).
The following function catches the list input and swaps the first and
the second dimension and finally calls the ANOVA function.
End of explanation
"""
good_clusters = np.where(cluster_p_values < .05)[0]
T_obs_plot = np.ma.masked_array(T_obs,
np.invert(clusters[np.squeeze(good_clusters)]))
plt.figure()
for f_image, cmap in zip([T_obs, T_obs_plot], [plt.cm.gray, 'RdBu_r']):
plt.imshow(f_image, cmap=cmap, extent=[times[0], times[-1],
freqs[0], freqs[-1]], aspect='auto',
origin='lower')
plt.xlabel('Time (ms)')
plt.ylabel('Frequency (Hz)')
plt.title("Time-locked response for 'modality by location' (%s)\n"
" cluster-level corrected (p <= 0.05)" % ch_name)
plt.show()
"""
Explanation: Create new stats image with only significant clusters:
End of explanation
"""
mask, _ = fdr_correction(pvals[2])
T_obs_plot2 = np.ma.masked_array(T_obs, np.invert(mask))
plt.figure()
for f_image, cmap in zip([T_obs, T_obs_plot2], [plt.cm.gray, 'RdBu_r']):
plt.imshow(f_image, cmap=cmap, extent=[times[0], times[-1],
freqs[0], freqs[-1]], aspect='auto',
origin='lower')
plt.xlabel('Time (ms)')
plt.ylabel('Frequency (Hz)')
plt.title("Time-locked response for 'modality by location' (%s)\n"
" FDR corrected (p <= 0.05)" % ch_name)
plt.show()
"""
Explanation: Now using FDR:
End of explanation
"""
|
statsmodels/statsmodels.github.io | v0.13.2/examples/notebooks/generated/deterministics.ipynb | bsd-3-clause | import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
plt.rc("figure", figsize=(16, 9))
plt.rc("font", size=16)
"""
Explanation: Deterministic Terms in Time Series Models
End of explanation
"""
from statsmodels.tsa.deterministic import DeterministicProcess
index = pd.RangeIndex(0, 100)
det_proc = DeterministicProcess(index, constant=True, order=1, seasonal=True, period=5)
det_proc.in_sample()
"""
Explanation: Basic Use
Basic configurations can be directly constructed through DeterministicProcess. These can include a constant, a time trend of any order, and either a seasonal or a Fourier component.
The process requires an index, which is the index of the full-sample (or in-sample).
First, we initialize a deterministic process with a constant, a linear time trend, and a 5-period seasonal term. The in_sample method returns the full set of values that match the index.
End of explanation
"""
det_proc.out_of_sample(15)
"""
Explanation: The out_of_sample method returns the values for the next steps after the end of the in-sample.
End of explanation
"""
det_proc.range(190, 210)
"""
Explanation: range(start, stop) can also be used to produce the deterministic terms over any range including in- and out-of-sample.
Notes
When the index is a pandas DatetimeIndex or a PeriodIndex, then start and stop can be date-like (strings, e.g., "2020-06-01", or Timestamp) or integers.
stop is always included in the range. While this is not very Pythonic, it is needed since both statsmodels and Pandas include stop when working with date-like slices.
End of explanation
"""
index = pd.period_range("2020-03-01", freq="M", periods=60)
det_proc = DeterministicProcess(index, constant=True, fourier=2)
det_proc.in_sample().head(12)
det_proc.out_of_sample(12)
"""
Explanation: Using a Date-like Index
Next, we show the same steps using a PeriodIndex.
End of explanation
"""
det_proc.range("2025-01", "2026-01")
"""
Explanation: range accepts date-like arguments, which are usually given as strings.
End of explanation
"""
det_proc.range(58, 70)
"""
Explanation: This is equivalent to using the integer values 58 and 70.
End of explanation
"""
from statsmodels.tsa.deterministic import Fourier, Seasonality, TimeTrend
index = pd.period_range("2020-03-01", freq="D", periods=2 * 365)
tt = TimeTrend(constant=True)
four = Fourier(period=365.25, order=2)
seas = Seasonality(period=7)
det_proc = DeterministicProcess(index, additional_terms=[tt, seas, four])
det_proc.in_sample().head(28)
"""
Explanation: Advanced Construction
Deterministic processes with features not supported directly through the constructor can be created using additional_terms, which accepts a list of DeterministicTerm. Here we create a deterministic process with two seasonal components: a day-of-week seasonality with a 7-day period and an annual seasonality captured through a Fourier component with a period of 365.25 days.
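For reference, a Fourier term with a given period and order $k_{\max}$ contributes (up to column naming) the pairs
$$\sin\!\Big(\frac{2\pi k t}{\text{period}}\Big), \quad \cos\!\Big(\frac{2\pi k t}{\text{period}}\Big), \qquad k = 1, \ldots, k_{\max},$$
so the order=2, period=365.25 component used here adds four slowly varying annual columns alongside the day-of-week dummies.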
End of explanation
"""
from statsmodels.tsa.deterministic import DeterministicTerm
class BrokenTimeTrend(DeterministicTerm):
def __init__(self, break_period: int):
self._break_period = break_period
def __str__(self):
return "Broken Time Trend"
def _eq_attr(self):
return (self._break_period,)
def in_sample(self, index: pd.Index):
nobs = index.shape[0]
terms = np.zeros((nobs, 2))
terms[self._break_period :, 0] = 1
terms[self._break_period :, 1] = np.arange(self._break_period + 1, nobs + 1)
return pd.DataFrame(terms, columns=["const_break", "trend_break"], index=index)
def out_of_sample(
self, steps: int, index: pd.Index, forecast_index: pd.Index = None
):
# Always call extend index first
fcast_index = self._extend_index(index, steps, forecast_index)
nobs = index.shape[0]
terms = np.zeros((steps, 2))
# Assume break period is in-sample
terms[:, 0] = 1
terms[:, 1] = np.arange(nobs + 1, nobs + steps + 1)
return pd.DataFrame(
terms, columns=["const_break", "trend_break"], index=fcast_index
)
btt = BrokenTimeTrend(60)
tt = TimeTrend(constant=True, order=1)
index = pd.RangeIndex(100)
det_proc = DeterministicProcess(index, additional_terms=[tt, btt])
det_proc.range(55, 65)
"""
Explanation: Custom Deterministic Terms
The DeterministicTerm Abstract Base Class is designed to be subclassed to help users write custom deterministic terms. We next show two examples. The first is a broken time trend that allows a break after a fixed number of periods. The second is a "trick" deterministic term that allows exogenous data, which is not really a deterministic process, to be treated as if it were deterministic. This lets us simplify gathering the terms needed for forecasting.
These are intended to demonstrate the construction of custom terms. They can definitely be improved in terms of input validation.
End of explanation
"""
class ExogenousProcess(DeterministicTerm):
def __init__(self, data):
self._data = data
def __str__(self):
return "Custom Exog Process"
def _eq_attr(self):
return (id(self._data),)
def in_sample(self, index: pd.Index):
return self._data.loc[index]
def out_of_sample(
self, steps: int, index: pd.Index, forecast_index: pd.Index = None
):
forecast_index = self._extend_index(index, steps, forecast_index)
return self._data.loc[forecast_index]
import numpy as np
gen = np.random.default_rng(98765432101234567890)
exog = pd.DataFrame(gen.integers(100, size=(300, 2)), columns=["exog1", "exog2"])
exog.head()
ep = ExogenousProcess(exog)
tt = TimeTrend(constant=True, order=1)
# The in-sample index
idx = exog.index[:200]
det_proc = DeterministicProcess(idx, additional_terms=[tt, ep])
det_proc.in_sample().head()
det_proc.out_of_sample(10)
"""
Explanation: Next, we write a simple "wrapper" for some actual exogenous data that simplifies constructing out-of-sample exogenous arrays for forecasting.
End of explanation
"""
gen = np.random.default_rng(98765432101234567890)
idx = pd.RangeIndex(200)
det_proc = DeterministicProcess(idx, constant=True, period=52, fourier=2)
det_terms = det_proc.in_sample().to_numpy()
params = np.array([1.0, 3, -1, 4, -2])
exog = det_terms @ params
y = np.empty(200)
y[0] = det_terms[0] @ params + gen.standard_normal()
for i in range(1, 200):
y[i] = 0.9 * y[i - 1] + det_terms[i] @ params + gen.standard_normal()
y = pd.Series(y, index=idx)
ax = y.plot()
"""
Explanation: Model Support
The only model that directly supports DeterministicProcess is AutoReg. A custom term can be set using the deterministic keyword argument.
Note: Using a custom term requires that trend="n" and seasonal=False so that all deterministic components must come from the custom deterministic term.
Simulate Some Data
Here we simulate some data that has a weekly seasonality captured by a Fourier series.
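Concretely, with $x_t$ the deterministic terms and $\beta$ the params vector defined in the cell above, the simulated series follows an AR(1) with deterministic terms,
$$y_t = 0.9\, y_{t-1} + x_t \beta + \epsilon_t, \qquad \epsilon_t \sim N(0, 1).$$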
End of explanation
"""
from statsmodels.tsa.api import AutoReg
mod = AutoReg(y, 1, trend="n", deterministic=det_proc)
res = mod.fit()
print(res.summary())
"""
Explanation: The model is then fit using the deterministic keyword argument. seasonal defaults to False but trend defaults to "c" so this needs to be changed.
End of explanation
"""
fig = res.plot_predict(200, 200 + 2 * 52, True)
auto_reg_forecast = res.predict(200, 211)
auto_reg_forecast
"""
Explanation: We can use the plot_predict to show the predicted values and their prediction interval. The out-of-sample deterministic values are automatically produced by the deterministic process passed to AutoReg.
End of explanation
"""
from statsmodels.tsa.api import SARIMAX
det_proc = DeterministicProcess(idx, period=52, fourier=2)
det_terms = det_proc.in_sample()
mod = SARIMAX(y, order=(1, 0, 0), trend="c", exog=det_terms)
res = mod.fit(disp=False)
print(res.summary())
"""
Explanation: Using with other models
Other models do not support DeterministicProcess directly. We can instead manually pass any deterministic terms as exog to models that support exogenous values.
Note that SARIMAX with exogenous variables is OLS with SARIMA errors so that the model is
$$
\begin{align}
\nu_t & = y_t - x_t \beta \\
(1-\phi(L))\nu_t & = (1+\theta(L))\epsilon_t.
\end{align}
$$
The parameters on deterministic terms are not directly comparable to AutoReg which evolves according to the equation
$$
(1-\phi(L)) y_t = x_t \beta + \epsilon_t.
$$
When $x_t$ contains only deterministic terms, these two representations are equivalent (assuming $\theta(L)=0$ so that there is no MA).
End of explanation
"""
sarimax_forecast = res.forecast(12, exog=det_proc.out_of_sample(12))
df = pd.concat([auto_reg_forecast, sarimax_forecast], axis=1)
df.columns = columns = ["AutoReg", "SARIMAX"]
df
"""
Explanation: The forecasts are similar but differ since the parameters of the SARIMAX are estimated using MLE while AutoReg uses OLS.
End of explanation
"""
|
gojomo/gensim | docs/notebooks/soft_cosine_tutorial.ipynb | lgpl-2.1 | # Initialize logging.
import logging
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
"""
Explanation: Finding similar documents with Word2Vec and Soft Cosine Measure
Soft Cosine Measure (SCM) [1, 3] is a promising new tool in machine learning that allows us to submit a query and return the most relevant documents. In part 1, we will show how you can compute SCM between two documents using the inner_product method. In part 2, we will use SoftCosineSimilarity to retrieve documents most similar to a query and compare the performance against other similarity measures.
First, however, we go through the basics of what Soft Cosine Measure is.
Soft Cosine Measure basics
Soft Cosine Measure (SCM) is a method that allows us to assess the similarity between two documents in a meaningful way, even when they have no words in common. It uses a measure of similarity between words, which can be derived [2] using word2vec [4] vector embeddings of words. It has been shown to outperform many of the state-of-the-art methods in the semantic text similarity task in the context of community question answering [2].
SCM is illustrated below for two very similar sentences. The sentences have no words in common, but by modeling synonymy, SCM is able to accurately measure the similarity between the two sentences. The method also uses the bag-of-words vector representation of the documents (simply put, the words' frequencies in the documents). The intuition behind the method is that we compute standard cosine similarity assuming that the document vectors are expressed in a non-orthogonal basis, where the angle between two basis vectors is derived from the angle between the word2vec embeddings of the corresponding words.
This method was perhaps first introduced in the article “Soft Measure and Soft Cosine Measure: Measure of Features in Vector Space Model” by Grigori Sidorov, Alexander Gelbukh, Helena Gomez-Adorno, and David Pinto (link to PDF).
In this tutorial, we will learn how to use Gensim's SCM functionality, which consists of the inner_product method for one-off computation, and the SoftCosineSimilarity class for corpus-based similarity queries.
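For reference, given bag-of-words vectors $a$ and $b$ and a term-similarity matrix $S = (s_{ij})$, the normalized soft cosine used below corresponds to
$$\operatorname{soft\_cos}(a, b) = \frac{\sum_{i,j} s_{ij}\, a_i b_j}{\sqrt{\sum_{i,j} s_{ij}\, a_i a_j}\; \sqrt{\sum_{i,j} s_{ij}\, b_i b_j}},$$
which reduces to the ordinary cosine similarity when $S$ is the identity matrix.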
Note:
If you use this software, please consider citing [1], [2], and [3].
Running this notebook
You can download this Jupyter notebook, and run it on your own computer, provided you have installed the gensim, jupyter, sklearn, pyemd, and wmd Python packages.
The notebook was run on an Ubuntu machine with an Intel core i7-6700HQ CPU 3.10GHz (4 cores) and 16 GB memory. Assuming all resources required by the notebook have already been downloaded, running the entire notebook on this machine takes about 30 minutes.
End of explanation
"""
sentence_obama = 'Obama speaks to the media in Illinois'.lower().split()
sentence_president = 'The president greets the press in Chicago'.lower().split()
sentence_orange = 'Having a tough time finding an orange juice press machine?'.lower().split()
"""
Explanation: Part 1: Computing the Soft Cosine Measure
To use SCM, we need some word embeddings first of all. You could train a word2vec (see tutorial here) model on some corpus, but we will use pre-trained word2vec embeddings.
Let's create some sentences to compare.
End of explanation
"""
!pip install nltk
# Import and download stopwords from NLTK.
from nltk.corpus import stopwords
from nltk import download
download('stopwords') # Download stopwords list.
# Remove stopwords.
stop_words = stopwords.words('english')
sentence_obama = [w for w in sentence_obama if w not in stop_words]
sentence_president = [w for w in sentence_president if w not in stop_words]
sentence_orange = [w for w in sentence_orange if w not in stop_words]
# Prepare a dictionary and a corpus.
from gensim import corpora
documents = [sentence_obama, sentence_president, sentence_orange]
dictionary = corpora.Dictionary(documents)
# Convert the sentences into bag-of-words vectors.
sentence_obama = dictionary.doc2bow(sentence_obama)
sentence_president = dictionary.doc2bow(sentence_president)
sentence_orange = dictionary.doc2bow(sentence_orange)
"""
Explanation: The first two sentences have very similar content, and as such the SCM should be large. Before we compute the SCM, we want to remove stopwords ("the", "to", etc.), as these do not contribute a lot to the information in the sentences.
End of explanation
"""
%%time
import gensim.downloader as api
from gensim.models import WordEmbeddingSimilarityIndex
from gensim.similarities import SparseTermSimilarityMatrix
w2v_model = api.load("glove-wiki-gigaword-50")
similarity_index = WordEmbeddingSimilarityIndex(w2v_model)
similarity_matrix = SparseTermSimilarityMatrix(similarity_index, dictionary)
"""
Explanation: Now, as we mentioned earlier, we will be using some downloaded pre-trained embeddings. Note that the embeddings we have chosen here require a lot of memory. We will use the embeddings to construct a term similarity matrix that will be used by the inner_product method.
End of explanation
"""
similarity = similarity_matrix.inner_product(sentence_obama, sentence_president, normalized=True)
print('similarity = %.4f' % similarity)
"""
Explanation: Let's compute SCM using the inner_product method.
End of explanation
"""
similarity = similarity_matrix.inner_product(sentence_obama, sentence_orange, normalized=True)
print('similarity = %.4f' % similarity)
"""
Explanation: Let's try the same thing with two completely unrelated sentences. Notice that the similarity is smaller.
End of explanation
"""
%%time
from itertools import chain
import json
from re import sub
from os.path import isfile
import gensim.downloader as api
from gensim.utils import simple_preprocess
from nltk.corpus import stopwords
from nltk import download
download("stopwords") # Download stopwords list.
stopwords = set(stopwords.words("english"))
def preprocess(doc):
doc = sub(r'<img[^<>]+(>|$)', " image_token ", doc)
doc = sub(r'<[^<>]+(>|$)', " ", doc)
doc = sub(r'\[img_assist[^]]*?\]', " ", doc)
doc = sub(r'http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+', " url_token ", doc)
return [token for token in simple_preprocess(doc, min_len=0, max_len=float("inf")) if token not in stopwords]
corpus = list(chain(*[
chain(
[preprocess(thread["RelQuestion"]["RelQSubject"]), preprocess(thread["RelQuestion"]["RelQBody"])],
[preprocess(relcomment["RelCText"]) for relcomment in thread["RelComments"]])
for thread in api.load("semeval-2016-2017-task3-subtaskA-unannotated")]))
print("Number of documents: %d" % len(corpus))
"""
Explanation: Part 2: Similarity queries using SoftCosineSimilarity
You can use SCM to get the most similar documents to a query, using the SoftCosineSimilarity class. Its interface is similar to what is described in the Similarity Queries Gensim tutorial.
Qatar Living unannotated dataset
Contestants solving the community question answering task in the SemEval 2016 and 2017 competitions had an unannotated dataset of 189,941 questions and 1,894,456 comments from the Qatar Living discussion forums. As our first step, we will use the same dataset to build a corpus.
End of explanation
"""
%%time
from multiprocessing import cpu_count
from gensim.corpora import Dictionary
from gensim.models import TfidfModel
from gensim.models import Word2Vec
from gensim.models import WordEmbeddingSimilarityIndex
from gensim.similarities import SparseTermSimilarityMatrix
dictionary = Dictionary(corpus)
tfidf = TfidfModel(dictionary=dictionary)
w2v_model = Word2Vec(corpus, workers=cpu_count(), min_count=5, size=300, seed=12345)
similarity_index = WordEmbeddingSimilarityIndex(w2v_model.wv)
similarity_matrix = SparseTermSimilarityMatrix(similarity_index, dictionary, tfidf, nonzero_limit=100)
"""
Explanation: Using the corpus we have just build, we will now construct a dictionary, a TF-IDF model, a word2vec model, and a term similarity matrix.
End of explanation
"""
datasets = api.load("semeval-2016-2017-task3-subtaskBC")
"""
Explanation: Evaluation
Next, we will load the validation and test datasets that were used by the SemEval 2016 and 2017 contestants. The datasets contain 208 original questions posted by the forum members. For each question, there is a list of 10 threads with a human annotation denoting whether or not the thread is relevant to the original question. Our task will be to order the threads so that relevant threads rank above irrelevant threads.
End of explanation
"""
!pip install wmd
!pip install sklearn
!pip install pyemd
from math import isnan
from time import time
from gensim.similarities import MatrixSimilarity, WmdSimilarity, SoftCosineSimilarity
import numpy as np
from sklearn.model_selection import KFold
from wmd import WMD
def produce_test_data(dataset):
for orgquestion in datasets[dataset]:
query = preprocess(orgquestion["OrgQSubject"]) + preprocess(orgquestion["OrgQBody"])
documents = [
preprocess(thread["RelQuestion"]["RelQSubject"]) + preprocess(thread["RelQuestion"]["RelQBody"])
for thread in orgquestion["Threads"]]
relevance = [
thread["RelQuestion"]["RELQ_RELEVANCE2ORGQ"] in ("PerfectMatch", "Relevant")
for thread in orgquestion["Threads"]]
yield query, documents, relevance
def cossim(query, documents):
# Compute cosine similarity between the query and the documents.
query = tfidf[dictionary.doc2bow(query)]
index = MatrixSimilarity(
tfidf[[dictionary.doc2bow(document) for document in documents]],
num_features=len(dictionary))
similarities = index[query]
return similarities
def softcossim(query, documents):
# Compute Soft Cosine Measure between the query and the documents.
query = tfidf[dictionary.doc2bow(query)]
index = SoftCosineSimilarity(
tfidf[[dictionary.doc2bow(document) for document in documents]],
similarity_matrix)
similarities = index[query]
return similarities
def wmd_gensim(query, documents):
# Compute Word Mover's Distance as implemented in PyEMD by William Mayner
# between the query and the documents.
index = WmdSimilarity(documents, w2v_model)
similarities = index[query]
return similarities
def wmd_relax(query, documents):
# Compute Word Mover's Distance as implemented in WMD by Source{d}
# between the query and the documents.
words = [word for word in set(chain(query, *documents)) if word in w2v_model.wv]
indices, words = zip(*sorted((
(index, word) for (index, _), word in zip(dictionary.doc2bow(words), words))))
query = dict(tfidf[dictionary.doc2bow(query)])
query = [
(new_index, query[dict_index])
for new_index, dict_index in enumerate(indices)
if dict_index in query]
documents = [dict(tfidf[dictionary.doc2bow(document)]) for document in documents]
documents = [[
(new_index, document[dict_index])
for new_index, dict_index in enumerate(indices)
if dict_index in document] for document in documents]
embeddings = np.array([w2v_model.wv[word] for word in words], dtype=np.float32)
nbow = dict(((index, list(chain([None], zip(*document)))) for index, document in enumerate(documents)))
nbow["query"] = tuple([None] + list(zip(*query)))
distances = WMD(embeddings, nbow, vocabulary_min=1).nearest_neighbors("query")
similarities = [-distance for _, distance in sorted(distances)]
return similarities
strategies = {
"cossim" : cossim,
"softcossim": softcossim,
"wmd-gensim": wmd_gensim,
"wmd-relax": wmd_relax}
def evaluate(split, strategy):
# Perform a single round of evaluation.
results = []
start_time = time()
for query, documents, relevance in split:
similarities = strategies[strategy](query, documents)
assert len(similarities) == len(documents)
precision = [
(num_correct + 1) / (num_total + 1) for num_correct, num_total in enumerate(
num_total for num_total, (_, relevant) in enumerate(
sorted(zip(similarities, relevance), reverse=True)) if relevant)]
average_precision = np.mean(precision) if precision else 0.0
results.append(average_precision)
return (np.mean(results) * 100, time() - start_time)
def crossvalidate(args):
# Perform a cross-validation.
dataset, strategy = args
test_data = np.array(list(produce_test_data(dataset)))
kf = KFold(n_splits=10)
samples = []
for _, test_index in kf.split(test_data):
samples.append(evaluate(test_data[test_index], strategy))
return (np.mean(samples, axis=0), np.std(samples, axis=0))
%%time
from multiprocessing import Pool
args_list = [
(dataset, technique)
for dataset in ("2016-test", "2017-test")
for technique in ("softcossim", "wmd-gensim", "wmd-relax", "cossim")]
with Pool() as pool:
results = pool.map(crossvalidate, args_list)
"""
Explanation: Finally, we will perform an evaluation to compare three unsupervised similarity measures – the Soft Cosine Measure, two different implementations of the Word Mover's Distance, and standard cosine similarity. We will use the Mean Average Precision (MAP) as an evaluation measure and 10-fold cross-validation to get an estimate of the variance of MAP for each similarity measure.
End of explanation
"""
from IPython.display import display, Markdown
output = []
baselines = [
(("2016-test", "**Winner (UH-PRHLT-primary)**"), ((76.70, 0), (0, 0))),
(("2016-test", "**Baseline 1 (IR)**"), ((74.75, 0), (0, 0))),
(("2016-test", "**Baseline 2 (random)**"), ((46.98, 0), (0, 0))),
(("2017-test", "**Winner (SimBow-primary)**"), ((47.22, 0), (0, 0))),
(("2017-test", "**Baseline 1 (IR)**"), ((41.85, 0), (0, 0))),
(("2017-test", "**Baseline 2 (random)**"), ((29.81, 0), (0, 0)))]
table_header = ["Dataset | Strategy | MAP score | Elapsed time (sec)", ":---|:---|:---|---:"]
for row, ((dataset, technique), ((mean_map_score, mean_duration), (std_map_score, std_duration))) \
in enumerate(sorted(chain(zip(args_list, results), baselines), key=lambda x: (x[0][0], -x[1][0][0]))):
if row % (len(strategies) + 3) == 0:
output.extend(chain(["\n"], table_header))
map_score = "%.02f ±%.02f" % (mean_map_score, std_map_score)
duration = "%.02f ±%.02f" % (mean_duration, std_duration) if mean_duration else ""
output.append("%s|%s|%s|%s" % (dataset, technique, map_score, duration))
display(Markdown('\n'.join(output)))
"""
Explanation: The table below shows the pointwise estimates of means and standard deviations for MAP scores and elapsed times. Baselines and winners for each year are displayed in bold. We can see that the Soft Cosine Measure gives a strong performance on both the 2016 and the 2017 dataset.
End of explanation
"""
|
usantamaria/iwi131 | ipynb/15-FuncionesAvanzadas/FuncionesAvanzadas.ipynb | cc0-1.0 | from math import exp, cos, pi, sin
from random import randrange, choice
from turtle import * # Avoid this
print exp(5.5)
print cos(pi / 2)
print randrange(10)
print choice(['Lunes', 'Martes', 'Viernes'])
"""
Explanation: <header class="w3-container w3-teal">
<img src="images/utfsm.png" alt="" align="left"/>
<img src="images/inf.png" alt="" align="right"/>
</header>
<br/><br/><br/><br/><br/>
IWI131
Computer Programming
Sebastián Flores
http://progra.usm.cl/
https://www.github.com/usantamaria/iwi131
Feedback from the surveys
The good
GitHub with up-to-date content
IPython notebook slides are clear
Participation in class
Exercises solved by students, not by the instructor
Mini-assignments
Wireless keyboard
Feedback from the surveys
The bad
Exam on paper: completely agree, but hard to change.
Very early in the morning: completely agree, but hard to change.
Limited room to use advanced material: completely agree, but hard to change.
Random groups: under evaluation.
Classes without computers: bring your laptop!
Latecomers answer questions: arrive early!
Noise in class: completely agree. Please be quieter!
The problems are very different from the exam: teaching for learning, not for the test.
Feedback on the exam
How was exam 1?
A minute of honesty.
What could we have done better?
What should we change for the rest of the semester?
What was easy?
What was hard?
Make your predictions.
Upcoming assessments
Mon Dec 7, 8 am: Activity 3
Mon Dec 21, 8 am: Activity 4
Mon Dec 21, 7 pm: Exam 2
Mon Jan 5, 8 am: Activity 5
Fri Jan 8, 3:30 pm: Exam 3
Mon Jan 18, 8 am: Make-up exam
What content will we learn today?
Modules
Advanced functions
Why will we learn this content?
Modules
Advanced functions
Because they make it easier to work with functions, and this knowledge is needed to use code written by third parties.
1. Modules or Libraries
A module is a Python file in which values and parameters have been defined:
* It can be your own
* It can be made by third parties.
1. Modules or Libraries
1.1 Importing a function from a library
Importing specific functions:
from modulo_a_usar import funcion_de_interes
Then funcion_de_interes can be used directly.
End of explanation
"""
import math
import random
print math.exp(5.5)
print math.cos(math.pi / 2)
print random.randrange(10)
print random.choice(['Lunes', 'Martes', 'Viernes'])
"""
Explanation: 1. Modules or Libraries
1.2 Importing a whole module
To import a whole module:
import modulo_a_usar
Then funcion_de_interes can be used as modulo_a_usar.funcion_de_interes
End of explanation
"""
def convertir_segundos(segundos):
horas = segundos / (60 * 60)
minutos = (segundos / 60) % 60
segundos = segundos % 60
return horas, minutos, segundos
h, m, s = convertir_segundos(9814)
print s
"""
Explanation: 1. Modules or Libraries
1.3 Creating Modules
Create a module that computes the area of geometric figures.
Using the module, write the following programs:
1. The first asks for the number of sides and then the edge length, and returns the area.
2. The second asks for the side of a square and the height, and then returns the volume of the pyramid.
(See the attached files; a minimal sketch is also given below.)
2. Advanced Functions
Multiple return values
No return value
Default values
Function as an argument
Recursive functions
2.1 Functions with multiple return values
Simply return the values separated by commas.
This is extremely useful when the function produces several values at the same time.
End of explanation
"""
def minmax():
N = int(raw_input("Cuantos números: "))
j = 1
# FIX ME
while j<=N:
x = float(raw_input("Numero "+str(j)+": "))
# FIX ME
j += 1
return mmin, mmax
# FIX ME
print "Minimo:", mimin
print "Maximo:", mimax
"""
Explanation: 2.1 Functions with multiple return values
Exercise
Write a function that returns, at the same time, the minimum and the maximum of the values entered by the user.
End of explanation
"""
def minmax():
N = int(raw_input("Cuantos números: "))
j = 1
mi_min = +float("inf")
mi_max = -float("inf")
while j<=N:
x = float(raw_input("Numero "+str(j)+": "))
mi_min = min(x,mi_min)
mi_max = max(x,mi_max)
j += 1
return mi_min, mi_max
mmin, mmax = minmax()
print "Minimo:", mmin
print "Maximo:", mmax
"""
Explanation: 2.1 Functions with multiple return values
Solution
Write a function that returns the minimum and the maximum of the values entered by the user.
End of explanation
"""
def imprimir_datos(nombre, apellido, rol, dia, mes, anno):
print ''
print 'Nombre completo:', nombre, apellido
print 'Rol:', rol
print 'Fecha de nacimiento:',
print dia, '/', mes, '/', anno
imprimir_datos('Perico', 'Los Palotes', '201101001-1', 3, 1, 1993)
imprimir_datos('Yayita', 'Vinagre', '201101002-2', 10, 9, 1992)
imprimir_datos('Fulano', 'De Tal', '201101003-3', 14, 5, 1990)
"""
Explanation: 2.2 Functions without a return value
Simply do not return any values.
This is extremely useful when the function does something but does not produce values.
End of explanation
"""
def codigo_palabra(palabra):
codigo=''
# FIX ME
while i<=len(palabra):
# FIX ME
print codigo
codigo_palabra('aczaarltp')
codigo_palabra('axruatgrrreov')
"""
Explanation: 2.2 Functions without a return value
Exercise
Write a function codigo_palabra(codigo) that receives an encrypted code consisting only of letters
and prints the decrypted message. The decryption rule is the following: the decrypted
word is obtained by traversing the word from the end to the beginning, keeping
only the letters at odd positions, starting from the last letter.
codigo_palabra('aczaarltp')
plaza
codigo_palabra('axruatgrrreov')
vergara
End of explanation
"""
def codigo_palabra(palabra):
n = len(palabra)-1
codigo = ""
while n>=0:
codigo += palabra[n]
n -= 2
print codigo
codigo_palabra('aczaarltp')
codigo_palabra('axruatgrrreov')
"""
Explanation: 2.2 Functions without a return value
Solution v1
End of explanation
"""
def codigo_palabra(palabra):
j = 1
codigo = ""
while j<=len(palabra):
codigo += palabra[-j]
j += 2
print codigo
codigo_palabra('aczaarltp')
codigo_palabra('axruatgrrreov')
"""
Explanation: 2.2 Functions without a return value
Solution v2
End of explanation
"""
def f(a, b=2, c="toma b"):
if c=="toma b":
c = b
return a + b*c
print f(10, b=1, c="toma b") # Probar f(10), f(10,2), f(10,15), etc.
"""
Explanation: 2.3 Parameters with default values
Define the parameters and assign default values (defaults).
This is extremely useful when the function can take a variable number of arguments.
End of explanation
"""
# Import library
# Definir funcion
# Casos a utilizar
print perimetro(0.5, 2)
print perimetro(0.5, 3)
print perimetro(0.5, 4)
print perimetro(0.5, 10)
print perimetro(0.5, 20)
print perimetro(0.5, 1000)
print perimetro(0.5)
# Importar modulos necesarios
import math
# Definir funcion
def perimetro(r, n="c"):
if n=="c":
return 2*math.pi*r
elif n<=2:
print "Numero incorrecto de lados"
else:
return 2*r*n*math.tan(math.pi/n)
# Casos a utilizar
print perimetro(0.5, 2)
print perimetro(0.5, 3)
print perimetro(0.5, 4)
print perimetro(0.5, 10)
print perimetro(0.5, 20)
print perimetro(0.5, 1000)
print perimetro(0.5)
"""
Explanation: 2.3 Parameters with default values
Exercise
Write a function that returns the perimeter of a polygon as a function of the radius $r$ and the number of sides $n$:
$$P = 2 \ n \ r \tan\Big(\frac{\pi}{n}\Big)$$
* If the number of sides is not given, return the perimeter of a circle.
* If the number of sides given is less than 3, print "Numero incorrecto de lados" and return nothing.
End of explanation
"""
def sumar(n, f):
s = 0
i = 0
while i<=n:
s = s + f(i)
i += 1
return s
def identidad(x):
return x
def cuadrado(x):
return x ** 2
def cubo(x):
return x ** 3
print sumar(1000, identidad)
print sumar(1000, cuadrado)
print sumar(1000, cubo)
"""
Explanation: 2.4 Functions as parameters
Functions are also objects (a basic type) in Python, and they can be passed to another function as an argument.
This is useful when we want to modify the behavior of a function or to change the kind of evaluation performed.
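The sumar(n, f) helper defined above computes
$$\mathtt{sumar}(n, f) = \sum_{i=0}^{n} f(i),$$
so, for example, sumar(1000, cuadrado) is the sum of the squares $0^2 + 1^2 + \cdots + 1000^2$.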
End of explanation
"""
def aplicar_funcion(a, b, c, funcion):
# FIX ME
# FIX ME
return
def suma(x,y):
return x+y
def multiplicar(x,y):
return x*y
print aplicar_funcion(1, 2, 3, suma)
print aplicar_funcion(1, 2, 3, multiplicar)
print aplicar_funcion("a", "b", "c", suma)
print aplicar_funcion("a", 2, 3, multiplicar)
"""
Explanation: 2.4 Functions as parameters
Exercise
Write a function that receives 3 numbers and a function.
* If the function passed as an argument is the sum, it must return the sum of the 3 numbers.
* If the function passed is the multiplication, it must return the product of the 3 numbers.
End of explanation
"""
def aplicar_funcion(a, b, c, funcion):
ab = funcion(a,b)
abc = funcion(ab, c)
return abc
def suma(x,y):
return x+y
def multiplicar(x,y):
return x*y
print aplicar_funcion(1, 2, 3, suma)
print aplicar_funcion(1, 2, 3, multiplicar)
print aplicar_funcion("a", "b", "c", suma)
print aplicar_funcion("a", 2, 3, multiplicar)
"""
Explanation: 2.4 Functions as parameters
Solution
End of explanation
"""
def factorial(n):
if n <= 0:
return 1
else:
return factorial(n - 1) * n
print factorial(2)
print factorial(10)
"""
Explanation: 2.5 Recursive functions
A function can call another function, even itself.
This is useful when the relation is expressed in terms of itself.
It is crucial to define a stopping condition!
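To see why the stopping condition matters, trace how the call factorial(3) from the cell above unwinds:
factorial(3) = factorial(2) * 3 = (factorial(1) * 2) * 3 = ((factorial(0) * 1) * 2) * 3 = ((1 * 1) * 2) * 3 = 6.
Without the n <= 0 base case, the calls would never stop.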
End of explanation
"""
def suma_digitos(n):
n_str = str(n)
if len(n_str)==1:
# FIX ME
else:
# FIX ME
print suma_digitos(234)
"""
Explanation: 2.5 Recursive functions
Exercise
Write a function that adds the digits of a number recursively:
For example, 234 should return 2+3+4 = 9
End of explanation
"""
def suma_digitos(n):
n_str = str(n)
if len(n_str)==1:
return n
else:
primer_digito = n_str[0]
otros_digitos = n_str[1:]
return int(primer_digito) + suma_digitos(int(otros_digitos))
print suma_digitos(234)
"""
Explanation: 2.5 Recursive functions
Solution
End of explanation
"""
|
deeplook/notebooks | color_scheme_3d/visualising_color_schemes_3d.ipynb | mit | from collections import OrderedDict
# values entered manually from https://brandlive.here.com/colors
here_primary_cols = OrderedDict(
HERE_Aqua = '#48dad0',
HERE_Aqua_UNKNOWN = '#00908a', # unknown status, maybe an error?
HERE_Aqua_Dark = '#00afaa',
HERE_Aqua_75 = '#76e3dc',
HERE_Aqua_50 = '#a3ece7',
HERE_Aqua_25 = '#d1f6f3',
HERE_Gray = '#383c45',
HERE_Gray_Dark = '#0f1621',
HERE_Gray_75 = '#6a6d74',
HERE_Gray_50 = '#9b9da2',
HERE_Gray_25 = '#cdced0',
HERE_Gray_00 = '#ffffff' # a.k.a. white
)
"""
Explanation: <h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Motivation" data-toc-modified-id="Motivation-1"><span class="toc-item-num">1 </span>Motivation</a></span><ul class="toc-item"><li><span><a href="#Shades-of-HERE-Aqua-and-HERE-Gray" data-toc-modified-id="Shades-of-HERE-Aqua-and-HERE-Gray-1.1"><span class="toc-item-num">1.1 </span>Shades of HERE Aqua and HERE Gray</a></span></li></ul></li><li><span><a href="#Discover-Ipyvolume" data-toc-modified-id="Discover-Ipyvolume-2"><span class="toc-item-num">2 </span>Discover Ipyvolume</a></span></li><li><span><a href="#Convert-Hex-to-RGB-Colors" data-toc-modified-id="Convert-Hex-to-RGB-Colors-3"><span class="toc-item-num">3 </span>Convert Hex to RGB Colors</a></span></li><li><span><a href="#Plot-Color-Schemes" data-toc-modified-id="Plot-Color-Schemes-4"><span class="toc-item-num">4 </span>Plot Color Schemes</a></span><ul class="toc-item"><li><span><a href="#All-'Scraped'-Colors" data-toc-modified-id="All-'Scraped'-Colors-4.1"><span class="toc-item-num">4.1 </span>All 'Scraped' Colors</a></span></li><li><span><a href="#Secondary-Colors-Only" data-toc-modified-id="Secondary-Colors-Only-4.2"><span class="toc-item-num">4.2 </span>Secondary Colors Only</a></span></li><li><span><a href="#Primary-Colors-Only" data-toc-modified-id="Primary-Colors-Only-4.3"><span class="toc-item-num">4.3 </span>Primary Colors Only</a></span></li></ul></li><li><span><a href="#First-grade-math-Analysis" data-toc-modified-id="First-grade-math-Analysis-5"><span class="toc-item-num">5 </span>First grade math Analysis</a></span></li><li><span><a href="#Hypothesis" data-toc-modified-id="Hypothesis-6"><span class="toc-item-num">6 </span>Hypothesis</a></span></li><li><span><a href="#Conclusions" data-toc-modified-id="Conclusions-7"><span class="toc-item-num">7 </span>Conclusions</a></span></li></ul></div>
Visualising Color Schemes in 3D
This is a decent experiment in using Ipyvolume (and Three.js) to visualize in 3D the color scheme of some corporate identity as implemented by a sample company, in this case HERE Technologies. Apart from this introduction there is not much more prose inside this notebook. This should be considered a short appetizer to see how easy it is to use Ipyvolume.
N.B.: If you see this Jupyter notebook on GitHub or some online notebook viewer you will likely miss the whole point, namely the rendered 3D plots (Ipyvolume includes warnings where this happens, saying something like: "A Jupyter widget could not be displayed because the widget state could not be found. This could happen if the kernel storing the widget is no longer available, or if the widget state was not saved in the notebook. You may be able to create the widget by running the appropriate cells. [...]")! In this case download/install Ipyvolume and download/open this notebook locally! You might also want to run a local Jupyter inside Docker.
Motivation
Investigate some mysteries with a color scheme description, namely a possible buglet or inconsistency in the description of one shade of the two primary colors, HERE Aqua and Gray. Notice in the left image below how the second row of Aqua is unnamed, while the third is named HERE Dark Aqua. And compare to the shades of Gray on the right side.
Shades of HERE Aqua and HERE Gray
<p float="left">
<img src="here_aqua_gray.png" width="100%" />
</p>
End of explanation
"""
import numpy as np
import ipyvolume as ipv
x, y, z = np.random.random((3, 1000))
ipv.quickscatter(x, y, z, size=1, marker="sphere")
"""
Explanation: To be explored further below…
Discover Ipyvolume
End of explanation
"""
import numpy as np
import ipyvolume.pylab as p3
f = p3.figure()
p3.xyzlabel('x1', 'y1', 'z1')
scale = np.linspace(0, 1, num=10)
p3.scatter(scale, scale, scale, size=3, marker="sphere")
p3.show()
"""
Explanation: From here on we use different imports that allow us to specify more aspects of the plots.
End of explanation
"""
def scatter3d(points, **kwargs):
"Render a 3D scatter plot with some predefined defaults."
f = p3.figure()
p3.xyzlabel(*kwargs.get('xyzlabels', ['x1', 'y1', 'z1']))
kwargs1 = {}
if 'size' not in kwargs:
kwargs1.update(size=3)
if 'marker' not in kwargs:
kwargs1.update(marker='sphere')
kwargs1.update(kwargs)
p3.scatter(*points, **kwargs1)
p3.show()
points = [np.linspace(0, 1, num=10)] * 3
scatter3d(points)
scale = np.linspace(0, 1, num=10)
points = [scale] * 3
color = np.array(points)
scatter3d(points, color=color.T)
scale = np.linspace(0, 1, num=10)
points = [scale] * 3
color = np.array(points)
scatter3d(points, color=color.T, size=(1 - scale) * 8)
x, y, z = np.random.random((3, 1000))
points = [x, y, z]
color = np.array(points).T
scatter3d(points, color=color, size=3)
"""
Explanation: Now let's define a single function with some defaults so that later we have a one-line callable.
End of explanation
"""
tuple(bytes.fromhex("aabbcc")) == (170, 187, 204)
def hex2rgb(hex_color):
"""Convert web hex color to normalized RGB tuple.
E.g. hex2rgb('#aabbcc') -> (0.6666666666666666, 0.7333333333333333, 0.8)
"""
clean_hex_color = hex_color[1:] if hex_color.startswith('#') else hex_color
r, g, b = tuple(bytes.fromhex(clean_hex_color))
return (r/255., g/255., b/255.)
hex2rgb("aabbcc") == (170/255., 187/255., 204/255.)
"""
Explanation: Convert Hex to RGB Colors
End of explanation
"""
# Define a function with some defaults to make this a one-line call.
def scatter_colors_3d(colors, **kwargs):
"Render a 3D scatter plot with defaults for a list of hex colors."
rgb = np.array([hex2rgb(val) for val in colors])
r = rgb[:, 0]
g = rgb[:, 1]
b = rgb[:, 2]
color = np.array((r, g, b)).T
kwargs1 = {}
if 'size' not in kwargs:
kwargs1.update(size=3)
if 'marker' not in kwargs:
kwargs1.update(marker='sphere')
kwargs1.update(color=color)
kwargs1.update(kwargs)
f = p3.figure()
p3.xyzlabel(*kwargs.get('xyzlabels', ['red', 'green', 'blue']))
p3.scatter(r, g, b, **kwargs1)
p3.show()
import re
import requests
def scrape_hex_colors(url):
"Return all Hex web colors 'scraped' from some webpage."
html = requests.get(url).content.decode('utf-8')
return re.findall('#[0-9a-fA-F]{6,6}', html)
# Mind the 's' in https!
url = 'https://brandlive.here.com/colors'
here_all_colors = set(scrape_hex_colors(url))
# Just to be sure we have the values even if the website should be changed later:
# here_all_colors = '''#a3ece7 #c53580 #c41c33 #48dad0 #0f1621 #b7c99d #6f83bd #7dbae4
#00afaa #3f59a7 #d35566 #673a93 #6a6d74 #52a3db #f5b086 #00908a #8d6bae #d468a0
#383c45 #b39cc9 #cdced0 #f1894a #44ca9d #fbca40 #e29abf #06b87c #a8d1ed #fab800
#e18d99 #ec610e #76e3dc #ffffff #94af6d #9b9da2 #d1f6f3 #fcdb7f #9facd3 #70943c
#82dbbd'''.split()
scatter_colors_3d(here_all_colors)
"""
Explanation: Plot Color Schemes
All 'Scraped' Colors
Before you continue, have a look at the color scheme description to know what to expect for the primary and secondary colors used in the full color scheme!
End of explanation
"""
all_cols = here_all_colors
pri_cols = here_primary_cols.values()
sec_cols = {*all_cols}.difference(pri_cols)
scatter_colors_3d(sec_cols)
"""
Explanation: Secondary Colors Only
End of explanation
"""
here_primary_cols
scatter_colors_3d(here_primary_cols.values())
"""
Explanation: Primary Colors Only
End of explanation
"""
|
dietmarw/EK5312_ElectricalMachines | Chapman/Ch9-Problem_9-09.ipynb | unlicense | %pylab notebook
"""
Explanation: Excercises Electric Machinery Fundamentals
Chapter 9
Problem 9-9
End of explanation
"""
p = 12
n_m = 600 # [r/min]
"""
Explanation: Description
How many pulses per second must be supplied to the control unit of the motor in Problem 9-8 to achieve
a rotational speed of 600 r/min?
End of explanation
"""
n_pulses = 3*p*n_m
print('''
n_pulses = {:.0f} pulses/min = {:.0f} pulses/sec
============================================'''.format(n_pulses, n_pulses/60))
"""
Explanation: SOLUTION
From Equation (9-20),
$$n_m = \frac{1}{3p}n_\text{pulses}$$
so the required pulse rate is $n_\text{pulses} = 3p\,n_m$.
End of explanation
"""
|
oldtopos/SDCND_Project1 | P1.ipynb | mit | #importing some useful packages
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import cv2
%matplotlib inline
"""
Explanation: Self-Driving Car Engineer Nanodegree
Project: Finding Lane Lines on the Road
In this project, you will use the tools you learned about in the lesson to identify lane lines on the road. You can develop your pipeline on a series of individual images, and later apply the result to a video stream (really just a series of images). Check out the video clip "raw-lines-example.mp4" (also contained in this repository) to see what the output should look like after using the helper functions below.
Once you have a result that looks roughly like "raw-lines-example.mp4", you'll need to get creative and try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4". Ultimately, you would like to draw just one line for the left side of the lane, and one for the right.
In addition to implementing code, there is a brief writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a write up template that can be used to guide the writing process. Completing both the code in the Ipython notebook and the writeup template will cover all of the rubric points for this project.
Let's have a look at our first image called 'test_images/solidWhiteRight.jpg'. Run the 2 cells below (hit Shift-Enter or the "play" button above) to display the image.
Note: If, at any point, you encounter frozen display windows or other confounding issues, you can always start again with a clean slate by going to the "Kernel" menu above and selecting "Restart & Clear Output".
The tools you have are color selection, region of interest selection, grayscaling, Gaussian smoothing, Canny Edge Detection and Hough Transform line detection. You are also free to explore and try other techniques that were not presented in the lesson. Your goal is to piece together a pipeline to detect the line segments in the image, then average/extrapolate them and draw them onto the image for display (as below). Once you have a working pipeline, try it out on the video stream below.
<figure>
<img src="examples/line-segments-example.jpg" width="380" alt="Combined Image" />
<figcaption>
<p></p>
<p style="text-align: center;"> Your output should look something like this (above) after detecting line segments using the helper functions below </p>
</figcaption>
</figure>
<p></p>
<figure>
<img src="examples/laneLines_thirdPass.jpg" width="380" alt="Combined Image" />
<figcaption>
<p></p>
<p style="text-align: center;"> Your goal is to connect/average/extrapolate line segments to get output like this</p>
</figcaption>
</figure>
Run the cell below to import some packages. If you get an import error for a package you've already installed, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, consult the forums for more troubleshooting tips.
Import Packages
End of explanation
"""
#reading in an image
image = mpimg.imread('test_images/solidWhiteRight.jpg')
#printing out some stats and plotting
print('This image is:', type(image), 'with dimensions:', image.shape)
plt.imshow(image) # if you wanted to show a single color channel image called 'gray', for example, call as plt.imshow(gray, cmap='gray')
"""
Explanation: Read in an Image
End of explanation
"""
import math
def grayscale(img):
"""Applies the Grayscale transform
This will return an image with only one color channel
but NOTE: to see the returned image as grayscale
(assuming your grayscaled image is called 'gray')
you should call plt.imshow(gray, cmap='gray')"""
return cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
# Or use BGR2GRAY if you read an image with cv2.imread()
# return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
def canny(img, low_threshold, high_threshold):
"""Applies the Canny transform"""
return cv2.Canny(img, low_threshold, high_threshold)
def gaussian_blur(img, kernel_size):
"""Applies a Gaussian Noise kernel"""
return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0)
def region_of_interest(img, vertices):
"""
Applies an image mask.
Only keeps the region of the image defined by the polygon
formed from `vertices`. The rest of the image is set to black.
"""
#defining a blank mask to start with
mask = np.zeros_like(img)
#defining a 3 channel or 1 channel color to fill the mask with depending on the input image
if len(img.shape) > 2:
channel_count = img.shape[2] # i.e. 3 or 4 depending on your image
ignore_mask_color = (255,) * channel_count
else:
ignore_mask_color = 255
#filling pixels inside the polygon defined by "vertices" with the fill color
cv2.fillPoly(mask, vertices, ignore_mask_color)
#returning the image only where mask pixels are nonzero
masked_image = cv2.bitwise_and(img, mask)
return masked_image
def draw_lines(img, lines, color=[255, 0, 0], thickness=5):
"""
NOTE: this is the function you might want to use as a starting point once you want to
average/extrapolate the line segments you detect to map out the full
extent of the lane (going from the result shown in raw-lines-example.mp4
to that shown in P1_example.mp4).
Think about things like separating line segments by their
slope ((y2-y1)/(x2-x1)) to decide which segments are part of the left
line vs. the right line. Then, you can average the position of each of
the lines and extrapolate to the top and bottom of the lane.
This function draws `lines` with `color` and `thickness`.
Lines are drawn on the image inplace (mutates the image).
If you want to make the lines semi-transparent, think about combining
this function with the weighted_img() function below
"""
for line in lines:
for x1,y1,x2,y2 in line:
cv2.line(img, (x1, y1), (x2, y2), color, thickness)
def annotationLine( slopeAvgs, theLines, lineIndex, line_img, vertices, lastLines ):
"""
Create an annotation line using the longest segment and extending to ROI extent
"""
try:
if theLines[ lineIndex ] is None:
return
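        # slopeAvgs holds [running slope for line 0, running slope for line 1, frame count, smoothing window].
        # The branches below keep a running average of the slope; once more frames than the
        # smoothing window (slopeAvgs[3]) have been seen, it behaves like a moving average over that window.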
if slopeAvgs[ 2 ] == 0:
slopeAvgs[ lineIndex ] = theLines[ lineIndex ][ 1 ]
elif slopeAvgs[ 2 ] < slopeAvgs[ 3 ]:
slopeAvgs[ lineIndex ] = slopeAvgs[ lineIndex ] * ((slopeAvgs[ 2 ] - 1) / slopeAvgs[ 2 ]) + (theLines[ lineIndex ][ 1 ] / slopeAvgs[ 2 ])
else:
slopeAvgs[ lineIndex ] = slopeAvgs[ lineIndex ] * ((slopeAvgs[ 3 ] - 1) / slopeAvgs[ 3 ]) + (theLines[ lineIndex ][ 1 ] / slopeAvgs[ 3 ])
currentLine = theLines[ lineIndex ]
currentSlope = slopeAvgs[ lineIndex ]
lineb = currentLine[ 0 ][ 1 ] - currentLine[ 0 ][ 0 ] * currentSlope
x1 = (vertices[ 0 ][ 0 ][ 1 ] - lineb) / currentSlope
x2 = (vertices[ 0 ][ 1 ][ 1 ] - lineb) / currentSlope
newLine = [[(int(x1), vertices[ 0 ][ 0 ][ 1 ],int(x2),vertices[ 0 ][ 1 ][ 1 ]) ]]
draw_lines(line_img, newLine,thickness=10,color=[255, 0, 255])
lastLines[ lineIndex ] = newLine
except:
draw_lines(line_img, lastLines[ lineIndex ],thickness=10,color=[255, 0, 255])
def hough_lines(img, rho, theta, threshold, min_line_len, max_line_gap, vertices=None, draw_segments=False):
"""
`img` should be the output of a Canny transform.
Returns an image with hough lines drawn.
"""
lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]), minLineLength=min_line_len, maxLineGap=max_line_gap)
line_img = np.zeros((img.shape[0], img.shape[1], 3), dtype=np.uint8)
#
# Find longest segements in frame
#
theSlopes = [ 0.0, 0.0 ]
theCounts = [ 0.0, 0.0 ]
theLengths = [ 0.0, 0.0 ]
theLines = [ None, None ]
for aLine in lines:
aLine = aLine[ 0 ]
lineSlope = ( (aLine[ 3 ] - aLine[ 1 ]) / (aLine[ 2 ] - aLine[ 0 ]) )
#
# Filter near horizontal segments
#
if abs( lineSlope ) < 0.05:
continue
lineLength = math.sqrt( (aLine[ 3 ] - aLine[ 1 ]) ** 2 + (aLine[ 2 ] - aLine[ 0 ]) ** 2 )
if lineSlope > 0:
theSlopes[ 0 ] += lineSlope
theCounts[ 0 ] += 1
if lineLength > theLengths[ 0 ]:
theLines[ 0 ] = [aLine,lineSlope,lineLength]
theLengths[ 0 ] = lineLength
else:
theSlopes[ 1 ] += lineSlope
theCounts[ 1 ] += 1
if lineLength > theLengths[ 1 ]:
theLines[ 1 ] = [aLine,lineSlope,lineLength]
theLengths[ 1 ] = lineLength
#
# Draw ROI
#
if 0:
newLine = [[(vertices[ 0 ][ 0 ][ 0 ], vertices[ 0 ][ 0 ][ 1 ],vertices[ 0 ][ 1 ][ 0 ],vertices[ 0 ][ 1 ][ 1 ]) ]]
draw_lines( line_img, newLine,color=[0, 0, 255] )
newLine = [[(vertices[ 0 ][ 2 ][ 0 ], vertices[ 0 ][ 2 ][ 1 ],vertices[ 0 ][ 3 ][ 0 ],vertices[ 0 ][ 3 ][ 1 ]) ]]
draw_lines( line_img, newLine,color=[0, 0, 255] )
if draw_segments:
draw_lines(line_img, lines)
#
# Take longest lines and extend to ROI
#
annotationLine( slopeAvgs, theLines, 0, line_img, vertices, lastLines )
annotationLine( slopeAvgs, theLines, 1, line_img, vertices, lastLines )
slopeAvgs[ 2 ] += 1
return line_img
# Python 3 has support for cool math symbols.
def weighted_img(img, initial_img, α=0.8, β=1., λ=0.):
"""
`img` is the output of the hough_lines(), An image with lines drawn on it.
Should be a blank image (all black) with lines drawn on it.
`initial_img` should be the image before any processing.
The result image is computed as follows:
initial_img * α + img * β + λ
NOTE: initial_img and img must be the same shape!
"""
return cv2.addWeighted(initial_img, α, img, β, λ)
"""
Explanation: Ideas for Lane Detection Pipeline
Some OpenCV functions (beyond those introduced in the lesson) that might be useful for this project are:
cv2.inRange() for color selection (a short illustrative sketch follows this cell)
cv2.fillPoly() for regions selection
cv2.line() to draw lines on an image given endpoints
cv2.addWeighted() to coadd / overlay two images
cv2.cvtColor() to grayscale or change color
cv2.imwrite() to output images to file
cv2.bitwise_and() to apply a mask to an image
Check out the OpenCV documentation to learn about these and discover even more awesome functionality!
Helper Functions
Below are some helper functions to help get you started. They should look familiar from the lesson!
End of explanation
"""
import os
os.listdir("test_images/")
"""
Explanation: Test Images
Build your pipeline to work on the images in the directory "test_images"
You should make sure your pipeline works well on these images before you try the videos.
End of explanation
"""
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import cv2
imagePath = "test_images/"
arrImages = os.listdir("test_images/")
figureIndex = 0
slopeAvgs = [ 0.0, 0.0, 0, 30 ]
lastLines = [ None, None ]
for anImage in arrImages:
# Read in and grayscale the image
image = mpimg.imread("test_images/" + anImage)
gray = grayscale(image)
# Define a kernel size and apply Gaussian smoothing
kernel_size = 5
blur_gray = gaussian_blur( gray, kernel_size )
# Define our parameters for Canny and apply
low_threshold = 50
high_threshold = 150
edges = canny(blur_gray, low_threshold, high_threshold)
# Next we'll create a masked edges image using cv2.fillPoly()
mask = np.zeros_like(edges)
ignore_mask_color = 255
# This time we are defining a four sided polygon to mask
imshape = image.shape
vertices = np.array([[( 95, imshape[0] ), ( 460, 310 ), ( 480, 310 ), ( 900, imshape[0])]], dtype=np.int32)
masked_edges = region_of_interest( edges, vertices )
# Define the Hough transform parameters
# Make a blank the same size as our image to draw on
rho = 1 # distance resolution in pixels of the Hough grid
theta = np.pi/180 # angular resolution in radians of the Hough grid
threshold = 5 # minimum number of votes (intersections in Hough grid cell)
min_line_length = 10 #minimum number of pixels making up a line
max_line_gap = 1 # maximum gap in pixels between connectable line segments
line_image = np.copy(image)*0 # creating a blank to draw lines on
# Run Hough on edge detected image
line_image = hough_lines( masked_edges, rho, theta, threshold, min_line_length, max_line_gap, vertices, True )
# Create a "color" binary image to combine with line image
color_edges = np.dstack((edges, edges, edges))
lines_edges = weighted_img( color_edges, line_image)
plt.figure(figureIndex)
plt.imshow(lines_edges)
plt.figure(figureIndex + 1)
plt.imshow(image)
figureIndex = figureIndex + 2
"""
Explanation: Build a Lane Finding Pipeline
Build the pipeline and run your solution on all test_images. Make copies into the test_images_output directory, and you can use the images in your writeup report.
Try tuning the various parameters, especially the low and high Canny thresholds as well as the Hough lines parameters.
End of explanation
"""
# Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from IPython.display import HTML
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import cv2
def process_image(image):
gray = grayscale(image)
# Define a kernel size and apply Gaussian smoothing
kernel_size = 5
blur_gray = gaussian_blur( gray, kernel_size )
# Define our parameters for Canny and apply
low_threshold = 50
high_threshold = 150
edges = canny(blur_gray, low_threshold, high_threshold)
# Next we'll create a masked edges image using cv2.fillPoly()
mask = np.zeros_like(edges)
ignore_mask_color = 255
# This time we are defining a four sided polygon to mask
imshape = image.shape
# Regular vid ROIs
vertices = np.array([[( 125, imshape[0] ), ( 400, 340 ), ( 520, 340 ), ( 900, imshape[0])]], dtype=np.int32)
# Challenge ROI
# vertices = np.array([[( 155 + 80, imshape[0] - 60 ), ( 590, 450 ), ( 750, 450 ), ( 1200 - 60, imshape[0] - 60)]], dtype=np.int32)
masked_edges = region_of_interest( edges, vertices )
# Define the Hough transform parameters
# Make a blank the same size as our image to draw on
rho = 1 # distance resolution in pixels of the Hough grid
theta = np.pi/180 # angular resolution in radians of the Hough grid
threshold = 10 # minimum number of votes (intersections in Hough grid cell)
min_line_length = 15 #minimum number of pixels making up a line
max_line_gap = 1 # maximum gap in pixels between connectable line segments
line_image = np.copy(image)*0 # creating a blank to draw lines on
# Run Hough on edge detected image
line_image = hough_lines( masked_edges, rho, theta, threshold, min_line_length, max_line_gap, vertices, False )
# Create a "color" binary image to combine with line image
color_edges = np.dstack((edges, edges, edges))
lines_edges = weighted_img( image, line_image)
return lines_edges
"""
Explanation: Test on Videos
You know what's cooler than drawing lanes over images? Drawing lanes over video!
We can test our solution on two provided videos:
solidWhiteRight.mp4
solidYellowLeft.mp4
Note: if you get an import error when you run the next cell, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, consult the forums for more troubleshooting tips.
If you get an error that looks like this:
NeedDownloadError: Need ffmpeg exe.
You can download it by calling:
imageio.plugins.ffmpeg.download()
Follow the instructions in the error message and check out this forum post for more troubleshooting tips across operating systems.
End of explanation
"""
fileName = 'solidWhiteRight.mp4'
white_output = 'test_videos_output/' + fileName
slopeAvgs = [ 0.0, 0.0, 0, 30 ]
lastLines = [ None, None ]
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4").subclip(0,5)
clip1 = VideoFileClip("test_videos/" + fileName )
white_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!!
%time white_clip.write_videofile(white_output, audio=False)
"""
Explanation: Let's try the one with the solid white lane on the right first ...
End of explanation
"""
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(white_output))
"""
Explanation: Play the video inline, or if you prefer find the video in your filesystem (should be in the same directory) and play it in your video player of choice.
End of explanation
"""
fileName = 'solidYellowLeft.mp4'
yellow_output = 'test_videos_output/' + fileName
slopeAvgs = [ 0.0, 0.0, 0, 30 ]
lastLines = [ None, None ]
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4').subclip(0,5)
clip2 = VideoFileClip('test_videos/' + fileName)
yellow_clip = clip2.fl_image(process_image)
%time yellow_clip.write_videofile(yellow_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(yellow_output))
"""
Explanation: Improve the draw_lines() function
At this point, if you were successful with making the pipeline and tuning parameters, you probably have the Hough line segments drawn onto the road, but what about identifying the full extent of the lane and marking it clearly as in the example video (P1_example.mp4)? Think about defining a line to run the full length of the visible lane based on the line segments you identified with the Hough Transform. As mentioned previously, try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4".
Go back and modify your draw_lines function accordingly and try re-running your pipeline. The new output should draw a single, solid line over the left lane line and a single, solid line over the right lane line. The lines should start from the bottom of the image and extend out to the top of the region of interest.
Now for the one with the solid yellow lane on the left. This one's more tricky!
End of explanation
"""
fileName = 'challenge.mp4'
challenge_output = 'test_videos_output/' + fileName
slopeAvgs = [ 0.0, 0.0, 0, 30 ]
lastLines = [ None, None ]
####
#### NOTE: the ROI must be changed in process_image for this to work properly
####
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip3 = VideoFileClip('test_videos/challenge.mp4').subclip(0,5)
clip3 = VideoFileClip('test_videos/' + fileName)
challenge_clip = clip3.fl_image(process_image)
%time challenge_clip.write_videofile(challenge_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(challenge_output))
"""
Explanation: Writeup and Submission
If you're satisfied with your video outputs, it's time to make the report writeup in a pdf or markdown file. Once you have this Ipython notebook ready along with the writeup, it's time to submit for review! Here is a link to the writeup template file.
Optional Challenge
Try your lane finding pipeline on the video below. Does it still work? Can you figure out a way to make it more robust? If you're up for the challenge, modify your pipeline so it works with this video and submit it along with the rest of your project!
End of explanation
"""
|
xunilrj/sandbox | courses/MITx/MITx 6.86x Machine Learning with Python-From Linear Models to Deep Learning/SVM%20-%20PyTorch.ipynb | apache-2.0 | %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
X = np.array([[0,0],[2,0],[3,0],[0,2],[2,2],[5,1],[5,2],[2,4],[4,4],[5,5]])
Y = np.array([-1,-1,-1,-1,-1,1,1,1,1,1])
YColor = np.array(["red","red","red","red","red","green","green","green","green","green"])
plt.scatter(x=X[:, 0], y=X[:, 1], c=YColor)
"""
Explanation: Support Vector Machines
In this notebook, we will build a Support Vector Machine (SVM) that will find the optimal hyperplane that maximizes the margin between two toy data classes using gradient descent. An SVM is a supervised machine learning algorithm which can be used for both classification and regression problems, but it's usually used for classification. Given 2 or more labeled classes of data, it acts as a discriminative classifier, formally defined by an optimal hyperplane that separates all the classes. New examples that are then mapped into that same space can then be categorized based on which side of the gap they fall.
Support vectors are the data points nearest to the hyperplane, the points of a data set that, if removed, would alter the position of the dividing hyperplane. Because of this, they can be considered the critical elements of a data set; they are what help us build our SVM.
A hyperplane is a linear decision surface that splits the space into two parts; a hyperplane is a binary classifier. Geometry tells us that a hyperplane is a subspace of one dimension less than its ambient space. For instance, a hyperplane of an n-dimensional space is a flat subset with dimension n − 1. By its nature, it separates the space into two half spaces.
Data
End of explanation
"""
import torch
import torch.nn as nn
import torch.optim as optim
from torch.autograd import Variable
class SVM(nn.Module):
def __init__(self):
super().__init__()
self.svm = nn.Linear(2, 1)
def forward(self, x):
fwd = self.svm(x)
return fwd
"""
Explanation: Support Vector Machine
We will be using PyTorch to create our SVM.
End of explanation
"""
import math
from tqdm.notebook import tqdm
learning_rate = 0.001
epochs = 100000
batch_size = 1
X = torch.FloatTensor(X)
Y = torch.FloatTensor(Y)
N = len(Y)
last_model = SVM()
last_w = None
last_b = None
for tries in tqdm(range(100)):
#print("try:", tries)
model = SVM()
optimizer = optim.SGD(model.parameters(), lr=learning_rate)
model.train()
for epoch in tqdm(range(epochs)):
perm = torch.randperm(N)
sum_loss = 0
for i in range(0, N, batch_size):
x = X[perm[i:i + batch_size]]
y = Y[perm[i:i + batch_size]]
x = Variable(x)
y = Variable(y)
optimizer.zero_grad()
output = model(x)
loss = torch.clamp(1 - output * y, min=0)
loss = torch.mean(loss) # hinge loss
loss.backward()
optimizer.step()
sum_loss += loss.item()
if(sum_loss <= 0.00):
#print("Epoch {}, Loss: {}".format(epoch, sum_loss))
last_model = model
last_w = w = model.svm.weight.detach().numpy()
last_b = b = model.svm.bias.item()
print(w, b, "Margin Width: ", 2.0/math.sqrt(w[0,0]*w[0,0]+w[0,1]*w[0,1]))
break
print("Original Model")
print(model.state_dict())
model.eval()
sum_loss = 0
for i in range(0, N):
x = X[i,:]
y = Y[i]
output = model(x)
loss = torch.clamp(1 - output * y, min=0)
#loss = torch.mean(loss)
sum_loss += loss.item()
print("Hinge Loss: ", sum_loss)
print("Halved Model")
model.state_dict()["svm.weight"][:] = model.svm.weight / 2
model.state_dict()["svm.bias"][:] = model.svm.bias / 2
print(model.state_dict())
model.eval()
sum_loss = 0
for i in range(0, N):
x = X[i,:]
y = Y[i]
output = model(x)
loss = torch.clamp(1 - output * y, min=0)
#loss = torch.mean(loss)
sum_loss += loss.item()
print("Hinge Loss: ", sum_loss)
model.state_dict()["svm.weight"][:] = model.svm.weight * 2
model.state_dict()["svm.bias"][:] = model.svm.bias * 2
def plotline(a, b, c, offset, color):
x = np.linspace(-1, 6, 2)
y = (offset -a * x - c) / b
plt.plot(x, y, c=color)
plt.scatter(x=X[:, 0], y=X[:, 1], c=YColor)
print(last_w, last_b)
plotline(last_w[0,0], last_w[0,1], last_b, 0, "black")
plotline(last_w[0,0], last_w[0,1], last_b, 1, "green")
plotline(last_w[0,0], last_w[0,1], last_b,-1, "red")
"""
Explanation: Our Support Vector Machine (SVM) is a subclass of the nn.Module class and to initialize our SVM, we call the base class' init function. Our Forward function applies a linear transformation to the incoming data: y = Ax + b.
Training
Now let's go ahead and train our SVM. We minimize the hinge loss, mean(max(0, 1 − y·f(x))), with stochastic gradient descent, so a sample is penalized whenever it falls on the wrong side of the margin.
End of explanation
"""
|
jcharit1/Amazon-Fine-Foods-Reviews | code/.ipynb_checkpoints/experimental-checkpoint (jimmy-Precision-T1600's conflicted copy 2017-05-14).ipynb | mit | import os
import pandas as pd
import numpy as np
import scipy as sp
import seaborn as sns
import matplotlib.pyplot as plt
import json
from IPython.display import Image
from IPython.core.display import HTML
import tensorflow as tf
retval=os.chdir("..")
clean_data=pd.read_pickle('./clean_data/clean_data.pkl')
clean_data.head()
kept_cols=['helpful']
kept_cols.extend(clean_data.columns[9:])
"""
Explanation: Experimental Model Building
Code for building the models
Author: Jimmy Charité
Email: jimmy.charite@gmail.com
Experimenting with tensorflow
End of explanation
"""
my_rand_state=0
test_size=0.25
from sklearn.model_selection import train_test_split
X = (clean_data[kept_cols].iloc[:,1:]).as_matrix()
y = (clean_data[kept_cols].iloc[:,0]).tolist()
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=test_size,
random_state=my_rand_state)
"""
Explanation: Training and Testing Split
End of explanation
"""
feature_columns = [tf.contrib.layers.real_valued_column("", dimension=len(X[0,:]))]
dnn_clf=tf.contrib.learn.DNNClassifier(feature_columns=feature_columns,
hidden_units=[200,100,50],
model_dir='./other_output/tf_model')
from sklearn.preprocessing import StandardScaler
std_scale=StandardScaler()
class PassData(object):
'''
Callable object that can be initialized and
used to pass data to tensorflow
'''
def __init__(self,X,y):
self.X=X
self.y=y
def scale(self):
self.X = std_scale.fit_transform(self.X, self.y)
    def __call__(self):
        # Return the (scaled) data held by this instance, not the module-level globals.
        return tf.constant(self.X), tf.constant(self.y)
train_data=PassData(X_train, y_train)
train_data.scale()
dnn_clf.fit(input_fn=train_data,steps=1000)
"""
Explanation: Setting Up Tensor Flow
End of explanation
"""
from sklearn.metrics import roc_curve, auc
nb_fpr, nb_tpr, _ = roc_curve(y_test,
nb_clf_est_b.predict_proba(X_test)[:,1])
nb_roc_auc = auc(nb_fpr, nb_tpr)
qda_fpr, qda_tpr, _ = roc_curve(y_test,
qda_clf_est_b.predict_proba(X_test)[:,1])
qda_roc_auc = auc(qda_fpr, qda_tpr)
log_fpr, log_tpr, _ = roc_curve(y_test,
log_clf_est_b.predict_proba(X_test)[:,1])
log_roc_auc = auc(log_fpr, log_tpr)
rf_fpr, rf_tpr, _ = roc_curve(y_test,
rf_clf_est_b.predict_proba(X_test)[:,1])
rf_roc_auc = auc(rf_fpr, rf_tpr)
plt.plot(nb_fpr, nb_tpr, color='cyan', linestyle='--',
label='NB (area = %0.2f)' % nb_roc_auc, lw=2)
plt.plot(qda_fpr, qda_tpr, color='indigo', linestyle='--',
label='QDA (area = %0.2f)' % qda_roc_auc, lw=2)
plt.plot(log_fpr, log_tpr, color='seagreen', linestyle='--',
label='LOG (area = %0.2f)' % log_roc_auc, lw=2)
plt.plot(rf_fpr, rf_tpr, color='blue', linestyle='--',
label='RF (area = %0.2f)' % rf_roc_auc, lw=2)
plt.plot([0, 1], [0, 1], linestyle='--', lw=2, color='k',
label='Luck')
plt.xlim([-0.05, 1.05])
plt.ylim([-0.05, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curves of Basic Models Using BOW & Macro-Text Stats')
plt.legend(loc="lower right")
plt.savefig('./plots/ROC_Basic_BOW_MERGED.png', bbox_inches='tight')
plt.show()
"""
Explanation: Testing Estimators
End of explanation
"""
|
Iwan-Zotow/VV | C25_GP3.ipynb | mit | import math
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
import BEAMphsf
import text_loader
import H1Dn
import H1Du
import ListTable
%matplotlib inline
"""
Explanation: Validation and Verification of the 25mm collimator simulation, GP3
Here we provide code and output that verify and validate the 25mm collimator simulation. We're using the simulation phase space file output together with the simulation input to check the validity of the result. This is a 25-source machine, so the length of the source is increased to 18mm, the source is moved forward by 3mm, and the activity should be 180Ci.
End of explanation
"""
C = 25
phsfname = "PHSF" + "." + str(C)
phsfname = "../" + phsfname
print ("We're reading the {1}mm phase space file = {0}".format(phsfname, C))
"""
Explanation: First, set the filename to the phase space file we want to examine and read the PhSF header
End of explanation
"""
events, nof_photons, nof_electrons, nof_positrons = text_loader.load_events(phsfname, -1)
print("Number of loaded events: {0}".format(len(events)))
print("Number of loaded photons: {0}".format(nof_photons))
print("Number of loaded electrons: {0}".format(nof_electrons))
print("Number of loaded positrons: {0}".format(nof_positrons))
print("Yield: {0}".format(nof_photons/40000000000.0))
"""
Explanation: Checking PhSF header parameters
End of explanation
"""
# make scale with explicit bins at 1.17 MeV and 1.33 MeV
nbins = 5
scale = BEAMphsf.make_energy_scale(nbins, lo = 0.01, me = 1.1700001, hi = 1.3300001)
he = H1Dn.H1Dn(scale)
for e in events:
WT = e[0]
E = e[1]
he.fill(E, WT)
print("Number of events in histogram: {0}".format(he.nof_events()))
print("Integral in histogram: {0}".format(he.integral()))
print("Underflow bin: {0}".format(he.underflow()))
print("Overflow bin: {0}".format(he.overflow()))
"""
Explanation: Energy Spectrum tests
We expect the energy spectrum to be a scattering background together with peaks δ(E-1.17) and δ(E-1.33). Below we're trying to prove this statement. We will draw the distributions and histograms to estimate the influence of the background scattering and get the data about the δ-peaks.
We're filling the energy histogram now and running basic checks
We're building a scale with 5 bins in the region between 1.17 and 1.33 MeV; all other bins below 1.17 are of about the same size as those 5
End of explanation
"""
X = []
Y = []
W = []
scale = he.x()
n = len(scale)
norm = 1.0/he.integral()
sum = 0.0
for k in range (-1, he.size()+1):
x = 0.0
w = (he.lo() - x)
if k == he.size():
w = (scale[-1]-scale[-2])
x = he.hi()
elif k >= 0:
w = (scale[k+1] - scale[k])
x = scale[k]
d = he[k] # data from bin with index k
y = d[0] / w # first part of bin is collected weights
y = y * norm
X.append(x)
Y.append(y)
W.append(w)
sum += y*w
print("PDF normalization: {0}".format(sum))
E133_5 = Y[-2]
E117_5 = Y[-2-nbins]
p1 = plt.bar(X, Y, W, color='r')
plt.xlabel('Energy(MeV)')
plt.ylabel('PDF of the photons')
plt.title('Energy distribution')
plt.grid(True);
plt.tick_params(axis='x', direction='out')
plt.tick_params(axis='y', direction='out')
plt.show()
# saving peak values
print("Peak PDF value at 1.33 MeV: {0}".format(E133_5))
print("Peak PDF value at 1.17 MeV: {0}".format(E117_5))
"""
Explanation: The underflow bin is empty, as is the overflow bin. This is good because we do not expect events above 1.33 MeV or below ECUT.
Drawing the Probability Density Function for 5 bins between the 1.33 MeV and 1.17 MeV peaks.
End of explanation
"""
# make scale with explicit bins at 1.17 MeV and 1.33 MeV
nbins = 10
scale = BEAMphsf.make_energy_scale(nbins, lo = 0.01, me = 1.1700001, hi = 1.3300001)
he = H1Dn.H1Dn(scale)
for e in events:
WT = e[0]
E = e[1]
he.fill(E, WT)
print("Number of events in histogram: {0}".format(he.nof_events()))
print("Integral in histogram: {0}".format(he.integral()))
print("Underflow bin: {0}".format(he.underflow()))
print("Overflow bin: {0}".format(he.overflow()))
"""
Explanation: Filling the energy histogram with double the number of bins
We're building a scale with 10 bins in the region between 1.17 and 1.33 MeV; all other bins below 1.17 are of about the same size as those 10
End of explanation
"""
X = []
Y = []
W = []
scale = he.x()
n = len(scale)
norm = 1.0/he.integral()
sum = 0.0
for k in range (-1, he.size()+1):
x = 0.0
w = (he.lo() - x)
if k == he.size():
w = (scale[-1]-scale[-2])
x = he.hi()
elif k >= 0:
w = (scale[k+1] - scale[k])
x = scale[k]
d = he[k] # data from bin with index k
y = d[0] / w # first part of bin is collected weights
y = y * norm
X.append(x)
Y.append(y)
W.append(w)
sum += y*w
print("PDF normalization: {0}".format(sum))
E133_10 = Y[-2]
E117_10 = Y[-2-nbins]
p1 = plt.bar(X, Y, W, color='r')
plt.xlabel('Energy(MeV)')
plt.ylabel('PDF of the photons')
plt.title('Energy distribution')
plt.grid(True);
plt.tick_params(axis='x', direction='out')
plt.tick_params(axis='y', direction='out')
plt.show()
# saving peak values
print("Peak PDF value at 1.33 MeV: {0}".format(E133_10))
print("Peak PDF value at 1.17 MeV: {0}".format(E117_10))
"""
Explanation: The underflow bin is empty, as is the overflow bin. This is good because we do not expect events above 1.33 MeV or below ECUT.
Drawing the Probability Density Function for 10 bins between the 1.33 MeV and 1.17 MeV peaks.
End of explanation
"""
# make scale with explicit bins at 1.17 MeV and 1.33 MeV
nbins = 20
scale = BEAMphsf.make_energy_scale(nbins, lo = 0.01, me = 1.1700001, hi = 1.3300001)
he = H1Dn.H1Dn(scale)
for e in events:
WT = e[0]
E = e[1]
he.fill(E, WT)
print("Number of events in histogram: {0}".format(he.nof_events()))
print("Integral in histogram: {0}".format(he.integral()))
print("Underflow bin: {0}".format(he.underflow()))
print("Overflow bin: {0}".format(he.overflow()))
"""
Explanation: Filling the energy histogram with quadruple the number of bins
We're building a scale with 20 bins in the region between 1.17 and 1.33 MeV; all other bins below 1.17 are of about the same size as those 20.
End of explanation
"""
X = []
Y = []
W = []
scale = he.x()
n = len(scale)
norm = 1.0/he.integral()
sum = 0.0
for k in range (-1, he.size()+1):
x = 0.0
w = (he.lo() - x)
if k == he.size():
w = (scale[-1]-scale[-2])
x = he.hi()
elif k >= 0:
w = (scale[k+1] - scale[k])
x = scale[k]
d = he[k] # data from bin with index k
y = d[0] / w # first part of bin is collected weights
y = y * norm
X.append(x)
Y.append(y)
W.append(w)
sum += y*w
print("PDF normalization: {0}".format(sum))
E133_20 = Y[-2]
E117_20 = Y[-2-nbins]
p1 = plt.bar(X, Y, W, color='r')
plt.xlabel('Energy(MeV)')
plt.ylabel('PDF of the photons')
plt.title('Energy distribution')
plt.grid(True);
plt.tick_params(axis='x', direction='out')
plt.tick_params(axis='y', direction='out')
plt.show()
# saving peak values
print("Peak PDF value at 1.33 MeV: {0}".format(E133_20))
print("Peak PDF value at 1.17 MeV: {0}".format(E117_20))
"""
Explanation: The underflow bin is empty, as is the overflow bin. This is good because we do not expect events above 1.33 MeV or below ECUT.
Drawing the Probability Density Function for 20 bins between the 1.33 MeV and 1.17 MeV peaks.
End of explanation
"""
table = ListTable.ListTable()
table.append(["Nbins", "E=1.17", "E=1.33"])
table.append(["", "MeV", "MeV"])
table.append([5, 1.0, 1.0])
table.append([10, E117_10/E117_5, E133_10/E133_5])
table.append([20, E117_20/E117_5, E133_20/E133_5])
table
"""
Explanation: Comparing peak values
We compare the peak values at 10 and 20 bins against the 5-bin case. The presence of δ-peaks means that when we double the number of bins we expect the peak values to roughly double: a δ-peak deposits a fixed weight into its bin, so halving the bin width doubles the PDF value there, while a smooth scattered background stays roughly constant.
End of explanation
"""
Znow = 197.5 # we are at roughly 200mm, at the collimator exit
Zshot = 380.0 # shot isocenter is at 380mm
# radial, X and Y, all units in mm
hr = H1Du.H1Du(120, 0.0, 40.0)
hx = H1Du.H1Du(128, -32.0, 32.0)
hy = H1Du.H1Du(128, -32.0, 32.0)
for e in events:
WT = e[0]
xx, yy, zz = BEAMphsf.move_event(e, Znow, Zshot)
#xx = e[2]
#yy = e[3]
#zz = e[4]
r = math.sqrt(xx*xx + yy*yy)
hr.fill(r, WT)
hx.fill(xx, WT)
hy.fill(yy, WT)
print("Number of events in R histogram: {0}".format(hr.nof_events()))
print("Integral in R histogram: {0}".format(hr.integral()))
print("Underflow bin: {0}".format(hr.underflow()))
print("Overflow bin: {0}\n".format(hr.overflow()))
print("Number of events in X histogram: {0}".format(hx.nof_events()))
print("Integral in X histogram: {0}".format(hx.integral()))
print("Underflow bin: {0}".format(hx.underflow()))
print("Overflow bin: {0}\n".format(hx.overflow()))
print("Number of events in Y histogram: {0}".format(hy.nof_events()))
print("Integral in Y histogram: {0}".format(hy.integral()))
print("Underflow bin: {0}".format(hy.underflow()))
print("Overflow bin: {0}".format(hy.overflow()))
X = []
Y = []
W = []
norm = 1.0/hr.integral()
sum = 0.0
st = hr.step()
for k in range (0, hr.size()+1):
r_lo = hr.lo() + float(k) * st
r_hi = r_lo + st
r = 0.5*(r_lo + r_hi)
ba = math.pi * (r_hi*r_hi - r_lo*r_lo) # bin area
d = hr[k] # data from bin with index k
y = d[0] / ba # first part of bin is collected weights
y = y * norm
X.append(r)
Y.append(y)
W.append(st)
sum += y * ba
print("PDF normalization: {0}".format(sum))
p1 = plt.bar(X, Y, W, 0.0, color='b')
plt.xlabel('Radius(mm)')
plt.ylabel('PDF of the photons')
plt.title('Radial distribution')
plt.grid(True);
plt.tick_params(axis='x', direction='out')
plt.tick_params(axis='y', direction='out')
plt.show()
"""
Explanation: The result is as expected. Only a few percent of the values in the 1.33 and 1.17 MeV bins are due to scattered radiation. Most values come from the primary source and are δ-peaks in energy.
Spatial Distribution tests
Here we will plot the spatial distribution of the particles, projected from the collimator exit position to the isocenter location at 38cm
End of explanation
"""
X = []
Y = []
W = []
norm = 1.0/hx.integral()
sum = 0.0
st = hx.step()
for k in range (0, hx.size()):
x_lo = hx.lo() + float(k)*st
x_hi = x_lo + st
x = 0.5*(x_lo + x_hi)
d = hx[k] # data from bin with index k
y = d[0] / st # first part of bin is collected weights
y = y * norm
X.append(x)
Y.append(y)
W.append(st)
sum += y*st
print("PDF normalization: {0}".format(sum))
p1 = plt.bar(X, Y, W, color='b')
plt.xlabel('X(mm)')
plt.ylabel('PDF of the photons')
plt.title('X distribution')
plt.grid(True);
plt.tick_params(axis='x', direction='out')
plt.tick_params(axis='y', direction='out')
plt.show()
X = []
Y = []
W = []
norm = 1.0/hy.integral()
sum = 0.0
st = hy.step()
for k in range (0, hy.size()):
x_lo = hy.lo() + float(k)*st
x_hi = x_lo + st
x = 0.5*(x_lo + x_hi)
d = hy[k] # data from bin with index k
y = d[0] / st # first part of bin is collected weights
y = y * norm
X.append(x)
Y.append(y)
W.append(st)
sum += y*st
print("PDF normalization: {0}".format(sum))
p1 = plt.bar(X, Y, W, color='b')
plt.xlabel('Y(mm)')
plt.ylabel('PDF of the photons')
plt.title('Y distribution')
plt.grid(True);
plt.tick_params(axis='x', direction='out')
plt.tick_params(axis='y', direction='out')
plt.show()
"""
Explanation: NB: the peak at the far right, above 40mm, is the overflow bin
End of explanation
"""
# angular, WZ, WX and WY, all units in radians
h_wz = H1Du.H1Du(100, 1.0 - 0.05, 1.0)
h_wx = H1Du.H1Du(110, -0.055, 0.055)
h_wy = H1Du.H1Du(110, -0.055, 0.055)
for e in events:
WT = e[0]
wx = e[5]
wy = e[6]
wz = e[7]
h_wz.fill(wz, WT)
h_wx.fill(wx, WT)
h_wy.fill(wy, WT)
print("Number of events in WZ histogram: {0}".format(h_wz.nof_events()))
print("Integral in WZ histogram: {0}".format(h_wz.integral()))
print("Underflow bin: {0}".format(h_wz.underflow()))
print("Overflow bin: {0}\n".format(h_wz.overflow()))
print("Number of events in WX histogram: {0}".format(h_wx.nof_events()))
print("Integral in WX histogram: {0}".format(h_wx.integral()))
print("Underflow bin: {0}".format(h_wx.underflow()))
print("Overflow bin: {0}\n".format(h_wx.overflow()))
print("Number of events in WY histogram: {0}".format(h_wy.nof_events()))
print("Integral in WY histogram: {0}".format(h_wy.integral()))
print("Underflow bin: {0}".format(h_wy.underflow()))
print("Overflow bin: {0}".format(h_wy.overflow()))
X = []
Y = []
W = []
norm = 1.0/h_wz.integral()
sum = 0.0
st = h_wz.step()
for k in range (0, h_wz.size()+1):
x_lo = h_wz.lo() + float(k)*st
x_hi = x_lo + st
x = 0.5*(x_lo + x_hi)
d = h_wz[k] # data from bin with index k
y = d[0] / st # first part of bin is collected weights
y = y * norm
X.append(x)
Y.append(y)
W.append(st)
sum += y*st
print("PDF normalization: {0}".format(sum))
p1 = plt.bar(X, Y, W, color='g')
plt.xlabel('WZ')
plt.ylabel('PDF of the photons')
plt.title('Angular Z distribution')
plt.grid(True);
plt.tick_params(axis='x', direction='out')
plt.tick_params(axis='y', direction='out')
plt.show()
X = []
Y = []
W = []
norm = 1.0/h_wx.integral()
sum = 0.0
st = h_wx.step()
for k in range (0, h_wx.size()):
x_lo = h_wx.lo() + float(k)*st
x_hi = x_lo + st
x = 0.5*(x_lo + x_hi)
d = h_wx[k] # data from bin with index k
y = d[0] / st # first part of bin is collected weights
y = y * norm
X.append(x)
Y.append(y)
W.append(st)
sum += y*st
print("PDF normalization: {0}".format(sum))
p1 = plt.bar(X, Y, W, color='g')
plt.xlabel('WX')
plt.ylabel('PDF of the photons')
plt.title('Angular X distribution')
plt.grid(True);
plt.tick_params(axis='x', direction='out')
plt.tick_params(axis='y', direction='out')
plt.show()
X = []
Y = []
W = []
norm = 1.0/h_wy.integral()
sum = 0.0
st = h_wy.step()
for k in range (0, h_wy.size()):
x_lo = h_wy.lo() + float(k)*st
x_hi = x_lo + st
x = 0.5*(x_lo + x_hi)
d = h_wy[k] # data from bin with index k
y = d[0] / st # first part of bin is collected weights
y = y * norm
X.append(x)
Y.append(y)
W.append(st)
sum += y*st
print("PDF normalization: {0}".format(sum))
p1 = plt.bar(X, Y, W, color='g')
plt.xlabel('WY')
plt.ylabel('PDF of the photons')
plt.title('Angular Y distribution')
plt.grid(True);
plt.tick_params(axis='x', direction='out')
plt.tick_params(axis='y', direction='out')
plt.show()
"""
Explanation: We find the spatial distribution to be consistent with the collimation setup
Angular Distribution tests
Here we plot the particles' angular distribution for all three directional cosines at the collimator exit. We expect the angular distribution to fill the collimation angle, which is close to 0.033 radians (0.5x25/380).
End of explanation
"""
|
tensorflow/docs-l10n | site/en-snapshot/tfx/tutorials/tfx/penguin_tfma.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2021 The TensorFlow Authors.
End of explanation
"""
try:
import colab
!pip install --upgrade pip
except:
pass
"""
Explanation: Model analysis using TFX Pipeline and TensorFlow Model Analysis
Note: We recommend running this tutorial in a Colab notebook, with no setup required! Just click "Run in Google Colab".
<div class="devsite-table-wrapper"><table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/tfx/tutorials/tfx/penguin_tfma">
<img src="https://www.tensorflow.org/images/tf_logo_32px.png"/>View on TensorFlow.org</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/tfx/blob/master/docs/tutorials/tfx/penguin_tfma.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/tfx/tree/master/docs/tutorials/tfx/penguin_tfma.ipynb">
<img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/tfx/docs/tutorials/tfx/penguin_tfma.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a></td>
</table></div>
In this notebook-based tutorial, we will create and run a TFX pipeline
which creates a simple classification model and analyzes its performance
across multiple runs. This notebook is based on the TFX pipeline we built in
Simple TFX Pipeline Tutorial.
If you have not read that tutorial yet, you should read it before proceeding
with this notebook.
As you tweak your model or train it with a new dataset, you need to check
whether your model has improved or become worse. Just checking top-level
metrics like accuracy might not be enough. Every trained model should be
evaluated before it is pushed to production.
We will add an Evaluator component to the pipeline created in the previous
tutorial. The Evaluator component performs deep analysis for your models and
compares the new model against a baseline to determine whether they are "good enough".
It is implemented using the
TensorFlow Model Analysis library.
Please see
Understanding TFX Pipelines
to learn more about various concepts in TFX.
Set Up
The Set up process is the same as the previous tutorial.
We first need to install the TFX Python package and download
the dataset which we will use for our model.
Upgrade Pip
To avoid upgrading Pip in a system when running locally,
check to make sure that we are running in Colab.
Local systems can of course be upgraded separately.
End of explanation
"""
!pip install -U tfx
"""
Explanation: Install TFX
End of explanation
"""
import tensorflow as tf
print('TensorFlow version: {}'.format(tf.__version__))
from tfx import v1 as tfx
print('TFX version: {}'.format(tfx.__version__))
"""
Explanation: Did you restart the runtime?
If you are using Google Colab, the first time that you run
the cell above, you must restart the runtime by clicking
above "RESTART RUNTIME" button or using "Runtime > Restart
runtime ..." menu. This is because of the way that Colab
loads packages.
Check the TensorFlow and TFX versions.
End of explanation
"""
import os
PIPELINE_NAME = "penguin-tfma"
# Output directory to store artifacts generated from the pipeline.
PIPELINE_ROOT = os.path.join('pipelines', PIPELINE_NAME)
# Path to a SQLite DB file to use as an MLMD storage.
METADATA_PATH = os.path.join('metadata', PIPELINE_NAME, 'metadata.db')
# Output directory where created models from the pipeline will be exported.
SERVING_MODEL_DIR = os.path.join('serving_model', PIPELINE_NAME)
from absl import logging
logging.set_verbosity(logging.INFO) # Set default logging level.
"""
Explanation: Set up variables
There are some variables used to define a pipeline. You can customize these
variables as you want. By default all output from the pipeline will be
generated under the current directory.
End of explanation
"""
import urllib.request
import tempfile
DATA_ROOT = tempfile.mkdtemp(prefix='tfx-data') # Create a temporary directory.
_data_url = 'https://raw.githubusercontent.com/tensorflow/tfx/master/tfx/examples/penguin/data/labelled/penguins_processed.csv'
_data_filepath = os.path.join(DATA_ROOT, "data.csv")
urllib.request.urlretrieve(_data_url, _data_filepath)
"""
Explanation: Prepare example data
We will use the same
Palmer Penguins dataset.
There are four numeric features in this dataset which were already normalized
to have range [0,1]. We will build a classification model which predicts the
species of penguins.
Because TFX ExampleGen reads inputs from a directory, we need to create a
directory and copy dataset to it.
End of explanation
"""
_trainer_module_file = 'penguin_trainer.py'
%%writefile {_trainer_module_file}
# Copied from https://www.tensorflow.org/tfx/tutorials/tfx/penguin_simple
from typing import List
from absl import logging
import tensorflow as tf
from tensorflow import keras
from tensorflow_transform.tf_metadata import schema_utils
from tfx.components.trainer.executor import TrainerFnArgs
from tfx.components.trainer.fn_args_utils import DataAccessor
from tfx_bsl.tfxio import dataset_options
from tensorflow_metadata.proto.v0 import schema_pb2
_FEATURE_KEYS = [
'culmen_length_mm', 'culmen_depth_mm', 'flipper_length_mm', 'body_mass_g'
]
_LABEL_KEY = 'species'
_TRAIN_BATCH_SIZE = 20
_EVAL_BATCH_SIZE = 10
# Since we're not generating or creating a schema, we will instead create
# a feature spec. Since there are a fairly small number of features this is
# manageable for this dataset.
_FEATURE_SPEC = {
**{
feature: tf.io.FixedLenFeature(shape=[1], dtype=tf.float32)
for feature in _FEATURE_KEYS
},
_LABEL_KEY: tf.io.FixedLenFeature(shape=[1], dtype=tf.int64)
}
def _input_fn(file_pattern: List[str],
data_accessor: DataAccessor,
schema: schema_pb2.Schema,
batch_size: int = 200) -> tf.data.Dataset:
"""Generates features and label for training.
Args:
file_pattern: List of paths or patterns of input tfrecord files.
data_accessor: DataAccessor for converting input to RecordBatch.
schema: schema of the input data.
batch_size: representing the number of consecutive elements of returned
dataset to combine in a single batch
Returns:
A dataset that contains (features, indices) tuple where features is a
dictionary of Tensors, and indices is a single Tensor of label indices.
"""
return data_accessor.tf_dataset_factory(
file_pattern,
dataset_options.TensorFlowDatasetOptions(
batch_size=batch_size, label_key=_LABEL_KEY),
schema=schema).repeat()
def _build_keras_model() -> tf.keras.Model:
"""Creates a DNN Keras model for classifying penguin data.
Returns:
A Keras Model.
"""
# The model below is built with Functional API, please refer to
# https://www.tensorflow.org/guide/keras/overview for all API options.
inputs = [keras.layers.Input(shape=(1,), name=f) for f in _FEATURE_KEYS]
d = keras.layers.concatenate(inputs)
for _ in range(2):
d = keras.layers.Dense(8, activation='relu')(d)
outputs = keras.layers.Dense(3)(d)
model = keras.Model(inputs=inputs, outputs=outputs)
model.compile(
optimizer=keras.optimizers.Adam(1e-2),
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=[keras.metrics.SparseCategoricalAccuracy()])
model.summary(print_fn=logging.info)
return model
# TFX Trainer will call this function.
def run_fn(fn_args: TrainerFnArgs):
"""Train the model based on given args.
Args:
fn_args: Holds args used to train the model as name/value pairs.
"""
# This schema is usually either an output of SchemaGen or a manually-curated
# version provided by pipeline author. A schema can also derived from TFT
# graph if a Transform component is used. In the case when either is missing,
# `schema_from_feature_spec` could be used to generate schema from very simple
# feature_spec, but the schema returned would be very primitive.
schema = schema_utils.schema_from_feature_spec(_FEATURE_SPEC)
train_dataset = _input_fn(
fn_args.train_files,
fn_args.data_accessor,
schema,
batch_size=_TRAIN_BATCH_SIZE)
eval_dataset = _input_fn(
fn_args.eval_files,
fn_args.data_accessor,
schema,
batch_size=_EVAL_BATCH_SIZE)
model = _build_keras_model()
model.fit(
train_dataset,
steps_per_epoch=fn_args.train_steps,
validation_data=eval_dataset,
validation_steps=fn_args.eval_steps)
# The result of the training should be saved in `fn_args.serving_model_dir`
# directory.
model.save(fn_args.serving_model_dir, save_format='tf')
"""
Explanation: Create a pipeline
We will add an Evaluator
component to the pipeline we created in the
Simple TFX Pipeline Tutorial.
An Evaluator component requires input data from an ExampleGen component and
a model from a Trainer component and a
tfma.EvalConfig
object. We can optionally supply a baseline model which can be used to compare
metrics with the newly trained model.
An evaluator creates two kinds of output artifacts, ModelEvaluation and
ModelBlessing. ModelEvaluation contains the detailed evaluation result which
can be investigated and visualized further with TFMA library. ModelBlessing
contains a boolean result whether the model passed given criteria and can be
used in later components like a Pusher as a signal.
Write model training code
We will use the same model code as in the
Simple TFX Pipeline Tutorial.
End of explanation
"""
import tensorflow_model_analysis as tfma
def _create_pipeline(pipeline_name: str, pipeline_root: str, data_root: str,
module_file: str, serving_model_dir: str,
metadata_path: str) -> tfx.dsl.Pipeline:
"""Creates a three component penguin pipeline with TFX."""
# Brings data into the pipeline.
example_gen = tfx.components.CsvExampleGen(input_base=data_root)
# Uses user-provided Python function that trains a model.
trainer = tfx.components.Trainer(
module_file=module_file,
examples=example_gen.outputs['examples'],
train_args=tfx.proto.TrainArgs(num_steps=100),
eval_args=tfx.proto.EvalArgs(num_steps=5))
# NEW: Get the latest blessed model for Evaluator.
model_resolver = tfx.dsl.Resolver(
strategy_class=tfx.dsl.experimental.LatestBlessedModelStrategy,
model=tfx.dsl.Channel(type=tfx.types.standard_artifacts.Model),
model_blessing=tfx.dsl.Channel(
type=tfx.types.standard_artifacts.ModelBlessing)).with_id(
'latest_blessed_model_resolver')
# NEW: Uses TFMA to compute evaluation statistics over features of a model and
# perform quality validation of a candidate model (compared to a baseline).
eval_config = tfma.EvalConfig(
model_specs=[tfma.ModelSpec(label_key='species')],
slicing_specs=[
# An empty slice spec means the overall slice, i.e. the whole dataset.
tfma.SlicingSpec(),
# Calculate metrics for each penguin species.
tfma.SlicingSpec(feature_keys=['species']),
],
metrics_specs=[
tfma.MetricsSpec(per_slice_thresholds={
'sparse_categorical_accuracy':
tfma.PerSliceMetricThresholds(thresholds=[
tfma.PerSliceMetricThreshold(
slicing_specs=[tfma.SlicingSpec()],
threshold=tfma.MetricThreshold(
value_threshold=tfma.GenericValueThreshold(
lower_bound={'value': 0.6}),
# Change threshold will be ignored if there is no
# baseline model resolved from MLMD (first run).
change_threshold=tfma.GenericChangeThreshold(
direction=tfma.MetricDirection.HIGHER_IS_BETTER,
absolute={'value': -1e-10}))
)]),
})],
)
evaluator = tfx.components.Evaluator(
examples=example_gen.outputs['examples'],
model=trainer.outputs['model'],
baseline_model=model_resolver.outputs['model'],
eval_config=eval_config)
# Checks whether the model passed the validation steps and pushes the model
# to a file destination if check passed.
pusher = tfx.components.Pusher(
model=trainer.outputs['model'],
model_blessing=evaluator.outputs['blessing'], # Pass an evaluation result.
push_destination=tfx.proto.PushDestination(
filesystem=tfx.proto.PushDestination.Filesystem(
base_directory=serving_model_dir)))
components = [
example_gen,
trainer,
# Following two components were added to the pipeline.
model_resolver,
evaluator,
pusher,
]
return tfx.dsl.Pipeline(
pipeline_name=pipeline_name,
pipeline_root=pipeline_root,
metadata_connection_config=tfx.orchestration.metadata
.sqlite_metadata_connection_config(metadata_path),
components=components)
"""
Explanation: Write a pipeline definition
We will define a function to create a TFX pipeline. In addition to the
Evaluator component we mentioned above, we will add one more node called
Resolver.
To check whether a new model is better than the previous one, we need to compare
it against a previously published model, called the baseline.
ML Metadata (MLMD) tracks all
previous artifacts of the pipeline, and Resolver can find the latest
blessed model -- a model that passed the Evaluator successfully -- from MLMD using a
strategy class called LatestBlessedModelStrategy.
End of explanation
"""
tfx.orchestration.LocalDagRunner().run(
_create_pipeline(
pipeline_name=PIPELINE_NAME,
pipeline_root=PIPELINE_ROOT,
data_root=DATA_ROOT,
module_file=_trainer_module_file,
serving_model_dir=SERVING_MODEL_DIR,
metadata_path=METADATA_PATH))
"""
Explanation: We need to supply the following information to the Evaluator via eval_config:
- Additional metrics to configure (if you want more metrics than those defined in the model).
- Slices to configure.
- Model validation thresholds to verify (whether validation is to be included).
Because SparseCategoricalAccuracy was already included in the
model.compile() call, it will be included in the analysis automatically. So
we do not add any additional metrics here. SparseCategoricalAccuracy will be
used to decide whether the model is good enough, too.
We compute the metrics for the whole dataset and for each penguin species.
SlicingSpec specifies how we aggregate the declared metrics.
There are two thresholds that a new model should pass: an absolute
threshold of 0.6, and a relative threshold requiring the new model to be
no worse than the baseline model. When you run the pipeline for the first
time, the change_threshold will be ignored and only the value_threshold will
be checked. If you run the pipeline more than once, the Resolver will find a
model from the previous run and use it as the baseline model for the
comparison.
See Evaluator component guide for more information.
Run the pipeline
We will use LocalDagRunner as in the previous tutorial.
End of explanation
"""
from ml_metadata.proto import metadata_store_pb2
# Non-public APIs, just for showcase.
from tfx.orchestration.portable.mlmd import execution_lib
# TODO(b/171447278): Move these functions into the TFX library.
def get_latest_artifacts(metadata, pipeline_name, component_id):
"""Output artifacts of the latest run of the component."""
context = metadata.store.get_context_by_type_and_name(
'node', f'{pipeline_name}.{component_id}')
executions = metadata.store.get_executions_by_context(context.id)
latest_execution = max(executions,
key=lambda e:e.last_update_time_since_epoch)
return execution_lib.get_artifacts_dict(metadata, latest_execution.id,
[metadata_store_pb2.Event.OUTPUT])
"""
Explanation: When the pipeline has completed, you should be able to see something like the following:
INFO:absl:Blessing result True written to pipelines/penguin-tfma/Evaluator/blessing/4.
Alternatively, you can manually check the output directory where the generated
artifacts are stored. If you visit
pipelines/penguin-tfma/Evaluator/blessing/ with a file browser, you will see a
file named BLESSED or NOT_BLESSED according to the evaluation result.
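For example, a quick way to inspect the result from a notebook cell is a small sketch like the one below (it assumes the default local artifact layout shown above; the numbered run directory will differ between runs):

```python
import glob
import os

# List every blessing directory produced so far and show whether each run
# produced a BLESSED or NOT_BLESSED marker file.
for blessing_dir in glob.glob('pipelines/penguin-tfma/Evaluator/blessing/*'):
    print(blessing_dir, '->', os.listdir(blessing_dir))
```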
If the blessing result is False, Pusher will refuse to push the model to the
serving_model_dir, because the model is not good enough to be used in
production.
You can run the pipeline again, possibly with different evaluation configs. Even
if you run the pipeline with the exact same config and dataset, the trained
model might be slightly different due to the inherent randomness of model
training, which can lead to a NOT_BLESSED model.
Examine outputs of the pipeline
You can use TFMA to investigate and visualize the evaluation result in
ModelEvaluation artifact.
NOTE: If you are not on Colab, install the Jupyter extensions.
You need the TensorFlow Model Analysis extension to see the visualization from
TFMA. This extension is already installed on Google Colab, but you might need
to install it if you are running this notebook in another environment.
See the installation directions for the Jupyter extension in the
Install guide.
Get analysis result from output artifacts
You can use MLMD APIs to locate these outputs programmatically. First, we will
define some utility functions to search for the output artifacts that were just
produced.
End of explanation
"""
# Non-public APIs, just for showcase.
from tfx.orchestration.metadata import Metadata
from tfx.types import standard_component_specs
metadata_connection_config = tfx.orchestration.metadata.sqlite_metadata_connection_config(
METADATA_PATH)
with Metadata(metadata_connection_config) as metadata_handler:
# Find output artifacts from MLMD.
evaluator_output = get_latest_artifacts(metadata_handler, PIPELINE_NAME,
'Evaluator')
eval_artifact = evaluator_output[standard_component_specs.EVALUATION_KEY][0]
"""
Explanation: We can find the latest execution of the Evaluator component and get output
artifacts of it.
End of explanation
"""
import tensorflow_model_analysis as tfma
eval_result = tfma.load_eval_result(eval_artifact.uri)
tfma.view.render_slicing_metrics(eval_result, slicing_column='species')
"""
Explanation: The Evaluator always returns one evaluation artifact, and we can visualize it
using the TensorFlow Model Analysis library. For example, the following code will
render the accuracy metrics for each penguin species.
End of explanation
"""
|
datacommonsorg/api-python | notebooks/intro_data_science/Regression_Basics_and_Prediction.ipynb | apache-2.0 | # Setup/Imports
!pip install datacommons --upgrade --quiet
!pip install datacommons_pandas --upgrade --quiet
# Data Commons Python and Pandas APIs
import datacommons
import datacommons_pandas
# For manipulating data
import numpy as np
import pandas as pd
# For implementing models and evaluation methods
from sklearn import linear_model
from sklearn.metrics import r2_score, mean_squared_error
# For plotting/printing
from matplotlib import pyplot as plt
import seaborn as sns
"""
Explanation: <a href="https://colab.research.google.com/github/datacommonsorg/api-python/blob/master/notebooks/intro_data_science/Regression_Basics_and_Prediction.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Copyright 2022 Google LLC.
SPDX-License-Identifier: Apache-2.0
Regression: Basics and Prediction
Regression analysis is a powerful process for finding statistical relationships between variables. It's one of the most commonly used tools seen in the data science world, often used for prediction and forecasting.
In this assignment, we'll be focusing on linear regression, which forms the basis for most regression models. In particular, we'll explore linear regression as a tool for prediction. We'll cover interpreting regression models, in part 2.
Learning Objectives:
Linear regression for prediction
Mean-Squared error
In-sample vs out-of-sample prediction
Single variate vs. multivariate Regression
The effect of increasing variables
Need extra help?
If you're new to Google Colab, take a look at this getting started tutorial.
To build more familiarity with the Data Commons API, check out these Data Commons Tutorials.
And for help with Pandas and manipulating data frames, take a look at the Pandas Documentation.
We'll be using the scikit-learn library for implementing our models today. Documentation can be found here.
As usual, if you have any other questions, please reach out to your course staff!
Part 0: Getting Set Up
Run the following code boxes to load the python libraries and data we'll be using today.
End of explanation
"""
# Load the data we'll be using
city_dcids = datacommons.get_property_values(["CDC500_City"],
"member",
limit=500)["CDC500_City"]
# We've compiled a list of some nice Data Commons Statistical Variables
# to use as features for you
stat_vars_to_query = [
"Count_Person",
"Percent_Person_PhysicalInactivity",
"Percent_Person_SleepLessThan7Hours",
"Percent_Person_WithHighBloodPressure",
"Percent_Person_WithMentalHealthNotGood",
"Percent_Person_WithHighCholesterol",
"Percent_Person_Obesity"
]
# Query Data Commons for the data and remove any NaN values
raw_features_df = datacommons_pandas.build_multivariate_dataframe(city_dcids,stat_vars_to_query)
raw_features_df.dropna(inplace=True)
# order columns alphabetically
raw_features_df = raw_features_df.reindex(sorted(raw_features_df.columns), axis=1)
# Add city name as a column for readability.
# --- First, we'll get the "name" property of each dcid
# --- Then add the returned dictionary to our data frame as a new column
df = raw_features_df.copy(deep=True)
city_name_dict = datacommons.get_property_values(city_dcids, 'name')
city_name_dict = {key:value[0] for key, value in city_name_dict.items()}
df.insert(0, 'City Name', pd.Series(city_name_dict))
# Display results
display(df)
"""
Explanation: Introduction
In this assignment, we'll be returning to the scenario we started analyzing in the Model Evaluation assignment -- analyzing the obesity epidemic in the United States. Obesity rates vary across the nation by geographic location. In this colab, we'll be exploring how obesity rates vary with different health or societal factors across US cities.
In the Model Evaluation assignment, we limited our analysis to high (>30%) and low (<30%) categories. Today we'll go one step further and predict the obesity rates themselves.
Our data science question: Can we predict the obesity rates of various US Cities based on other health or lifestyle factors?
Run the following code box to load the data. We've done some basic data cleaning and manipulation for you, but look through the code to make sure you understand what's going on.
End of explanation
"""
var1 = "Count_Person"
var2 = "Percent_Person_PhysicalInactivity"
dep_var = "Percent_Person_Obesity"
df_single_vars = df[[var1, var2, dep_var]].copy()
x_a = df_single_vars[var1].to_numpy().reshape(-1, 1)
x_b = df_single_vars[var2].to_numpy().reshape(-1,1)
y = df_single_vars[dep_var].to_numpy().reshape(-1, 1)
# Fit models
model_a = linear_model.LinearRegression().fit(x_a,y)
model_b = linear_model.LinearRegression().fit(x_b,y)
# Make Predictions
predictions_a = model_a.predict(x_a)
predictions_b = model_b.predict(x_b)
df_single_vars["Prediction_A"] = predictions_a
df_single_vars["Prediction_B"] = predictions_b
# Plot Model A
print("Model A:")
print("---------")
print("Weights:", model_a.coef_)
print("Intercept:", model_a.intercept_)
fig, ax = plt.subplots()
p1 = sns.scatterplot(data=df_single_vars, x=var1, y=dep_var, ax=ax, color="orange")
p2 = sns.lineplot(data=df_single_vars, x=var1, y="Prediction_A", ax=ax, color="blue")
plt.show()
# Plot Model B
print("Model B:")
print("---------")
print("Weights:", model_b.coef_)
print("Intercept:", model_b.intercept_)
fig, ax = plt.subplots()
p1 = sns.scatterplot(data=df_single_vars, x=var2, y=dep_var, ax=ax, color="orange")
p2 = sns.lineplot(data=df_single_vars, x=var2, y="Prediction_B", ax=ax, color="blue")
plt.show()
"""
Explanation: Questions
Q0A) Look through the dataframe and make sure you understand what each variable is describing.
Q0B) What are the units for each variable?
Part 1: Single Linear Regression
To help us build some intuition for how regression works, we'll start by using a single variable. We'll create two models, Model A and Model B. Model A will use the Count_Person variable to predict obesity rates, while Model B will use the Percent_Person_PhysicalInactivity variable.
Model | Independent Variable | Dependent Variable
--- | --- | ---
Model A | Count_Person | Percent_Person_Obesity
Model B | Percent_Person_PhysicalInactivity | Percent_Person_Obesity
Q1A) Just using your intuition, which model do you think will be better at predicting obesity rates? Why?
Fit a Model
Let's now check your intuition by fitting linear regression models to our data.
Run the following code box to fit Model A and Model B.
End of explanation
"""
print("Model A RMSE:", mean_squared_error(y, predictions_a, squared=False))
print("Model B RMSE:", mean_squared_error(y, predictions_b, squared=False))
"""
Explanation: Q1B) For each model, what are the units of the weights? Intercepts?
Q1C) Mathematically, what does the weight represent?
Q1D) Mathematically, what does the intercept represent?
Q1E) Looking visually at the plots of the regression models, which model do you think will be better at predicting obesity rates for new, unseen data points (cities)? Why?
Prediction Error: MSE and RMSE
To quantify predictive accuracy, we find the prediction error, a metric of how far off our model predictions are from the true value. One of the most common metrics used is mean squared error:
$$ MSE = \frac{1}{\text{# total data points}}\sum_{\text{all data points}}(\text{predicted value} - \text{actual value})^2$$
MSE is a measure of the average squared difference between the predicted value and the actual value. The square ($^2$) can seem counterintuitive at first, but it offers some nice mathematical properties.
There's also root mean squared error, the square root of the MSE, which scales the error metric to match the scale of the data points:
$$ RMSE = \sqrt{MSE} = \sqrt{\frac{1}{\text{# total data points}}\sum_{\text{all data points}}(\text{predicted value} - \text{actual value})^2}$$
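For concreteness, here is a minimal sketch of how MSE and RMSE can be computed directly with NumPy (the predicted and actual values below are made up purely for illustration):

```python
import numpy as np

# Hypothetical predicted and actual obesity rates (in percent) for five cities.
predicted = np.array([28.0, 31.5, 35.2, 30.1, 27.4])
actual = np.array([29.3, 30.0, 36.0, 33.2, 26.8])

mse = np.mean((predicted - actual) ** 2)  # average squared error
rmse = np.sqrt(mse)                       # same units as the data (percentage points)
print(mse, rmse)
```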
Prediction error can actually refer to one of two things: in-sample prediction error or out-of-sample prediction error. We'll explore both in the sections below.
In-Sample Prediction
In-sample prediction refers to forecasting or predicting for a data point that was used to fit the model. This is akin to applying your model to the training set, in machine learning parlance. In-sample prediction error measures how well our model is able to reproduce the data we currently have.
Run the following code block to calculate the in-sample prediction RMSE for both models.
End of explanation
"""
# make a prediction
new_dcids = [
"geoId/0617610", # Cupertino, CA
"geoId/0236400", # Juneau, AK
"geoId/2467675", # Rockville, MD
"geoId/4530850", # Greenville, SC
"geoId/3103950", # Bellevue, NE
]
new_df = datacommons_pandas.build_multivariate_dataframe(new_dcids,stat_vars_to_query)
x_a_new = new_df[var1].to_numpy().reshape(-1,1)
x_b_new = new_df[var2].to_numpy().reshape(-1,1)
y_new = new_df[dep_var].to_numpy().reshape(-1, 1)
predicted_a_new = model_a.predict(x_a_new)
predicted_b_new = model_b.predict(x_b_new)
new_df["Model A Prediction"] = predicted_a_new
new_df["Model B Prediction"] = predicted_b_new
print("Model A:")
print("--------")
display(new_df[["Model A Prediction", dep_var]])
fig, ax = plt.subplots()
p0 = sns.scatterplot(data=df_single_vars, x=var1, y=dep_var, ax=ax, color="orange")
p1 = sns.scatterplot(data=new_df, x=var1, y=dep_var, ax=ax, color="red")
p2 = sns.lineplot(data=df_single_vars, x=var1, y="Prediction_A", ax=ax, color="blue")
plt.show()
print("RMSE:", mean_squared_error(y_new, predicted_a_new, squared=False))
print("")
print("Model B:")
print("--------")
display(new_df[["Model B Prediction", dep_var]])
fig, ax = plt.subplots()
p0 = sns.scatterplot(data=df_single_vars, x=var2, y=dep_var, ax=ax, color="orange")
p1 = sns.scatterplot(data=new_df, x=var2, y=dep_var, ax=ax, color="red")
p2 = sns.lineplot(data=df_single_vars, x=var2, y="Prediction_B", ax=ax, color="blue")
plt.show()
print("RMSE:", mean_squared_error(y_new, predicted_b_new, squared=False))
print("")
"""
Explanation: Q1F) What are the units of the RMSE for each model?
Q1G) Which model had better RMSE?
Out-Of-Sample Prediction
In contrast to in-sample prediction, we can also perform out-of-sample prediction, by using our model to make predictions on new, previously unseen data. This is akin to applying our model on a test set, in machine learning parlance. Out-of-sample prediction error measures how well our model can generalize to new data.
Q1H) In general, would you expect in-sample prediction error, or out-of-sample prediction error to be higher?
Let's see how well our models perform on some US cities not included in the CDC500.
End of explanation
"""
display(df)
"""
Explanation: Q1I) How well did these models predict the obesity rates? Which model had better accuracy?
Q1J) For the model you selected in the question above, how much would you trust this model? What are its limitations?
Q1K) Can you think of any ways to create an even better model?
Part 2: Multiple Linear Regression
Let's now see what happens if we increase the number of independent variables used to make our prediction. Using multiple independent variables is referred to as multiple linear regression.
Now let's use all the data we loaded at the beginning of the assignment. The following code box will display our dataframe in its entirety again, so you can refamiliarize yourself with the data we have available.
End of explanation
"""
# fit a regression model
dep_var = "Percent_Person_Obesity"
y = df[dep_var].to_numpy().reshape(-1, 1)
x = df.loc[:, ~df.columns.isin([dep_var, "City Name"])]
model = linear_model.LinearRegression().fit(x,y)
predictions = model.predict(x)
df["Predicted"] = predictions
print("Features in Order:\n\n\t", x.columns)
print("\nWeights:\n\n\t", model.coef_)
print("\nIntercept:\n\n\t", model.intercept_)
"""
Explanation: Fit a Model
Now let's fit a linear regression model using all of the features in our dataframe.
End of explanation
"""
# Analyze in sample MSE
print("In-sample Prediction RMSE:", mean_squared_error(y, predictions, squared=False))
"""
Explanation: Q2A) Look at the coefficients for each of the features. Which features contribute most to the prediction?
Prediction Error
Let's now analyze the in-sample and out-of-sample prediction error for our multiple linear regression model.
End of explanation
"""
# Apply model to some out-of-sample datapoints
new_dcids = [
"geoId/0617610", # Cupertino, CA
"geoId/0236400", # Juneau, AK
"geoId/2467675", # Rockville, MD
"geoId/4530850", # Greenville, SC
"geoId/3103950", # Bellevue, NE
]
new_df = datacommons_pandas.build_multivariate_dataframe(new_dcids,stat_vars_to_query)
# sort columns alphabetically
new_df = new_df.reindex(sorted(new_df.columns), axis=1)
# Add city name as a column for readability.
# --- First, we'll get the "name" property of each dcid
# --- Then add the returned dictionary to our data frame as a new column
new_df = new_df.copy(deep=True)
city_name_dict = datacommons.get_property_values(new_dcids, 'name')
city_name_dict = {key:value[0] for key, value in city_name_dict.items()}
new_df.insert(0, 'City Name', pd.Series(city_name_dict))
display(new_df)
new_y = new_df[dep_var].to_numpy().reshape(-1, 1)
new_x = new_df.loc[:, ~new_df.columns.isin([dep_var, "City Name"])]
predicted = model.predict(new_x)
new_df["Prediction"] = predicted
display(new_df[["Prediction", dep_var]])
print("Out-of-sample RMSE:", mean_squared_error(new_y, predicted, squared=False))
"""
Explanation: Q2B) How does the in-sample prediction RMSE compare with that of the single variable models A and B?
We'll also take a look at out-of-sample prediction error.
End of explanation
"""
# Load new data
new_stat_vars = [
'Percent_Person_Obesity',
'Count_Household',
'Count_HousingUnit',
'Count_Person',
'Count_Person_1OrMoreYears_DifferentHouse1YearAgo',
'Count_Person_BelowPovertyLevelInThePast12Months',
'Count_Person_EducationalAttainmentRegularHighSchoolDiploma',
'Count_Person_Employed',
'GenderIncomeInequality_Person_15OrMoreYears_WithIncome',
'Median_Age_Person',
'Median_Income_Household',
'Median_Income_Person',
'Percent_Person_PhysicalInactivity',
'Percent_Person_SleepLessThan7Hours',
'Percent_Person_WithHighBloodPressure',
'Percent_Person_WithHighCholesterol',
'Percent_Person_WithMentalHealthNotGood',
'UnemploymentRate_Person'
]
# Query Data Commons for the data and remove any NaN values
large_features_df = datacommons_pandas.build_multivariate_dataframe(city_dcids,new_stat_vars)
large_features_df.dropna(axis='index', inplace=True)
# order columns alphabetically
large_features_df = large_features_df.reindex(sorted(large_features_df.columns), axis=1)
# Add city name as a column for readability.
# --- First, we'll get the "name" property of each dcid
# --- Then add the returned dictionary to our data frame as a new column
large_df = large_features_df.copy(deep=True)
city_name_dict = datacommons.get_property_values(city_dcids, 'name')
city_name_dict = {key:value[0] for key, value in city_name_dict.items()}
large_df.insert(0, 'City Name', pd.Series(city_name_dict))
# Display results
display(large_df)
"""
Explanation: Q2B) How does the out-of-sample RMSE compare with that of the single variable models A and B?
Q2C) In general, how would you expect adding more variables to affect the resulting prediction error: increase, decrease, or no substantial change?
Variables: The More, the Merrier?
As we've seen in the sections above, adding more variables to our regression model tends to increase model accuracy. But is adding more and more variables always a good thing?
Let's explore what happens when we add even more variables. We've compiled a new list of statistical variables to predict obesity rates with. Run the code boxes below to load some more data.
End of explanation
"""
# Build a new model
dep_var = "Percent_Person_Obesity"
y = large_df[dep_var].to_numpy().reshape(-1, 1)
x = large_df.loc[:, ~large_df.columns.isin([dep_var, "City Name"])]
large_model = linear_model.LinearRegression().fit(x,y)
predictions = large_model.predict(x)
large_df["Predicted"] = predictions
# Inspect the fitted weights of the larger model
print("Features in Order:\n\n\t", x.columns)
print("\nWeights:\n\n\t", large_model.coef_)
print("\nIntercept:\n\n\t", large_model.intercept_)
"""
Explanation: Q2D) Take a look at the list of variables we'll be using this time. Do you think all of them will be useful/predictive?
Q2E) Based on your intuition, do you think adding all these models will help or hinder predictive accuracy?
Let's now build a model and see what happens.
End of explanation
"""
# Apply model to some out-of-sample datapoints
new_dcids = [
"geoId/0617610", # Cupertino, CA
"geoId/0236400", # Juneau, AK
"geoId/2467675", # Rockville, MD
"geoId/4530850", # Greenville, SC
"geoId/3103950", # Bellevue, NE
]
new_df = datacommons_pandas.build_multivariate_dataframe(new_dcids,new_stat_vars)
new_df.dropna(inplace=True)
# sort columns alphabetically
new_df = new_df.reindex(sorted(new_df.columns), axis=1)
# Add city name as a column for readability.
# --- First, we'll get the "name" property of each dcid
# --- Then add the returned dictionary to our data frame as a new column
new_df = new_df.copy(deep=True)
city_name_dict = datacommons.get_property_values(new_dcids, 'name')
city_name_dict = {key:value[0] for key, value in city_name_dict.items()}
new_df.insert(0, 'City Name', pd.Series(city_name_dict))
display(new_df)
new_y = new_df[dep_var].to_numpy().reshape(-1, 1)
new_x = new_df.loc[:, ~new_df.columns.isin([dep_var, "City Name"])]
new_predicted = large_model.predict(new_x)
new_df["Prediction"] = new_predicted
display(new_df[["Prediction", dep_var]])
print("In-sample Prediction RMSE:", mean_squared_error(y, predictions, squared=False))
print("Out-of-sample RMSE:", mean_squared_error(new_y, new_predicted, squared=False))
"""
Explanation: Let's also look at prediction error:
End of explanation
"""
|
Aniruddha-Tapas/Applied-Machine-Learning | Regression/Air-Quality-Prediction.ipynb | mit | import os
from sklearn.tree import DecisionTreeClassifier, export_graphviz
import pandas as pd
import numpy as np
from time import time
from sklearn.linear_model import LinearRegression
from sklearn.linear_model import SGDRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.cross_validation import train_test_split, cross_val_score
from sklearn import cross_validation, metrics
from sklearn.pipeline import Pipeline
from sklearn.metrics import roc_auc_score, classification_report, mean_squared_error, r2_score
from sklearn.metrics import precision_score, recall_score, accuracy_score
from sklearn.grid_search import GridSearchCV
from sklearn.feature_selection import *
# read .csv from provided dataset
csv_filename="AirQualityUCI.csv"
# df=pd.read_csv(csv_filename,index_col=0)
df=pd.read_csv(csv_filename, sep=";" , parse_dates= ['Date','Time'])
"""
Explanation: Air Quality Prediction
In this example, we will look at the task of predicting air quality. We will use the following dataset:
https://archive.ics.uci.edu/ml/datasets/Air+Quality
This dataset contains the responses of a gas multisensor device deployed on the field in an Italian city. Hourly responses averages are recorded along with gas concentrations references from a certified analyzer. The dataset contains 9358 instances of hourly averaged responses from an array of 5 metal oxide chemical sensors embedded in an Air Quality Chemical Multisensor Device. The device was located on the field in a significantly polluted area, at road level, within an Italian city. Data was recorded from March 2004 to February 2005 (one year) representing the longest freely available recordings of on field deployed air quality chemical sensor devices responses. Ground Truth hourly averaged concentrations for CO, Non Metanic Hydrocarbons, Benzene, Total Nitrogen Oxides (NOx) and Nitrogen Dioxide (NO2) are provided by a co-located reference certified analyzer.
Attribute Information:
Date (DD/MM/YYYY)
Time (HH.MM.SS)
True hourly averaged concentration CO in mg/m^3 (reference analyzer)
PT08.S1 (tin oxide) hourly averaged sensor response (nominally CO targeted)
True hourly averaged overall Non Metanic HydroCarbons concentration in microg/m^3 (reference analyzer)
True hourly averaged Benzene concentration in microg/m^3 (reference analyzer)
PT08.S2 (titania) hourly averaged sensor response (nominally NMHC targeted)
True hourly averaged NOx concentration in ppb (reference analyzer)
PT08.S3 (tungsten oxide) hourly averaged sensor response (nominally NOx targeted)
True hourly averaged NO2 concentration in microg/m^3 (reference analyzer)
PT08.S4 (tungsten oxide) hourly averaged sensor response (nominally NO2 targeted)
PT08.S5 (indium oxide) hourly averaged sensor response (nominally O3 targeted)
Temperature in °C
Relative Humidity (%)
AH Absolute Humidity
Download the dataset from the link and save it in the same directory as your code. Next we import all the required modules:
End of explanation
"""
df.head()
df.dropna(how="all",axis=1,inplace=True)
df.dropna(how="all",axis=0,inplace=True)
"""
Explanation: We will use the AirQualityUCI.csv file as our dataset. It is a ';'-separated file, so we'll specify that as a parameter for the read_csv function. We'll also use the parse_dates parameter so that pandas recognizes the 'Date' and 'Time' columns and formats them accordingly.
End of explanation
"""
df.shape
"""
Explanation: The data contains null values, so we drop the rows and columns that consist entirely of nulls.
End of explanation
"""
df = df[:9357]
df.tail()
cols = list(df.columns[2:])
"""
Explanation: The last few rows (specifically 9357 to 9471) of the dataset are empty and of no use, so we'll ignore them too:
End of explanation
"""
for col in cols:
if df[col].dtype != 'float64':
str_x = pd.Series(df[col]).str.replace(',','.')
float_X = []
for value in str_x.values:
fv = float(value)
float_X.append(fv)
df[col] = pd.DataFrame(float_X)
df.head()
features=list(df.columns)
"""
Explanation: As you might have noticed, the values in our data use commas in place of decimal points. For example, 9.4 is written as 9,4. We'll correct this using the following piece of code:
End of explanation
"""
features.remove('Date')
features.remove('Time')
features.remove('PT08.S4(NO2)')
X = df[features]
y = df['C6H6(GT)']
"""
Explanation: We will define our features and ignore those that are unlikely to help with our prediction. For example, the date is not a very useful feature for predicting future values.
End of explanation
"""
# split dataset to 60% training and 40% testing
X_train, X_test, y_train, y_test = cross_validation.train_test_split(X,y, test_size=0.4, random_state=0)
print(X_train.shape, y_train.shape)
"""
Explanation: Here we will try to predict the C6H6(GT) values, so we set that column as our target variable.
We split the dataset into 60% training and 40% testing sets.
End of explanation
"""
from sklearn.tree import DecisionTreeRegressor
tree = DecisionTreeRegressor(max_depth=3)
tree.fit(X_train, y_train)
y_train_pred = tree.predict(X_train)
y_test_pred = tree.predict(X_test)
print('MSE train: %.3f, test: %.3f' % (
mean_squared_error(y_train, y_train_pred),
mean_squared_error(y_test, y_test_pred)))
print('R^2 train: %.3f, test: %.3f' % (
r2_score(y_train, y_train_pred),
r2_score(y_test, y_test_pred)))
"""
Explanation: Regression
Please see the previous examples for better explanations. We have already implemented Decision Tree Regression and Random Forest Regression to predict the Electrical Energy Output.
Decision tree regression
End of explanation
"""
from sklearn.ensemble import RandomForestRegressor
forest = RandomForestRegressor(n_estimators=1000,
criterion='mse',
random_state=1,
n_jobs=-1)
forest.fit(X_train, y_train)
y_train_pred = forest.predict(X_train)
y_test_pred = forest.predict(X_test)
print('MSE train: %.3f, test: %.3f' % (
mean_squared_error(y_train, y_train_pred),
mean_squared_error(y_test, y_test_pred)))
print('R^2 train: %.3f, test: %.3f' % (
r2_score(y_train, y_train_pred),
r2_score(y_test, y_test_pred)))
"""
Explanation: Random forest regression
End of explanation
"""
regressor = LinearRegression()
regressor.fit(X_train, y_train)
y_predictions = regressor.predict(X_test)
print('R-squared:', regressor.score(X_test, y_test))
"""
Explanation: Linear Regression
End of explanation
"""
scores = cross_val_score(regressor, X, y, cv=5)
print ("Average of scores: ", scores.mean())
print ("Cross validation scores: ", scores)
"""
Explanation: An R-squared score of 1 indicates that 100 percent of the variance in the test set is explained by the model. The performance can change if a different 60 percent of the data is partitioned into the training set. Hence cross-validation can be used to produce a better estimate of the estimator's performance. Each cross-validation round trains and tests on different partitions of the data to reduce variability.
End of explanation
"""
# Scaling the features using StandardScaler:
X_scaler = StandardScaler()
y_scaler = StandardScaler()
X_train = X_scaler.fit_transform(X_train)
y_train = y_scaler.fit_transform(y_train)
X_test = X_scaler.transform(X_test)
y_test = y_scaler.transform(y_test)
regressor = SGDRegressor(loss='squared_loss')
scores = cross_val_score(regressor, X_train, y_train, cv=5)
print ('Cross validation r-squared scores:', scores)
print ('Average cross validation r-squared score:', np.mean(scores))
regressor.fit(X_train, y_train)
print ('Test set r-squared score', regressor.score(X_test, y_test))
"""
Explanation: Fitting models with gradient descent
Gradient descent is an optimization algorithm that can be used to estimate the local minimum of a function.
We can use gradient descent to find the values of the model's parameters that minimize the value of the cost function. Gradient descent iteratively updates the values of the model's parameters by calculating the partial derivative of the cost function at each step. Although the calculus behind the cost function is not strictly required to implement it with scikit-learn, having an intuition for how gradient descent works will always help you use it effectively.
There are two varieties of gradient descent that are distinguished by the number of training instances that are used to update the model parameters in each training iteration.
Batch gradient descent, which is sometimes called only gradient descent, uses all of the training instances to update the model parameters in each iteration.
Stochastic Gradient Descent (SGD), in contrast, updates the parameters using only a single training instance in each iteration. The training instance is usually selected randomly, hence the name stochastic. Stochastic gradient descent is often preferred for optimizing cost functions when there are a large number of training instances, as it will converge more quickly than batch gradient descent.
Batch gradient descent is a deterministic algorithm, and will produce the exact same parameter values given the same training set. As a random algorithm, SGD can produce different parameter estimates each time it is run. SGD may not minimize the cost function as well as gradient descent because it uses only single training instances to update the weights.
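To make the update rule concrete, below is a minimal sketch of a single stochastic update for a linear model with squared loss; the feature values, target, and learning rate are made up purely for illustration:

```python
import numpy as np

# One SGD step for squared loss: w <- w - lr * gradient.
w = np.zeros(3)                    # current weights
b = 0.0                            # current intercept
lr = 0.01                          # learning rate
x_i = np.array([1.2, -0.7, 0.3])   # a single (scaled) training instance
y_i = 2.5                          # its target value

error = (w @ x_i + b) - y_i        # prediction error for this instance
w = w - lr * error * x_i           # gradient of 0.5 * error**2 w.r.t. w
b = b - lr * error                 # gradient w.r.t. the intercept
```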
SGDRegressor
Here we use Stochastic Gradient Descent to estimate the parameters of a model with
Scikit-Learn. SGDRegressor is an implementation of SGD that can be used even for
regression problems with a large number of features. It can be used
to optimize different cost functions to fit different linear models; by default, it will
optimize the residual sum of squares.
End of explanation
"""
from sklearn.cross_validation import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X,y, test_size=0.25, random_state=33)
"""
Explanation: Selecting the best features
End of explanation
"""
df.columns
feature_names = list(df.columns[2:])
feature_names.remove('PT08.S4(NO2)')
import matplotlib.pyplot as plt
%matplotlib inline
from sklearn.feature_selection import *
fs=SelectKBest(score_func=f_regression,k=5)
X_new=fs.fit_transform(X_train,y_train)
print((fs.get_support(),feature_names))
x_min, x_max = X_new[:,0].min(), X_new[:, 0].max()
y_min, y_max = y_train.min(), y_train.max()
fig=plt.figure()
#fig.subplots_adjust(left=0, right=1, bottom=0, top=1, hspace=0.05, wspace=0.05)
# Two subplots, unpack the axes array immediately
fig, axes = plt.subplots(1,5)
fig.set_size_inches(12,12)
for i in range(5):
axes[i].set_aspect('equal')
axes[i].set_title('Feature ' + str(i))
axes[i].set_xlabel('Feature')
axes[i].set_ylabel('Target')
axes[i].set_xlim(x_min, x_max)
axes[i].set_ylim(y_min, y_max)
plt.sca(axes[i])
plt.scatter(X_new[:,i],y_train)
"""
Explanation: Sometimes there are a lot of features in the dataset, so before learning, we should try to see which features are more relevant for our learning task, i.e. which of them are better predictors of the target. We will use the SelectKBest method from the feature_selection package, and plot the results.
End of explanation
"""
from sklearn.preprocessing import StandardScaler
scalerX = StandardScaler().fit(X_train)
scalery = StandardScaler().fit(y_train)
X_train = scalerX.transform(X_train)
y_train = scalery.transform(y_train)
X_test = scalerX.transform(X_test)
y_test = scalery.transform(y_test)
print(np.max(X_train), np.min(X_train), np.mean(X_train), np.max(y_train), np.min(y_train), np.mean(y_train))
"""
Explanation: In regression tasks, it is very important to normalize the data (to avoid large-valued features weighing too much in the final result)
End of explanation
"""
from sklearn.cross_validation import *
def train_and_evaluate(clf, X_train, y_train):
clf.fit(X_train, y_train)
print ("Coefficient of determination on training set:",clf.score(X_train, y_train))
# create a k-fold croos validation iterator of k=5 folds
cv = KFold(X_train.shape[0], 5, shuffle=True, random_state=33)
scores = cross_val_score(clf, X_train, y_train, cv=cv)
print ("Average coefficient of determination using 5-fold crossvalidation:",np.mean(scores))
from sklearn import linear_model
clf_sgd = linear_model.SGDRegressor(loss='squared_loss', penalty=None, random_state=42)
train_and_evaluate(clf_sgd,X_train,y_train)
print( clf_sgd.coef_)
"""
Explanation: A Linear Model
Let's try a linear model, SGDRegressor, which tries to find the hyperplane that minimizes a certain loss function (typically, the sum of squared distances from each instance to the hyperplane). It uses Stochastic Gradient Descent to find the minimum.
Regression poses an additional problem: how should we evaluate our results? Accuracy is not a good idea, since
we are predicting real values and it is almost impossible to predict the exact final value. There are several measures that can be used. The most common is the R2 score, or coefficient of determination, which measures the proportion of the outcome variation explained by the model and is the default score function for regression methods in scikit-learn. This score reaches its maximum value of 1 when the model perfectly predicts all the test target values.
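As a quick reference, the R2 score can also be computed by hand from its definition; a small sketch with made-up values:

```python
import numpy as np

# R^2 = 1 - SS_res / SS_tot, illustrated with made-up targets and predictions.
y_true = np.array([3.1, 2.4, 4.8, 3.9])
y_pred = np.array([2.9, 2.6, 4.5, 4.1])

ss_res = np.sum((y_true - y_pred) ** 2)          # residual sum of squares
ss_tot = np.sum((y_true - y_true.mean()) ** 2)   # total sum of squares
print(1 - ss_res / ss_tot)
```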
End of explanation
"""
clf_sgd1 = linear_model.SGDRegressor(loss='squared_loss', penalty='l2', random_state=42)
train_and_evaluate(clf_sgd1,X_train,y_train)
clf_sgd2 = linear_model.SGDRegressor(loss='squared_loss', penalty='l1', random_state=42)
train_and_evaluate(clf_sgd2,X_train,y_train)
"""
Explanation: You probably noted the penalty=None parameter when we called the method. The penalization parameter for linear regression methods is introduced to avoid overfitting. It does this by penalizing hyperplanes that have some coefficients that are too large, favoring hyperplanes where each feature contributes more or less equally to the predicted value. The penalty is generally the L2 norm (the sum of the squared coefficients) or the L1 norm (the sum of the absolute values of the coefficients). Let's see how our model works if we introduce an L2 or L1 penalty.
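Roughly speaking (ignoring the intercept and scaling constants), the penalized objectives being minimized look like this, where $\alpha$ controls the strength of the penalty:

$$ J_{L2}(w) = \sum_i (y_i - w \cdot x_i)^2 + \alpha \sum_j w_j^2 \qquad\qquad J_{L1}(w) = \sum_i (y_i - w \cdot x_i)^2 + \alpha \sum_j |w_j| $$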
End of explanation
"""
from sklearn import svm
clf_svr= svm.SVR(kernel='linear')
train_and_evaluate(clf_svr,X_train,y_train)
clf_svr_poly= svm.SVR(kernel='poly')
train_and_evaluate(clf_svr_poly,X_train,y_train)
clf_svr_rbf= svm.SVR(kernel='rbf')
train_and_evaluate(clf_svr_rbf,X_train,y_train)
clf_svr_poly2= svm.SVR(kernel='poly',degree=2)
train_and_evaluate(clf_svr_poly2,X_train,y_train)
"""
Explanation: Support Vector Machines for regression
The regression version of SVM can be used instead to find the hyperplane (note how easy it is to swap the estimator in scikit-learn!). We will try a linear kernel, a polynomial kernel, and finally an RBF kernel. For more information on kernels, see http://scikit-learn.org/stable/modules/svm.html#svm-kernels
End of explanation
"""
from sklearn import ensemble
clf_et=ensemble.ExtraTreesRegressor(n_estimators=10,random_state=42)
train_and_evaluate(clf_et,X_train,y_train)
"""
Explanation: Random Forests for Regression Analysis
Finally, let's try Random Forests again, this time in their Extra Trees regression version.
End of explanation
"""
# Pair each feature with its importance and print them in descending order
for rank, f in sorted(zip(clf_et.feature_importances_, features), reverse=True):
    print("{0:.3f} <-> {1}".format(float(rank), f))
"""
Explanation: An interesting side effect of random forest regression is that you can measure how 'important' each feature is when predicting the final result
End of explanation
"""
from sklearn import metrics
def measure_performance(X,y,clf, show_accuracy=True,
show_classification_report=True,
show_confusion_matrix=True,
show_r2_score=False):
y_pred=clf.predict(X)
if show_accuracy:
print ("Accuracy:{0:.3f}".format(metrics.accuracy_score(y,y_pred)),"\n")
if show_classification_report:
print ("Classification report")
print (metrics.classification_report(y,y_pred),"\n")
if show_confusion_matrix:
print ("Confusion matrix")
print (metrics.confusion_matrix(y,y_pred),"\n")
if show_r2_score:
print ("Coefficient of determination:{0:.3f}".format(metrics.r2_score(y,y_pred)),"\n")
measure_performance(X_test,y_test,clf_et,
show_accuracy=False,
show_classification_report=False,
show_confusion_matrix=False,
show_r2_score=True)
"""
Explanation: Finally, let's evaluate our regressor on the testing set
End of explanation
"""
|
GoogleCloudPlatform/training-data-analyst | courses/machine_learning/deepdive2/end_to_end_ml/solutions/serving_babyweight.ipynb | apache-2.0 | !sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
%%bash
# Check your project name
export PROJECT=$(gcloud config list project --format "value(core.project)")
echo "Your current GCP Project Name is: "$PROJECT
import os
os.environ["BUCKET"] = "your-bucket-id-here" # Recommended: use your project name
"""
Explanation: Building an App Engine app to serve ML predictions
Learning Objectives
Deploy a web application that consumes your model service on Cloud AI Platform.
Introduction
Verify that you have previously trained your Keras model and deployed it for prediction on Cloud AI Platform. If not, go back to train_keras_ai_platform_babyweight.ipynb and deploy_keras_ai_platform_babyweight.ipynb to create them.
In the previous notebook, we deployed our model to CAIP. In this notebook, we'll make a Flask app to show how our models can interact with a web application which could be deployed to App Engine with the Flexible Environment.
Step 1: Review Flask App code in application folder
Let's start with what our users will see. In the application folder, we have prebuilt the components for web application. In the templates folder, the <a href="application/templates/index.html">index.html</a> file is the visual GUI our users will make predictions with.
It works by using an HTML form to make a POST request to our server, passing along the values captured by the input tags.
The form will render a little strangely in the notebook since the notebook environment does not run javascript, nor do we have our web server up and running. Let's get to that!
Step 2: Set environment variables
End of explanation
"""
%%bash
# TODO 1: Deploy a web application that consumes your model service on Cloud AI Platform
gsutil -m rm -r gs://$BUCKET/baby_app
gsutil -m cp -r application/ gs://$BUCKET/baby_app
"""
Explanation: Step 3: Complete application code in application/main.py
We can set up our server with Python using Flask. Below, we've already built out most of the application for you.
The @app.route() decorator defines a function to handle web requests. Let's say our website is www.example.com. Given how our @app.route("/") function is defined, our server will render our <a href="application/templates/index.html">index.html</a> file when users go to www.example.com/ (which is the default route for a website).
So, when a user pings our server with www.example.com/predict, they would use @app.route("/predict", methods=["POST"]) to make a prediction. The data that gets sent over the internet isn't a dictionary, but a string like below:
name1=value1&name2=value2 where name corresponds to the name on the input tag of our html form, and the value is what the user entered. Thankfully, Flask makes it easy to transform this string into a dictionary with request.form.to_dict(), but we still need to transform the data into a format our model expects. We've done this with the gender2str and the plurality2str utility functions.
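As a rough illustration of this pattern (a simplified sketch, not the graded solution -- the transform_features helper below is a placeholder for the feature handling you will write in main.py):

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

def transform_features(form_dict):
    # Placeholder: convert the form strings (e.g. gender, plurality) into the
    # feature format the deployed model expects.
    return form_dict

@app.route("/predict", methods=["POST"])
def predict():
    features = transform_features(request.form.to_dict())
    # main.py would send `features` to the model on Cloud AI Platform here
    # and return the predicted baby weight.
    return jsonify({"features_received": features})
```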
Ok! Let's set up a webserver to take in the form inputs, process them into features, and send these features to our model on Cloud AI Platform to generate predictions to serve to back to users.
Fill in the TODO comments in <a href="application/main.py">application/main.py</a>. Give it a go first and review the solutions folder if you get stuck.
Note: AppEngine test configurations have already been set for you in the file <a href="application/app.yaml">application/app.yaml</a>. Review app.yaml documentation for additional configuration options.
Step 4: Deploy application
So how do we know that it works? We'll have to deploy our website and find out! Notebooks aren't made for website deployment, so we'll move our operation to the Google Cloud Shell.
By default, the shell doesn't have Flask installed, so copy over the following command to install it.
python3 -m pip install --user Flask==0.12.1
Next, we'll need to copy our web app to the Cloud Shell. We can use Google Cloud Storage as an intermediary.
End of explanation
"""
%%bash
echo rm -r baby_app/
echo mkdir baby_app/
echo gsutil cp -r gs://$BUCKET/baby_app ./
echo python3 baby_app/main.py
"""
Explanation: Run the below cell, and copy the output into the Google Cloud Shell
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/inm/cmip6/models/sandbox-3/atmos.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'inm', 'sandbox-3', 'atmos')
"""
Explanation: ES-DOC CMIP6 Model Properties - Atmos
MIP Era: CMIP6
Institute: INM
Source ID: SANDBOX-3
Topic: Atmos
Sub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos.
Properties: 156 (127 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:05
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmosphere model code (CAM 4.0, ARPEGE 3.2,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of atmospheric model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the atmosphere.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 1 deg (Equator) - 0.5 deg
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.4. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on the computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 2.5. High Top
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required: TRUE Type: STRING Cardinality: 1.1
Timestep for the dynamics, e.g. 30 min.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.2. Timestep Shortwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the shortwave radiative transfer, e.g. 1.5 hours.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. Timestep Longwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the longwave radiative transfer, e.g. 3 hours.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the orography.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
"""
Explanation: 4.2. Changes
Is Required: TRUE Type: ENUM Cardinality: 1.N
If the orography type is modified describe the time adaptation changes.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of grid discretisation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
"""
Explanation: 6.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.3. Scheme Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation function order
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.4. Horizontal Pole
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal discretisation pole singularity treatment
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.5. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
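# Illustrative sketch only (hypothetical selection): a 1.N ENUM takes one or
# more of the Valid Choices above; a single choice would be entered as
#     DOC.set_value("hybrid sigma-pressure")
# (additional choices, if applicable, are assumed to be added the same way)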
"""
Explanation: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type of vertical coordinate system
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere dynamical core
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the dynamical core of the model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.3. Timestepping Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Timestepping framework type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
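# Illustrative sketch only (hypothetical selection): for a typical hydrostatic
# core one might record several of the choices above, e.g.
#     DOC.set_value("surface pressure")
#     DOC.set_value("wind components")
#     DOC.set_value("temperature")
# (using repeated DOC.set_value calls for a 1.N property is an assumption of
#  this sketch, not a documented requirement)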
"""
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of the model prognostic variables
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Top boundary condition
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Top Heat
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary heat treatment
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.3. Top Wind
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary wind treatment
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required: FALSE Type: ENUM Cardinality: 0.1
Type of lateral boundary condition
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Horizontal diffusion scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal diffusion scheme method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Tracer advection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme characteristics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.3. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme conserved quantities
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.4. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracer advection scheme conservation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Momentum advection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme characteristics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.3. Scheme Staggering Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme staggering type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.4. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme conserved quantities
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.5. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme conservation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required: TRUE Type: ENUM Cardinality: 1.N
Aerosols whose radiative effect is taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of shortwave radiation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shortwave radiation scheme spectral integration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shortwave radiation transport calculation methods
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
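# Illustrative sketch only (hypothetical value): INTEGER properties take a bare
# (unquoted) number, e.g.
#     DOC.set_value(6)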
"""
Explanation: 15.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Shortwave radiation scheme number of spectral intervals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud ice crystals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud liquid droplets
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
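# Illustrative sketch only (hypothetical selection): a scheme using McICA, for
# example, would be recorded as
#     DOC.set_value("Monte Carlo Independent Column Approximation")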
"""
Explanation: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with aerosols
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with gases
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of longwave radiation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the longwave radiation scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Longwave radiation scheme spectral integration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Longwave radiation transport calculation methods
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 22.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Longwave radiation scheme number of spectral intervals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud ice crystals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud liquid droplets
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with aerosols
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with gases
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere convection and turbulence
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Boundary layer turbulence scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Boundary layer turbulence scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.3. Closure Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Boundary layer turbulence scheme closure order
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
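# Illustrative sketch only: BOOLEAN properties take a bare Python boolean
# rather than a quoted string, e.g.
#     DOC.set_value(True)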
"""
Explanation: 30.4. Counter Gradient
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the boundary layer turbulence scheme use a counter gradient term?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
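# Illustrative sketch only (hypothetical entry): a free-text scheme name, e.g.
#     DOC.set_value("Zhang-McFarlane")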
"""
Explanation: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Deep convection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of deep convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeors and water vapour from updrafts.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Shallow convection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shallow convection scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
"""
Explanation: 32.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shallow convection scheme method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of shallow convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for shallow convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of large scale cloud microphysics and precipitation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the large scale precipitation parameterisation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 34.2. Hydrometeors
Is Required: TRUE Type: ENUM Cardinality: 1.N
Precipitating hydrometeors taken into account in the large scale precipitation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the microphysics parameterisation scheme used for large scale clouds.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 35.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Large scale cloud microphysics processes
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the atmosphere cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
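# Illustrative sketch only (hypothetical selection): this 0.N property may be
# left unset, or given one or more of the choices above, e.g.
#     DOC.set_value("atmosphere_radiation")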
"""
Explanation: 36.3. Atmos Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Atmosphere components that are linked to the cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 36.4. Uses Separate Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Different cloud schemes for the different types of clouds (convective, stratiform and boundary layer)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 36.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 36.6. Prognostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a prognostic scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 36.7. Diagnostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a diagnostic scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 36.8. Prognostic Variables
Is Required: FALSE Type: ENUM Cardinality: 0.N
List the prognostic variables used by the cloud scheme, if applicable.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required: FALSE Type: ENUM Cardinality: 0.1
Method for taking into account overlapping of cloud layers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.2. Cloud Inhomogeneity
Is Required: FALSE Type: STRING Cardinality: 0.1
Method for taking into account cloud inhomogeneity
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
"""
Explanation: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale water distribution type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 38.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale water distribution function name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 38.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale water distribution function order
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
"""
Explanation: 38.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale water distribution coupling with convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
"""
Explanation: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale ice distribution type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 39.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale ice distribution function name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 39.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale ice distribution function order
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
"""
Explanation: 39.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale ice distribution coupling with convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of observation simulator characteristics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator ISSCP top height estimation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41.2. Top Height Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator ISSCP top height direction
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator COSP run configuration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 42.2. Number Of Grid Points
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of grid points
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 42.3. Number Of Sub Columns
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of sub-columns used to simulate sub-grid variability
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 42.4. Number Of Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of levels
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
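# Illustrative sketch only (hypothetical value): FLOAT properties take a bare
# number; a 94 GHz cloud-profiling radar expressed in Hz would be entered as
#     DOC.set_value(94.0e9)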
"""
Explanation: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Cloud simulator radar frequency (Hz)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 43.2. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator radar type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 43.3. Gas Absorption
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses gas absorption
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 43.4. Effective Radius
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses effective radius
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator lidar ice type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 44.2. Overlap
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator lidar overlap
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of gravity wave parameterisation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 45.2. Sponge Layer
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sponge layer in the upper levels in order to avoid gravity wave reflection at the top.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 45.3. Background
Is Required: TRUE Type: ENUM Cardinality: 1.1
Background wave distribution
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 45.4. Subgrid Scale Orography
Is Required: TRUE Type: ENUM Cardinality: 1.N
Subgrid scale orography effects taken into account.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the orographic gravity wave scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave source mechanisms
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave calculation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave propagation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave dissipation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the non-orographic gravity wave scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 47.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave source mechanisms
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
"""
Explanation: 47.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave calculation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 47.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave propagation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 47.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave dissipation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of solar insolation of the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required: TRUE Type: ENUM Cardinality: 1.N
Pathways for the solar forcing of the atmosphere model domain
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
"""
Explanation: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the solar constant.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 50.2. Fixed Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If the solar constant is fixed, enter the value of the solar constant (W m-2).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 50.3. Transient Characteristics
Is Required: TRUE Type: STRING Cardinality: 1.1
Solar constant transient characteristics (W m-2)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
"""
Explanation: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of orbital parameters
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 51.2. Fixed Reference Date
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Reference date for fixed orbital parameters (yyyy)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 51.3. Transient Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Description of transient orbital parameters
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 51.4. Computation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used for computing orbital parameters.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does top of atmosphere insolation impact on stratospheric ozone?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the implementation of volcanic effects in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How volcanic effects are modeled in the atmosphere.
End of explanation
"""
|
BayesianTestsML/tutorial | Python/Bsignedrank.ipynb | gpl-3.0 | import numpy as np
scores = np.loadtxt('Data/accuracy_nbc_aode.csv', delimiter=',', skiprows=1, usecols=(1, 2))
names = ("NBC", "AODE")
"""
Explanation: Bayesian Signed-Rank Test
Module signrank in bayesiantests computes the Bayesian equivalent of the Wilcoxon signed-rank test. It returns probabilities that, based on the measured performance, one model is better than another or vice versa or they are within the region of practical equivalence.
This notebook demonstrates the use of the module.
We will load the classification accuracies of the naive Bayesian classifier and AODE on 54 UCI datasets from the file Data/accuracy_nbc_aode.csv. For simplicity, we will skip the header row and the column with data set names.
End of explanation
"""
import bayesiantests as bt
left, within, right = bt.signrank(scores, rope=0.01,rho=1/10)
print(left, within, right)
"""
Explanation: Functions in the module accept the following arguments.
x: a 2-d array with scores of two models (each row corresponding to a data set) or a vector of differences.
rope: the region of practical equivalence. We consider two classifiers equivalent if the difference in their performance is smaller than rope.
prior_strength: the prior strength for the Dirichlet distribution. Default is 0.6.
prior_place: the region into which the prior is placed. Default is bayesiantests.ROPE, the other options are bayesiantests.LEFT and bayesiantests.RIGHT.
nsamples: the number of Monte Carlo samples used to approximate the posterior.
names: the names of the two classifiers; if x is a vector of differences, positive values mean that the second (right) model had a higher score.
Summarizing probabilities
Function signrank(x, rope, prior_strength=0.6, prior_place=ROPE, nsamples=50000, verbose=False, names=('C1', 'C2')) computes the Bayesian signed-rank test and returns the probabilities that the difference (the score of the first classifier minus the score of the second) is negative, within rope or positive.
End of explanation
"""
left, within, right = bt.signrank(scores, rope=0.01, verbose=True, names=names)
"""
Explanation: The first value (left) is the probability that the first classifier (the left column of x) has a higher score than the second (or that the differences are negative, if x is given as a vector).
In the above case, the right (AODE) performs worse than naive Bayes with a probability of 0.88, and they are practically equivalent with a probability of 0.12.
If we add arguments verbose and names, the function also prints out the probabilities.
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
samples = bt.signrank_MC(scores, rope=0.01)
fig = bt.plot_posterior(samples,names)
plt.show()
"""
Explanation: The posterior distribution can be plotted out:
1. using the function signrank_MC(x, rope, prior_strength=1, prior_place=ROPE, nsamples=50000) we generate the samples of the posterior
2. using the function plot_posterior(samples,names=('C1', 'C2')) we then plot the posterior in the probability simplex
End of explanation
"""
samples = bt.signrank_MC(scores, rope=0.01, prior_strength=0.6, prior_place=bt.LEFT)
fig = bt.plot_posterior(samples,names)
plt.show()
"""
Explanation: Checking sensitivity to the prior
To check the effect of the prior, let us put a greater prior on the left.
End of explanation
"""
samples = bt.signrank_MC(scores, rope=0.01, prior_strength=0.6, prior_place=bt.RIGHT)
fig = bt.plot_posterior(samples,names)
plt.show()
"""
Explanation: ... and on the right
End of explanation
"""
samples = bt.signrank_MC(scores, rope=0.01, prior_strength=6, prior_place=bt.LEFT)
fig = bt.plot_posterior(samples,names)
plt.show()
"""
Explanation: A prior of this (default) strength has a negligible effect. Only a much stronger prior on the left would shift the probabilities toward NBC:
End of explanation
"""
|
cmawer/pycon-2017-eda-tutorial | notebooks/1-RedCard-EDA/4-Redcard-final-joins.ipynb | mit | %matplotlib inline
%config InlineBackend.figure_format='retina'
from __future__ import absolute_import, division, print_function
import matplotlib as mpl
from matplotlib import pyplot as plt
from matplotlib.pyplot import GridSpec
import seaborn as sns
import numpy as np
import pandas as pd
import os, sys
from tqdm import tqdm
import warnings
warnings.filterwarnings('ignore')
sns.set_context("poster", font_scale=1.3)
import missingno as msno
import pandas_profiling
from sklearn.datasets import make_blobs
import time
"""
Explanation: Redcard Exploratory Data Analysis
This dataset is taken from a fantastic paper that examines how analytical choices made by different data science teams, all working on the same dataset to answer the same research question, affect the final outcome.
Many analysts, one dataset: Making transparent how variations in analytical choices affect results
The data can be found here.
The Task
Do an Exploratory Data Analysis on the redcard dataset, keeping in mind the following question: Are soccer referees more likely to give red cards to dark-skin-toned players than light-skin-toned players?
Before plotting/joining/doing something, have a question or hypothesis that you want to investigate
Draw a plot of what you want to see on paper to sketch the idea
Write it down, then make the plan on how to get there
How do you know you aren't fooling yourself?
What else can I check if this is actually true?
What evidence could there be that it's wrong?
End of explanation
"""
# Uncomment one of the following lines and run the cell:
# df = pd.read_csv("redcard.csv.gz", compression='gzip')
# df = pd.read_csv("https://github.com/cmawer/pycon-2017-eda-tutorial/raw/master/data/redcard/redcard.csv.gz", compression='gzip')
def save_subgroup(dataframe, g_index, subgroup_name, prefix='raw_'):
save_subgroup_filename = "".join([prefix, subgroup_name, ".csv.gz"])
dataframe.to_csv(save_subgroup_filename, compression='gzip', encoding='UTF-8')
test_df = pd.read_csv(save_subgroup_filename, compression='gzip', index_col=g_index, encoding='UTF-8')
# Test that we recover what we send in
if dataframe.equals(test_df):
print("Test-passed: we recover the equivalent subgroup dataframe.")
else:
print("Warning -- equivalence test!!! Double-check.")
def load_subgroup(filename, index_col=[0]):
return pd.read_csv(filename, compression='gzip', index_col=index_col)
clean_players = load_subgroup("cleaned_players.csv.gz")
players = load_subgroup("raw_players.csv.gz", )
countries = load_subgroup("raw_countries.csv.gz")
referees = load_subgroup("raw_referees.csv.gz")
agg_dyads = pd.read_csv("raw_dyads.csv.gz", compression='gzip', index_col=[0, 1])
# tidy_dyads = load_subgroup("cleaned_dyads.csv.gz")
tidy_dyads = pd.read_csv("cleaned_dyads.csv.gz", compression='gzip', index_col=[0, 1])
"""
Explanation: About the Data
The dataset is available as a list with 146,028 dyads of players and referees and includes details from players, details from referees and details regarding the interactions of player-referees. A summary of the variables of interest can be seen below. A detailed description of all variables included can be seen in the README file on the project website.
From a company for sports statistics, we obtained data and profile photos from all soccer players (N = 2,053) playing in the first male divisions of England, Germany, France and Spain in the 2012-2013 season and all referees (N = 3,147) that these players played under in their professional career (see Figure 1). We created a dataset of player-referee dyads including the number of matches players and referees encountered each other and our dependent variable, the number of red cards given to a player by a particular referee throughout all matches the two encountered each other.
-- https://docs.google.com/document/d/1uCF5wmbcL90qvrk_J27fWAvDcDNrO9o_APkicwRkOKc/edit
| Variable Name: | Variable Description: |
| -- | -- |
| playerShort | short player ID |
| player | player name |
| club | player club |
| leagueCountry | country of player club (England, Germany, France, and Spain) |
| height | player height (in cm) |
| weight | player weight (in kg) |
| position | player position |
| games | number of games in the player-referee dyad |
| goals | number of goals in the player-referee dyad |
| yellowCards | number of yellow cards player received from the referee |
| yellowReds | number of yellow-red cards player received from the referee |
| redCards | number of red cards player received from the referee |
| photoID | ID of player photo (if available) |
| rater1 | skin rating of photo by rater 1 |
| rater2 | skin rating of photo by rater 2 |
| refNum | unique referee ID number (referee name removed for anonymizing purposes) |
| refCountry | unique referee country ID number |
| meanIAT | mean implicit bias score (using the race IAT) for referee country |
| nIAT | sample size for race IAT in that particular country |
| seIAT | standard error for mean estimate of race IAT |
| meanExp | mean explicit bias score (using a racial thermometer task) for referee country |
| nExp | sample size for explicit bias in that particular country |
| seExp | standard error for mean estimate of explicit bias measure |
End of explanation
"""
!conda install pivottablejs -y
from pivottablejs import pivot_ui
clean_players = load_subgroup("cleaned_players.csv.gz")
temp = tidy_dyads.reset_index().set_index('playerShort').merge(clean_players, left_index=True, right_index=True)
temp.shape
# This does not work on Azure notebooks out of the box
# pivot_ui(temp[['skintoneclass', 'position_agg', 'redcard']], )
# How many games has each player played in?
games = tidy_dyads.groupby(level=1).count()
sns.distplot(games);
(tidy_dyads.groupby(level=0)
.count()
.sort_values('redcard', ascending=False)
.rename(columns={'redcard':'total games refereed'})).head()
(tidy_dyads.groupby(level=0)
.sum()
.sort_values('redcard', ascending=False)
.rename(columns={'redcard':'total redcards given'})).head()
(tidy_dyads.groupby(level=1)
.sum()
.sort_values('redcard', ascending=False)
.rename(columns={'redcard':'total redcards received'})).head()
tidy_dyads.head()
tidy_dyads.groupby(level=0).size().sort_values(ascending=False)
total_ref_games = tidy_dyads.groupby(level=0).size().sort_values(ascending=False)
total_player_games = tidy_dyads.groupby(level=1).size().sort_values(ascending=False)
total_ref_given = tidy_dyads.groupby(level=0).sum().sort_values(ascending=False,by='redcard')
total_player_received = tidy_dyads.groupby(level=1).sum().sort_values(ascending=False, by='redcard')
sns.distplot(total_player_received, kde=False);
sns.distplot(total_ref_given, kde=False);
tidy_dyads.groupby(level=1).sum().sort_values(ascending=False, by='redcard').head()
tidy_dyads.sum(), tidy_dyads.count(), tidy_dyads.sum()/tidy_dyads.count()
player_ref_game = (tidy_dyads.reset_index()
.set_index('playerShort')
.merge(clean_players,
left_index=True,
right_index=True)
)
player_ref_game.head()
player_ref_game.shape
bootstrap = pd.concat([player_ref_game.sample(replace=True,
n=10000).groupby('skintone').mean()
for _ in range(100)])
ax = sns.regplot(bootstrap.index.values,
y='redcard',
data=bootstrap,
lowess=True,
scatter_kws={'alpha':0.4,},
x_jitter=(0.125 / 4.0))
ax.set_xlabel("Skintone");
"""
Explanation: Joining and further considerations
End of explanation
"""
|
karlstroetmann/Formal-Languages | ANTLR4-Python/AST-2-Dot.ipynb | gpl-2.0 | import graphviz as gv
"""
Explanation: Drawing Abstract Syntax Trees with GraphViz
End of explanation
"""
def tuple2dot(t):
dot = gv.Digraph('Abstract Syntax Tree')
Nodes_2_Names = {}
assign_numbers((), t, Nodes_2_Names)
create_nodes(dot, (), t, Nodes_2_Names)
return dot
"""
Explanation: The function tuple2dot takes a nested tuple t as its argument. This nested tuple is interpreted as an
abstract syntax tree. This tree is visualized using graphviz.
End of explanation
"""
def assign_numbers(address, t, Nodes2Numbers, n=0):
Nodes2Numbers[address] = str(n)
if isinstance(t, str) or isinstance(t, int):
return n + 1
n += 1
j = 1
for t in t[1:]:
n = assign_numbers(address + (j,), t, Nodes2Numbers, n)
j += 1
return n
"""
Explanation: The function assign_numbers takes four arguments:
- address is a tuple of child indices that identifies the current node inside the overall tree,
- t is a nested tuple that is interpreted as a tree,
- Nodes2Numbers is a dictionary,
- n is a natural number.
Given a tree t that is represented as a nested tuple, the function assign_numbers assigns a unique natural number
to every node of t. This assignment is stored in the dictionary Nodes2Numbers. n is the first natural number
that is used. The function returns the smallest natural number that is still unused.
End of explanation
"""
def create_nodes(dot, a, t, Nodes_2_Names):
root = Nodes_2_Names[a]
if isinstance(t, str) or isinstance(t, int):
dot.node(root, label=str(t))
return
dot.node(root, label=t[0])
j = 1
for c in t[1:]:
child = Nodes_2_Names[a + (j,)]
dot.edge(root, child)
create_nodes(dot, a + (j,), c, Nodes_2_Names)
j += 1
"""
Explanation: The function create_nodes takes four arguments:
- dot is an object of class graphviz.Digraph,
- a is the address (a tuple of child indices) of the node t inside the overall tree,
- t is an abstract syntax tree represented as a nested tuple,
- Nodes_2_Names is a dictionary mapping nodes in t to unique names
that can be used as node names in graphviz.
The function creates the nodes in t and connects them via directed edges so that t is represented as a tree.
End of explanation
"""
|
pombredanne/https-gitlab.lrde.epita.fr-vcsn-vcsn | doc/notebooks/expression.derivation.ipynb | gpl-3.0 | import vcsn
from IPython.display import Latex
def diffs(r, ss):
eqs = []
for s in ss:
eqs.append(r'\frac{{\partial}}{{\partial {0}}} {1}& = {2}'
.format(s,
r.format('latex'),
r.derivation(s).format('latex')))
return Latex(r'''\begin{{aligned}}
{0}
\end{{aligned}}'''.format(r'\\'.join(eqs)))
"""
Explanation: expression.derivation(label,breaking=False)
Compute the derivation of a weighted expression.
Arguments:
- label: the (non empty) string to derive the expression with.
- breaking: whether to split the result.
See also:
- expression.derived_term
- expression.expansion
- polynomial.split
References:
- lombardy.2005.tcs defines the derivation
- angrand.2010.jalc defines the breaking derivation
Examples
The following function will prove handy: it takes a rational expression and a list of strings, and returns a $\LaTeX$ aligned environment that displays the corresponding derivations nicely.
End of explanation
"""
b = vcsn.context('lal_char(ab), b')
r = b.expression('[ab]{3}')
r.derivation('a')
"""
Explanation: Classical expressions
In the classical case (labels are letters, and weights are Boolean), this is the construct as described by Antimirov.
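For instance (a small worked example): since $a+b$ has a null constant term, $\frac{\partial}{\partial a}(a+b)^3 = (a+b)^2$, and deriving repeatedly along $a$ yields $(a+b)$, then $\varepsilon$, and finally the zero polynomial, which is what the surrounding cells compute.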
End of explanation
"""
diffs(r, ['a', 'aa', 'aaa', 'aaaa'])
"""
Explanation: Or, using the diffs function we defined above:
End of explanation
"""
q = vcsn.context('lal_char(abc), q')
r = q.expression('(<1/6>a*+<1/3>b*)*')
diffs(r, ['a', 'aa', 'ab', 'b', 'ba', 'bb'])
"""
Explanation: Weighted Expressions
Of course, expressions can be weighted.
End of explanation
"""
r.derived_term()
"""
Explanation: And this is tightly connected with the construction of the derived-term automaton.
End of explanation
"""
r = q.expression('[ab](<2>[ab])', 'associative')
r
r.derivation('a')
r.derivation('a', True)
r.derivation('a').split()
"""
Explanation: Breaking derivation
The "breaking" derivation "splits" the polynomial at the end.
End of explanation
"""
r.derived_term()
r.derived_term('breaking_derivation')
"""
Explanation: Again, this is tightly connected with both flavors of the derived-term automaton.
End of explanation
"""
|
ihmeuw/dismod_mr | examples/export_csv.ipynb | agpl-3.0 | !wget http://ghdx.healthdata.org/sites/default/files/record-attached-files/IHME_GBD_HEP_C_RESEARCH_ARCHIVE_Y2013M04D12.ZIP
!unzip IHME_GBD_HEP_C_RESEARCH_ARCHIVE_Y2013M04D12.ZIP
# This Python code will export predictions
# for the following region/sex/year:
predict_region = 'USA'
predict_sex = 'male'
predict_year = 2005
# import dismod code
import dismod_mr
"""
Explanation: Getting estimates out of DisMod-MR
The goal of this document is to demonstrate how to export age-specific prevalence estimates from DisMod-MR in a comma-separated value (CSV) format, for use in subsequent analysis.
It uses data from the replication dataset for regional estimates of HCV prevalence, as published in Mohd Hanafiah K, Groeger J, Flaxman AD, Wiersma ST. Global epidemiology of hepatitis C virus infection: New estimates of age-specific antibody to HCV seroprevalence. Hepatology. 2013 Apr;57(4):1333-42. doi: 10.1002/hep.26141. Epub 2013 Feb 4. http://www.ncbi.nlm.nih.gov/pubmed/23172780
The dataset is available from: http://ghdx.healthdata.org/sites/default/files/record-attached-files/IHME_GBD_HEP_C_RESEARCH_ARCHIVE_Y2013M04D12.ZIP
wget http://ghdx.healthdata.org/sites/default/files/record-attached-files/IHME_GBD_HEP_C_RESEARCH_ARCHIVE_Y2013M04D12.ZIP
unzip IHME_GBD_HEP_C_RESEARCH_ARCHIVE_Y2013M04D12.ZIP
End of explanation
"""
model_path = 'hcv_replication/'
dm = dismod_mr.data.load(model_path)
if predict_year == 2005:
dm.keep(areas=[predict_region], sexes=['total', predict_sex], start_year=1997)
elif predict_year == 1990:
dm.keep(areas=[predict_region], sexes=['total', predict_sex], end_year=1997)
else:
raise ValueError('predict_year must equal 1990 or 2005')
# Fit model using the data subset (faster, but no borrowing strength)
dm.vars += dismod_mr.model.process.age_specific_rate(dm, 'p', predict_region, predict_sex, predict_year)
%time dismod_mr.fit.asr(dm, 'p', iter=2000, burn=1000, thin=1)
# Make posterior predictions
pred = dismod_mr.model.covariates.predict_for(
dm, dm.parameters['p'],
predict_region, predict_sex, predict_year,
predict_region, predict_sex, predict_year, True, dm.vars['p'], 0, 1)
"""
Explanation: Load the model, and keep only data for the prediction region/sex/year
End of explanation
"""
import pandas as pd
# This generates a csv with 1000 rows,
# one for each draw from the posterior distribution
# Each column corresponds to a one-year age group,
# e.g. column 10 is prevalence at age 10
pd.DataFrame(pred).to_csv(
model_path + '%s-%s-%s.csv'%(predict_region, predict_sex, predict_year))
!ls -hal hcv_replication/$predict_region-*.csv
"""
Explanation: The easiest way to get these predictions into a csv file is to use the Python Pandas package:
End of explanation
"""
weights = [1, 8, 8, 9, 9, 10, 10, 10, 10, 10,
10, 10, 10, 10, 9, 9, 9, 9, 9, 9,
9, 9, 9, 9, 9, 9, 9, 9, 9, 9,
9, 9, 8, 8, 8, 8, 8, 8, 8, 8,
8, 7, 7, 7, 7, 7, 7, 7, 7, 7,
6, 6, 6, 6, 6, 6, 5, 5, 5, 5,
5, 5, 4, 4, 4, 4, 4, 4, 4, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
# 1000 samples from the posterior distribution for age-standardized prevalence
import numpy as np, matplotlib.pyplot as plt
age_std = np.dot(pred, weights) / np.sum(weights)
plt.hist(age_std, color='#cccccc', density=True)
plt.xlabel('Age-standardized Prevalence')
plt.ylabel('Posterior Probability');
"""
Explanation: To aggregate this into pre-specified age categories, you need to specify the age weights and groups:
End of explanation
"""
import pymc as mc
print('age_std prev mean:', age_std.mean())
print('age_std prev 95% UI:', mc.utils.hpd(age_std, .05))
"""
Explanation: You can extract an age-standardized point and interval estimate from the 1000 draws from the posterior distribution stored in age_std as follows: (to do this for crude prevalence, use the population weights to average, instead of standard weights.)
End of explanation
"""
group_cutpoints = [0, 1, 5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60, 65, 70, 75, 80, 100]
results = []
for a0, a1 in zip(group_cutpoints[:-1], group_cutpoints[1:]):
age_grp = np.dot(pred[:, a0:a1], weights[a0:a1]) / np.sum(weights[a0:a1])
results.append(dict(a0=a0,a1=a1,mu=age_grp.mean(),std=age_grp.std()))
results = pd.DataFrame(results)
print(np.round(results.head(), 2))
plt.errorbar(.5*(results.a0+results.a1), results.mu,
xerr=.5*(results.a1-results.a0),
yerr=1.96*results['std'],
fmt='ks', capsize=0, mec='w')
plt.axis(ymin=0, xmax=100);
!rm IHME_GBD_HEP_C_RESEARCH_ARCHIVE_Y2013M04D12.ZIP
!rm -r hcv_replication/
!date
"""
Explanation: For groups, just do the same thing group by group:
End of explanation
"""
|
kfollette/ASTR200-Spring2017 | Labs/Lab11/Lab11.ipynb | mit | #data from: http://exoplanetarchive.ipac.caltech.edu/cgi-bin/TblView/nph-tblView?app=ExoTbls&config=planets
#download table -> csv format, all rows, all columns
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import scipy.stats as st
# these set the pandas defaults so that it will print ALL values, even for very long lists and large dataframes
pd.set_option('display.max_columns', None)
pd.set_option('display.max_rows', None)
"""
Explanation: Some of the code in this lab is based on a program written by Dmitry Savransky (Cornell) to parse the IPAC exoplanet table
Names: [Insert Your Names Here]
Lab 11 - Data Investigation 2 (Week 1) - Exoplanet Database
Lab 11 Contents
Introduction to the Exoplanet Database
Adding a Third Dimension to Scatterplots
Computing Correlation Coefficients
Data Investigation 2 - Week 2 Instructions
End of explanation
"""
#read in the data, skipping the first 73 rows of ancillary information
data=pd.read_csv('planets032717.csv', skiprows=73)
#print the columns.
data.columns
#this truncates to only planet detection methods with >30 successful detections (skip if you want all of them)
methods,methods_inds,methods_counts = np.unique(data['pl_discmethod'],return_index=True,return_counts=True)
methods = methods[methods_counts> 30]
methods
data
#find the indices of all entries where pl_discmethod is one of these four
inds = [j for j in range(len(data)) if data['pl_discmethod'][j] in methods]
#write a new dataframe with just these entries
data = data.loc[inds]
##note this cell can't be run twice because it redefines (overwrites) the data dataframe.
#If you need to reexecute, start at the beginning.
#note that the table is now shorter
len(data)
"""
Explanation: 1. Introduction and Preliminaries
This lab will use the publicly-available IPAC exoplanet database. This is a constantly updated database that contains all measured properties of confirmed extrasolar planets as well as some properties of their host stars. For descriptions of the columns/quantities in the database and their units, see the living table
End of explanation
"""
#Some setup to plot stuff
#List of symbol styles (uses shorthand that you can find described here: http://matplotlib.org/api/markers_api.html)
syms = 'os^pvD<>8*'
#color list in [red, green, blue] format
cmap = [[0,0,1],[1,0,0],[0.1,1,0.1],[1,0.6,0],
[0.75,0.5,0],[1,0.75,0.8],[0.75,0,0.75],
[0.7,0.75,1],[0.85,0.85,0.85],[0,0.75,1]]
#Some info about solar system planets
planetnames = ['Mercury','Venus','Earth','Mars','Jupiter','Saturn','Uranus','Neptune','Pluto']
#planet masses in 10^24kg
Ms=np.array([0.33,4.87,5.97,0.642,1898,568,86.8,102,0.0146])
#to Jupiter masses
Ms = Ms/Ms[4]
#Planet radii in km
Rs = np.array([2440.,6052.,6378.,3397.,71492.,60268.,25559.,24766.,24766.])
#to Jupiter radii
Rs = Rs/Rs[4]
#planet semi-major axes in AU
smas = np.array([0.3871,0.7233, 1,1.524,5.203,9.539,19.19,30.06,39.48])
#placement info for planet name labels on plot. All relative to point
has = ['left','right','right','left','left','left','right','right','left']
offs = [(0,0),(-4,-12),(-4,4),(0,0),(6,-4),(5,1),(-6,-8),(-5,4),(0,0)]
# Plot Planet mass vs. semi-major axis (distance from star)
fig,ax = plt.subplots(figsize=(7,7))
#loop over all of the methods and their corresponding symbols and colors
for m,s,c in zip(methods,syms,cmap):
#find all of the indices for that method
inds = data['pl_discmethod'] == m
#pull the planet masses and semi-major axis for those entries
mj = data[inds]['pl_bmassj']
a = data[inds]['pl_orbsmax']
# make a scatterplot with symbols from symbol array s, colors from color array c, label=method m
ax.scatter(a,mj,marker=s,s=60,
facecolors=c,edgecolors=None,alpha=0.75,label=m)
#overplot solar system planets
ax.scatter(smas,Ms,marker='o',s=60,facecolors='yellow',edgecolors=None,alpha=1)
for a,m,n,ha,off in zip(smas,Ms,planetnames,has,offs):
#add label with planet name
ax.annotate(n,(a,m),ha=ha,xytext=off,textcoords='offset points')
#log scale axes
ax.set_xscale('log')
ax.set_yscale('log')
#set limits to reasonable ranges
ax.set_xlim([1e-2,1e3])
ax.set_ylim([1e-3,40])
#label axes and plot
ax.set_xlabel('Semi-Major Axis (AU)', fontsize=14)
ax.set_ylabel('(Minimum) Mass (M$_J$)', fontsize=14)
ax.legend(loc='lower right',scatterpoints=1,prop={'size':12})
#define a second axis on right for mass in earth masses
ax2 = ax.twinx()
ax2.set_yscale('log')
ax2.set_ylim(np.array(ax.get_ylim())/Ms[2])
ax2.set_ylabel('M$_\oplus$', fontsize=14)
"""
Explanation: 2. Adding a Third Dimension to Scatterplots
This part will walk you through how to take a third quantity (in this case planetary discovery method) and show it by giving various data categories different plotting symbols. This is a good method when you have one categorical variable and two continuous variables to plot.
End of explanation
"""
#code for plot goes here
"""
Explanation: <div class=hw>
### Exercise 1
--------------------
Manipulate the code above to answer the following questions, and describe the results in words. Integrate plots that you create into your explanations to support your answer, but do it by adding a save statement to the code above and saving a different file each time. In each case, make sure your new image "looks good" before saving it. Generally, this will mean manipulating things like the legend location, font size and axis limits. Before moving on to answering the next question, make sure to undo whatever manipulations you've made to the plot so that you start with a clean slate each time.
a) What happens when you don't filter the data to include only those methods with > 30 successful discoveries? List one or two interesting things that you notice in this view and 1 or two questions that it raises for you.
b) What happens when (now back to just four methods) you don't use a log scale for the axes? Manipulate the x and y axis ranges as best you can to make an informative plot with linear axis scales and then compare it to the log plot. What are the advantages and disadvantages of each?
c) Manipulate the plot until you can see ALL of the solar system planets (there are a few missing in this default view). List one or two interesting things that you notice in this view and 1 or two questions that it raises for you.
*This markdown cell is for part a*
*This markdown cell is for part b*
*This markdown cell is for part c*
<div class=hw>
### Exercise 2
--------------------
Using the code for the graphic above as a model, make a similar plot for planet mass (x-axis) and radius (y-axis). This time, include error bars (so use plt.errorbar instead of plt.scatter).
*Hints:
0) I suggest starting with making just a scatterplot that you like with plt.scatter, and then translating it to a plot with errorbars with plt.errorbar
1) The scatter keywords "facecolors" and "marker" are (functionally) the same as the errorbar keywords "color" and "fmt" so you'll want to "translate". The scatter keywords s and edgecolors shouldn't be necessary and won't be recognized by errorbar, so just remove them.
2) Note that errors in both mass AND radius are given in the table, so you should use the xerr ***and*** yerr keywords.
3) There are two errors given in the table for each quantity (mass, radius, etc) - a positive error and a negative one (sometimes the uncertainty in a measurement is greater in one direction than the other). To specify asymmetric error bars, use the following basic syntax: plt.errorbar(x,y,xerr=[negerr,poserr],yerr=[negerr,poserr])
4) Since the negative errorbars are specified in the table with a negative sign and plt.errorbar's xerr and yerr keywords don't know how to handle negative numbers, you'll have to use the absolute value. In other words, the basic syntax is actually: plt.errorbar(x,y,xerr=[abs(negerr),poserr],yerr=[abs(negerr),poserr])*
End of explanation
"""
data.dtypes
#select only those columns that are numeric (these are the only ones that make sense to calculate correlations for)
data2=data.select_dtypes(exclude=["object"])
#np.corrcoef(data2.values,rowvar=0) # alternate syntax, but doesn't deal well with nans
data2.corr()
"""
Explanation: 3. Computing Correlation Coefficients
If you look at your mass/radius plot, it should be fairly clear to you that the two quantities are correlated, meaning that there is a relationship between the value of one variable and the value of the other.
One way to seek out correlations in the entire dataset would be to plot every variable versus every other, and indeed we do this below, but scatterplots can only tell us so much, so let's talk about how to calculate the correlation coefficient "r" that we discussed in class.
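(As a reminder, for two variables $x$ and $y$ the Pearson coefficient is $r = \mathrm{cov}(x,y)/(\sigma_x \sigma_y)$, a number between -1 and 1; this is what pandas' .corr() computes by default.)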
End of explanation
"""
#create list of column names
cols = data2.columns
#create an empty list to store column indices for non-error columns
colidx=[]
#loop over column list
for j in np.arange(len(cols)):
#find all columns that don't contain "error", "lim" or "blend" in the name
if "err" not in cols[j] and "lim" not in cols[j] and "blend" not in cols[j]:
colidx.append(j)
#create a new dataframe with just these "non-error" columns
data3 = data2[colidx]
corr_mat = data3.corr()
#corr_mat
"""
Explanation: There are way too many things to look at here, so let's remove all of the error columns and those with "lim" or "blend" in the name (which are more complicated to understand than the other variables and are not as useful in this context) and just look at correlations between what's left
End of explanation
"""
#program goes here
"""
Explanation: <div class=hw>
### Exercise 3
---------------
Write a function that replaces any values in a dataframe less than a user-specified value with NaNs. For example, if the function were called isolate_vals and it were operating on the dataframe df, then isolate_vals(df, 0.3) would return a dataframe with all values < 0.3 replaced with NaNs.
Then, use that program to identify the strongest correlations in this matrix and describe them (look at the online table for descriptions of what the columns are). Which ones do you think might be interesting/meaningful and why?
End of explanation
"""
from IPython.core.display import HTML
def css_styling():
styles = open("../custom.css", "r").read()
return HTML(styles)
css_styling()
"""
Explanation: 4. Data Investigation 2 - Week 2 Instructions
Now that you are familiar with the exoplanet database, you and your partner must come up with an investigation that you would like to complete using this data. It's completely up to you what you choose to investigate, but here are a few broad ideas to guide your thinking:
You might choose to isolate a population of interesting planets that you noticed in one of the plots and attempt to understand it (descriptive statistics, correlations, etc) and/or compare it to another population
You might make a quantitative comparison of the TYPES of planets detected with different methods and connect this to the limitations of that method (e.g. what types of planets is it best at discovering and why?)
You might isolate a region of a plot with an apparent correlation and attempt to fit a model to it.
You might consider adding a fourth variable to one of the plots you made by sizing the points to represent that variable.
In all cases, I can provide suggestions and guidance, and would be happy to discuss at office hours or by appointment.
Before 5pm next Monday evening (4/11), you must send me a brief e-mail (that you write together, one e-mail per group) describing a plan for how you will approach a question that you have developed. What do you need to know that you don't know already? What kind of plots will you make and what kinds of statistics will you compute? What is your first thought for what your final data representations will look like?
End of explanation
"""
|
valter-lisboa/ufo-notebooks | Python3/ufo-sample-python3.ipynb | gpl-3.0 | import pandas as pd
import numpy as np
"""
Explanation: USA UFO sightings (Python 3 version)
This notebook is based on the first chapter sample from Machine Learning for Hackers, with some added features. I did this to present Jupyter Notebook with Python 3 for Tech Days at my job.
The original link is offline, so you need to download the file from the author's repository into ../data from the R notebook directory.
I will assume the following questions need to be answered:
- What is the best place in the USA for UFO sightings?
- What is the best month for UFO sightings in the USA?
Loading the data
This first section handles loading the main data file using pandas.
End of explanation
"""
ufo = pd.read_csv(
'../data/ufo_awesome.tsv',
sep = "\t",
header = None,
dtype = object,
na_values = ['', 'NaN'],
error_bad_lines = False,
warn_bad_lines = False
)
"""
Explanation: Here we are loading the dataset with pandas using a minimal set of options.
- sep: since the file is in TSV format, the separator is the <TAB> special character;
- na_values: the file has empty strings for NaN values;
- header: do not treat any row as a header, since the file lacks one;
- dtype: load the dataframe columns as objects, avoiding interpreting the data types¹;
- error_bad_lines: skip lines with more fields than the number of columns;
- warn_bad_lines: set to False to avoid ugly warnings on the screen; activate this if you want to analyse the bad rows.
¹ Before making assumptions about the data I prefer to load it as objects and convert it later, after making sense of it. Corrupted data could also make a cast impossible.
End of explanation
"""
ufo.describe()
ufo.head()
"""
Explanation: With the data loaded in the ufo dataframe, let's check its composition and the first few rows.
End of explanation
"""
ufo.columns = [
'DateOccurred',
'DateReported',
'Location',
'Shape',
'Duration',
'LongDescription'
]
"""
Explanation: The dataframe's describe() shows us how many items (excluding NaN) each column has, how many are unique, which value is the most frequent, and how many times that value appears. head() simply shows us the first 5 rows (the first index is 0 in Python).
Dealing with metadata and column names
We need to set the column names; to do so it is necessary to consult the data documentation. The table below shows the field details taken from the metadata:
| Short name | Type | Description |
| ---------- | ---- | ----------- |
| sighted_at | Long | Date the event occurred (yyyymmdd) |
| reported_at | Long | Date the event was reported |
| location | String | City and State where event occurred |
| shape | String | One word string description of the UFO shape |
| duration | String | Event duration (raw text field) |
| description | String | A long, ~20-30 line, raw text description |
To keep in sync with the R example, we will set the column names to the following values:
- DateOccurred
- DateReported
- Location
- Shape
- Duration
- LongDescription
End of explanation
"""
ufo.head()
"""
Explanation: Now we have a good-looking dataframe with named columns.
End of explanation
"""
ufo.drop(
labels = ['DateReported', 'Duration', 'Shape', 'LongDescription'],
axis = 1,
inplace = True
)
ufo.head()
"""
Explanation: Data Wrangling
Now we start to transform our data into something we can analyse.
Keeping only necessary data
To decide what to keep, let's get back to the questions to be answered.
The first one is about the best place in the USA for UFO sightings; for this we will need the Location column, and at some point we will filter on it. The second question is about the best month for UFO sightings, which leads to the DateOccurred column.
Based on this, the Shape and LongDescription columns can be stripped right now (their lack of relevance to these questions is fairly obvious). But there are 2 other columns which may or may not be removed: DateReported and Duration.
I always try to keep, at least until a second pass, columns with some potentially useful information for further data wrangling or for getting a statistical sense of the data. One of these columns holds a date (in YYYYMMDD format) and the other a free-text string that could carry useful information if treated and converted to a numeric format. For the purpose of this demo I am removing both: DateReported will not be used further (what matters is when the sighting occurred, not when it was registered), and Duration is really messy, so for an example to show on a Tech Day the effort to decompose it is not worth it.
The drop() command below has the following parameters:
- labels: columns to remove;
- axis: set to 1 to remove columns;
- inplace: set to True to modify the dataframe itself and return None.
End of explanation
"""
ufo['DateOccurred'] = pd.Series([
pd.to_datetime(
date,
format = '%Y%m%d',
errors='coerce'
) for date in ufo['DateOccurred']
])
ufo.describe()
"""
Explanation: Converting data
Now we are ready to start the data transformation: the date columns must be converted to Python datetime objects to allow manipulation as time series.
The first problem happens when trying to run this code, which uses pandas.to_datetime() to convert the strings:
python
ufo['DateOccurred'] = pd.Series([
pd.to_datetime(
date,
format = '%Y%m%d'
) for date in ufo['DateOccurred']
])
This will raise a series of errors (a stack trace) caused by this:
ValueError: time data '0000' does not match format '%Y%m%d' (match)
What happens here is bad data (welcome to the data science world: most data will come corrupted, missing, wrong, or with some other problem). Before proceeding we need to deal with the dates in the wrong format.
So what to do? Well, we can make the to_datetime() method ignore the errors, putting NaT values in the field. Let's convert it and then see how the DateOccurred column looks.
End of explanation
"""
ufo['DateOccurred'].isnull().sum()
"""
Explanation: The column is now a datetime object and has 60814 elements against the original 61069, which shows that some bad dates are gone. The following code shows us how many elements were removed.
End of explanation
"""
ufo.isnull().sum()
ufo.dropna(
axis = 0,
inplace = True
)
ufo.isnull().sum()
ufo.describe()
"""
Explanation: It is no surprise that 60814 + 255 = 61069; we need to deal with these values too.
So we have a field DateOccurred with some NaN values. At this point we need to make an important decision: get rid of the rows with NaN dates or fill them with something.
There is no universal guide for this; we could fill them with the mean of the column or copy the content of the DateReported column. But in this case the missing dates are less than 0.5% of the total, so for simplicity's sake we will simply drop all NaN values.
End of explanation
"""
ufo['Year'] = pd.DatetimeIndex(ufo['DateOccurred']).year
ufo['Month'] = pd.DatetimeIndex(ufo['DateOccurred']).month
ufo.head()
ufo['Month'].describe()
ufo['Year'].describe()
"""
Explanation: With the dataframe holding clean dates, let's create another 2 columns to handle years and months separately. This will make some analyses easier (like discovering which month of the year is best for UFO sightings).
End of explanation
"""
sightings_by_year = ufo.groupby('Year').size().reset_index()
sightings_by_year.columns = ['Year', 'Sightings']
sightings_by_year.describe()
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
import seaborn as sns
plt.style.use('seaborn-white')
%matplotlib inline
plt.xticks(rotation = 90)
sns.barplot(
data = sightings_by_year,
x = 'Year',
y = 'Sightings',
color= 'blue'
)
ax = plt.gca()
ax.xaxis.set_major_locator(ticker.MultipleLocator(base=5))
"""
Explanation: A funny thing about the years is that the oldest sighting is from 1762! This dataset includes historical sightings.
How significant can this be? Well, to figure it out it's time to plot some charts. Humans are visual beings, and a picture is really worth much more than a bunch of numbers and words.
To do so we will use Python's matplotlib library (together with seaborn) to build our graphs.
Analysing the years
Before starting, let's count the sightings by year.
The commands below are equivalent to the following SQL code:
SQL
SELECT Year, count(*) AS Sightings
FROM ufo
GROUP BY Year
End of explanation
"""
ufo = ufo[ufo['Year'] >= 1900]
"""
Explanation: We can see that the number of sightings is more representative after around 1900, so we will filter the dataframe to keep all years above this threshold.
End of explanation
"""
%matplotlib inline
new_sightings_by_year = ufo.groupby('Year').size().reset_index()
new_sightings_by_year.columns = ['Year', 'Sightings']
new_sightings_by_year.describe()
%matplotlib inline
plt.xticks(rotation = 90)
sns.barplot(
data = new_sightings_by_year,
x = 'Year',
y = 'Sightings',
color= 'blue'
)
ax = plt.gca()
ax.xaxis.set_major_locator(ticker.MultipleLocator(base=5))
"""
Explanation: Now let's see how the graph behaves.
End of explanation
"""
locations = ufo['Location'].str.split(', ').apply(pd.Series)
ufo['City'] = locations[0]
ufo['State'] = locations[1]
"""
Explanation: Handling location
Here we take two steps: first, split every location into city and state, for the USA only; second, load a dataset with the latitude and longitude of each USA city for a future merge.
End of explanation
"""
|
nmih/ssbio | docs/notebooks/GEM-PRO - Calculating Protein Properties.ipynb | mit | import sys
import logging
# Import the GEM-PRO class
from ssbio.pipeline.gempro import GEMPRO
# Printing multiple outputs per cell
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
"""
Explanation: GEM-PRO - Calculating Protein Properties
This notebook gives an example of how to calculate protein properties for a list of proteins. The main features demonstrated are:
Information retrieval from UniProt and linking residue numbering sites to structure
Calculating or predicting global protein sequence and structure properties
Calculating or predicting local protein sequence and structure properties
<div class="alert alert-info">
**Input:** List of gene IDs
</div>
<div class="alert alert-info">
**Output:** Representative protein structures and properties associated with them
</div>
Imports
End of explanation
"""
# Create logger
logger = logging.getLogger()
logger.setLevel(logging.INFO) # SET YOUR LOGGING LEVEL HERE #
# Other logger stuff for Jupyter notebooks
handler = logging.StreamHandler(sys.stderr)
formatter = logging.Formatter('[%(asctime)s] [%(name)s] %(levelname)s: %(message)s', datefmt="%Y-%m-%d %H:%M")
handler.setFormatter(formatter)
logger.handlers = [handler]
"""
Explanation: Logging
Set the logging level in logger.setLevel(logging.<LEVEL_HERE>) to specify how verbose you want the pipeline to be. Debug is most verbose.
CRITICAL
Only really important messages shown
ERROR
Major errors
WARNING
Warnings that don't affect running of the pipeline
INFO (default)
Info such as the number of structures mapped per gene
DEBUG
Really detailed information that will print out a lot of stuff
<div class="alert alert-warning">
**Warning:**
`DEBUG` mode prints out a large amount of information, especially if you have a lot of genes. This may stall your notebook!
</div>
End of explanation
"""
# SET FOLDERS AND DATA HERE
import tempfile
ROOT_DIR = tempfile.gettempdir()
PROJECT = 'ssbio_protein_properties'
LIST_OF_GENES = ['b1276', 'b0118']
# Create the GEM-PRO project
my_gempro = GEMPRO(gem_name=PROJECT, root_dir=ROOT_DIR, genes_list=LIST_OF_GENES, pdb_file_type='pdb')
"""
Explanation: Initialization
Set these three things:
ROOT_DIR
The directory where a folder named after your PROJECT will be created
PROJECT
Your project name
LIST_OF_GENES
Your list of gene IDs
A directory will be created in ROOT_DIR with your PROJECT name. The folders are organized like so:
```
ROOT_DIR
└── PROJECT
├── data # General storage for pipeline outputs
├── model # SBML and GEM-PRO models are stored here
├── genes # Per gene information
│ ├── <gene_id1> # Specific gene directory
│ │ └── protein
│ │ ├── sequences # Protein sequence files, alignments, etc.
│ │ └── structures # Protein structure files, calculations, etc.
│ └── <gene_id2>
│ └── protein
│ ├── sequences
│ └── structures
├── reactions # Per reaction information
│ └── <reaction_id1> # Specific reaction directory
│ └── complex
│ └── structures # Protein complex files
└── metabolites # Per metabolite information
└── <metabolite_id1> # Specific metabolite directory
└── chemical
└── structures # Metabolite 2D and 3D structure files
```
<div class="alert alert-info">**Note:** Methods for protein complexes and metabolites are still in development.</div>
End of explanation
"""
# UniProt mapping
my_gempro.uniprot_mapping_and_metadata(model_gene_source='ENSEMBLGENOME_ID')
print('Missing UniProt mapping: ', my_gempro.missing_uniprot_mapping)
my_gempro.df_uniprot_metadata.head()
# Set representative sequences
my_gempro.set_representative_sequence()
print('Missing a representative sequence: ', my_gempro.missing_representative_sequence)
my_gempro.df_representative_sequences.head()
"""
Explanation: Mapping gene ID --> sequence
First, we need to map these IDs to their protein sequences. There are 2 ID mapping services provided to do this - through KEGG or UniProt. The end goal is to map a UniProt ID to each ID, since there is a comprehensive mapping (and some useful APIs) between UniProt and the PDB.
<p><div class="alert alert-info">**Note:** You only need to map gene IDs using one service. However you can run both if some genes don't map in one service and do map in another!</div></p>
End of explanation
"""
# Mapping using the PDBe best_structures service
my_gempro.map_uniprot_to_pdb(seq_ident_cutoff=.3)
my_gempro.df_pdb_ranking.head()
# Mapping using BLAST
my_gempro.blast_seqs_to_pdb(all_genes=True, seq_ident_cutoff=.7, evalue=0.00001)
my_gempro.df_pdb_blast.head(2)
"""
Explanation: Mapping representative sequence --> structure
These are the ways to map sequence to structure:
Use the UniProt ID and their automatic mappings to the PDB
BLAST the sequence to the PDB
Make homology models or
Map to existing homology models
You can only utilize option #1 to map to PDBs if there is a mapped UniProt ID set in the representative sequence. If not, you'll have to BLAST your sequence to the PDB or make a homology model. You can also run both for maximum coverage.
End of explanation
"""
import pandas as pd
import os.path as op
# Creating manual mapping dictionary for ECOLI I-TASSER models
homology_models = '/home/nathan/projects_archive/homology_models/ECOLI/zhang/'
homology_models_df = pd.read_csv('/home/nathan/projects_archive/homology_models/ECOLI/zhang_data/160804-ZHANG_INFO.csv')
tmp = homology_models_df[['zhang_id','model_file','m_gene']].drop_duplicates()
tmp = tmp[pd.notnull(tmp.m_gene)]
homology_model_dict = {}
for i,r in tmp.iterrows():
homology_model_dict[r['m_gene']] = {r['zhang_id']: {'model_file':op.join(homology_models, r['model_file']),
'file_type':'pdb'}}
my_gempro.get_manual_homology_models(homology_model_dict)
# Creating manual mapping dictionary for ECOLI SUNPRO models
homology_models = '/home/nathan/projects_archive/homology_models/ECOLI/sunpro/'
homology_models_df = pd.read_csv('/home/nathan/projects_archive/homology_models/ECOLI/sunpro_data/160609-SUNPRO_INFO.csv')
tmp = homology_models_df[['sunpro_id','model_file','m_gene']].drop_duplicates()
tmp = tmp[pd.notnull(tmp.m_gene)]
homology_model_dict = {}
for i,r in tmp.iterrows():
homology_model_dict[r['m_gene']] = {r['sunpro_id']: {'model_file':op.join(homology_models, r['model_file']),
'file_type':'pdb'}}
my_gempro.get_manual_homology_models(homology_model_dict)
"""
Explanation: Homology models
Below, we are mapping to previously generated homology models for E. coli. If you are running this as a tutorial, they won't exist on your computer, so you can skip these steps.
End of explanation
"""
# Download all mapped PDBs and gather the metadata
my_gempro.pdb_downloader_and_metadata()
my_gempro.df_pdb_metadata.head(2)
# Set representative structures
my_gempro.set_representative_structure()
my_gempro.df_representative_structures.head()
"""
Explanation: Downloading and ranking structures
<div class="alert alert-warning">
**Warning:**
Downloading all PDBs takes a while, since they are also parsed for metadata. You can skip this step and just set representative structures below if you want to minimize the number of PDBs downloaded.
</div>
End of explanation
"""
# Requires EMBOSS "pepstats" program
# See the ssbio wiki for more information: https://github.com/SBRG/ssbio/wiki/Software-Installations
# Install using:
# sudo apt-get install emboss
my_gempro.get_sequence_properties()
# Requires SCRATCH installation, replace path_to_scratch with own path to script
# See the ssbio wiki for more information: https://github.com/SBRG/ssbio/wiki/Software-Installations
my_gempro.get_scratch_predictions(path_to_scratch='scratch',
results_dir=my_gempro.data_dir,
num_cores=4)
my_gempro.find_disulfide_bridges(representatives_only=False)
# Requires DSSP installation
# See the ssbio wiki for more information: https://github.com/SBRG/ssbio/wiki/Software-Installations
my_gempro.get_dssp_annotations()
# Requires MSMS installation
# See the ssbio wiki for more information: https://github.com/SBRG/ssbio/wiki/Software-Installations
my_gempro.get_msms_annotations()
"""
Explanation: Computing and storing protein properties
End of explanation
"""
# for g in my_gempro.genes_with_a_representative_sequence:
# g.protein.representative_sequence.feature_path = '/path/to/new/feature/file.gff'
"""
Explanation: Additional annotations
Loading feature files to the representative sequence
"Features" are currently loaded directly from UniProt, but if another feature file is available for each protein, it can be loaded manually.
End of explanation
"""
# Kyte-Doolittle scale for hydrophobicity
kd = { 'A': 1.8,'R':-4.5,'N':-3.5,'D':-3.5,'C': 2.5,
'Q':-3.5,'E':-3.5,'G':-0.4,'H':-3.2,'I': 4.5,
'L': 3.8,'K':-3.9,'M': 1.9,'F': 2.8,'P':-1.6,
'S':-0.8,'T':-0.7,'W':-0.9,'Y':-1.3,'V': 4.2 }
# Use Biopython to calculate hydrophobicity using a set sliding window length
from Bio.SeqUtils.ProtParam import ProteinAnalysis
window = 7
for g in my_gempro.genes_with_a_representative_sequence:
# Create a ProteinAnalysis object -- see http://biopython.org/wiki/ProtParam
my_seq = g.protein.representative_sequence.seq_str
analysed_seq = ProteinAnalysis(my_seq)
# Calculate scale
hydrophobicity = analysed_seq.protein_scale(param_dict=kd, window=window)
# Correct list length by prepending and appending "inf" (result needs to be same length as sequence)
for i in range(window//2):
hydrophobicity.insert(0, float("Inf"))
hydrophobicity.append(float("Inf"))
# Add new annotation to the representative sequence's "letter_annotations" dictionary
g.protein.representative_sequence.letter_annotations['hydrophobicity-kd'] = hydrophobicity
"""
Explanation: Adding more properties
Additional global or local properties can be loaded after loading the saved GEM-PRO.
Make sure to add 'seq_hydrophobicity-kd' to the list of columns to be returned later on!
Example with hydrophobicity
End of explanation
"""
# Printing all global protein properties
from pprint import pprint
# Only looking at 2 genes for now, remove [:2] to gather properties for all
for g in my_gempro.genes_with_a_representative_sequence[:2]:
repseq = g.protein.representative_sequence
repstruct = g.protein.representative_structure
repchain = g.protein.representative_chain
print('Gene: {}'.format(g.id))
print('Number of structures: {}'.format(g.protein.num_structures))
print('Representative sequence: {}'.format(repseq.id))
print('Representative structure: {}'.format(repstruct.id))
print('----------------------------------------------------------------')
print('Global properties of the representative sequence:')
pprint(repseq.annotations)
print('----------------------------------------------------------------')
print('Global properties of the representative structure:')
pprint(repstruct.chains.get_by_id(repchain).seq_record.annotations)
print('****************************************************************')
print('****************************************************************')
print('****************************************************************')
"""
Explanation: Global protein properties
Properties of the entire protein sequence/structure are stored in:
The representative_sequence annotations field
The representative_structure's representative chain SeqRecord
These properties describe aspects of the entire protein, such as its molecular weight, the percentage of amino acids in a particular secondary structure, etc.
End of explanation
"""
# Looking at all features
for g in my_gempro.genes_with_a_representative_sequence[:2]:
g.id
# UniProt features
[x for x in g.protein.representative_sequence.features]
# Catalytic site atlas features
for s in g.protein.structures:
if s.structure_file:
for c in s.mapped_chains:
if s.chains.get_by_id(c).seq_record:
if s.chains.get_by_id(c).seq_record.features:
[x for x in s.chains.get_by_id(c).seq_record.features]
metal_info = []
for g in my_gempro.genes:
for f in g.protein.representative_sequence.features:
if 'metal' in f.type.lower():
res_info = g.protein.get_residue_annotations(f.location.end, use_representatives=True)
res_info['gene_id'] = g.id
res_info['seq_id'] = g.protein.representative_sequence.id
res_info['struct_id'] = g.protein.representative_structure.id
res_info['chain_id'] = g.protein.representative_chain
metal_info.append(res_info)
cols = ['gene_id', 'seq_id', 'struct_id', 'chain_id',
'seq_residue', 'seq_resnum', 'struct_residue','struct_resnum',
'seq_SS-sspro','seq_SS-sspro8','seq_RSA-accpro','seq_RSA-accpro20',
'struct_SS-dssp','struct_RSA-dssp', 'struct_ASA-dssp',
'struct_PHI-dssp', 'struct_PSI-dssp', 'struct_CA_DEPTH-msms', 'struct_RES_DEPTH-msms']
pd.DataFrame.from_records(metal_info, columns=cols).set_index(['gene_id', 'seq_id', 'struct_id', 'chain_id', 'seq_resnum'])
"""
Explanation: Local protein properties
Properties of specific residues are stored in:
The representative_sequence's letter_annotations attribute
The representative_structure's representative chain SeqRecord
Specific sites, like metal or metabolite binding sites, can be found in the representative_sequence's features attribute. This information is retrieved from UniProt. The below examples extract features for the metal binding sites.
The properties related to those sites can be retrieved using the function get_residue_annotations.
UniProt contains more information than just "sites"
End of explanation
"""
for g in my_gempro.genes:
# Gather residue numbers
metal_binding_structure_residues = []
for f in g.protein.representative_sequence.features:
if 'metal' in f.type.lower():
res_info = g.protein.get_residue_annotations(f.location.end, use_representatives=True)
metal_binding_structure_residues.append(res_info['struct_resnum'])
print(metal_binding_structure_residues)
# Display structure
view = g.protein.representative_structure.view_structure()
g.protein.representative_structure.add_residues_highlight_to_nglview(view=view, structure_resnums=metal_binding_structure_residues)
view
"""
Explanation: Column definitions
General
gene_id: Gene ID used in GEM-PRO project
seq_id: Representative protein sequence ID
struct_id: Representative protein structure ID, with REP- prepended to it. 4 letter structure IDs are experimental structures from the PDB, others are homology models
chain_id: Representative chain ID in the representative structure
General -- residue specific
seq_resnum: Residue number of the amino acid in the representative sequence
site_name: Name of the feature as defined in UniProt
seq_residue: Amino acid in the representative sequence at the residue number
struct_residue: Amino acid in the representative structure at the residue number
struct_resnum: Residue number of the amino acid in the representative structure
Residue-level properties calculated or predicted from sequence:
seq_SS-sspro: Predicted secondary structure, 3 definitions (from the SCRATCH program)
seq_SS-sspro8: Predicted secondary structure, 8 definitions (SCRATCH)
seq_RSA-accpro: Predicted exposed (e) or buried (-) residue (SCRATCH)
seq_RSA-accpro20: Predicted exposed/buried, 0 to 100 scale (SCRATCH)
Residue-level properties calculated from structure:
struct_SS-dssp: Secondary structure (DSSP program)
struct_RSA-dssp: Relative solvent accessibility (DSSP)
struct_ASA-dssp: Solvent accessibility, absolute value (DSSP)
struct_PHI-dssp: Phi angle measure (DSSP)
struct_PSI-dssp: Psi angle measure (DSSP)
struct_RES_DEPTH-msms: Calculated residue depth averaged for all atoms in the residue (MSMS program)
struct_CA_DEPTH-msms: Calculated residue depth for the carbon alpha atom (MSMS)
Visualizing residues
End of explanation
"""
# Run all sequence to structure alignments
for g in my_gempro.genes:
for s in g.protein.structures:
g.protein.align_seqprop_to_structprop(seqprop=g.protein.representative_sequence, structprop=s)
metal_info_compared = []
for g in my_gempro.genes:
for f in g.protein.representative_sequence.features:
if 'metal' in f.type.lower():
for s in g.protein.structures:
for c in s.mapped_chains:
res_info = g.protein.get_residue_annotations(seq_resnum=f.location.end,
seqprop=g.protein.representative_sequence,
structprop=s, chain_id=c,
use_representatives=False)
res_info['gene_id'] = g.id
res_info['seq_id'] = g.protein.representative_sequence.id
res_info['struct_id'] = s.id
res_info['chain_id'] = c
metal_info_compared.append(res_info)
cols = ['gene_id', 'seq_id', 'struct_id', 'chain_id',
'seq_residue', 'seq_resnum', 'struct_residue','struct_resnum',
'seq_SS-sspro','seq_SS-sspro8','seq_RSA-accpro','seq_RSA-accpro20',
'struct_SS-dssp','struct_RSA-dssp', 'struct_ASA-dssp',
'struct_PHI-dssp', 'struct_PSI-dssp', 'struct_CA_DEPTH-msms', 'struct_RES_DEPTH-msms']
pd.DataFrame.from_records(metal_info_compared, columns=cols).sort_values(by=['seq_resnum','struct_id','chain_id']).set_index(['gene_id','seq_id','seq_resnum','seq_residue','struct_id'])
"""
Explanation: Comparing features in different structures of the same protein
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.12/_downloads/plot_object_raw.ipynb | bsd-3-clause | from __future__ import print_function
import mne
import os.path as op
from matplotlib import pyplot as plt
"""
Explanation: .. _tut_raw_objects:
The :class:Raw <mne.io.Raw> data structure: continuous data
End of explanation
"""
# Load an example dataset, the preload flag loads the data into memory now
data_path = op.join(mne.datasets.sample.data_path(), 'MEG',
'sample', 'sample_audvis_raw.fif')
raw = mne.io.RawFIF(data_path, preload=True, verbose=False)
# Give the sample rate
print('sample rate:', raw.info['sfreq'], 'Hz')
# Give the size of the data matrix
print('channels x samples:', raw._data.shape)
"""
Explanation: Continuous data is stored in objects of type :class:Raw <mne.io.RawFIF>.
The core data structure is simply a 2D numpy array (channels × samples, ._data) combined with an :class:Info <mne.Info> object (.info) (see :ref:tut_info_objects).
The most common way to load continuous data is from a .fif file. For more information, see :ref:loading data from other formats <ch_convert> or :ref:creating data from scratch <tut_creating_data_structures>.
Loading continuous data
End of explanation
"""
print('Shape of data array:', raw._data.shape)
array_data = raw._data[0, :1000]
_ = plt.plot(array_data)
"""
Explanation: Information about the channels contained in the :class:Raw <mne.io.RawFIF>
object is contained in the :class:Info <mne.Info> attribute.
This is essentially a dictionary with a number of relevant fields (see
:ref:tut_info_objects).
Indexing data
There are two ways to access the data stored within :class:Raw
<mne.io.RawFIF> objects. One is by accessing the underlying data array, and
the other is to index the :class:Raw <mne.io.RawFIF> object directly.
To access the data array of :class:Raw <mne.io.Raw> objects, use the
_data attribute. Note that this is only present if preload==True.
End of explanation
"""
# Extract data from the first 5 channels, from 1 s to 3 s.
sfreq = raw.info['sfreq']
data, times = raw[:5, int(sfreq * 1):int(sfreq * 3)]
_ = plt.plot(times, data.T)
_ = plt.title('Sample channels')
"""
Explanation: You can also pass an index directly to the :class:Raw <mne.io.RawFIF>
object. This will return an array of times, as well as the data representing
those timepoints. This may be used even if the data is not preloaded:
End of explanation
"""
# Pull all MEG gradiometer channels:
# Make sure to use .copy() or it will overwrite the data
meg_only = raw.copy().pick_types(meg=True)
eeg_only = raw.copy().pick_types(meg=False, eeg=True)
# The MEG flag in particular lets you specify a string for more specificity
grad_only = raw.copy().pick_types(meg='grad')
# Or you can use custom channel names
pick_chans = ['MEG 0112', 'MEG 0111', 'MEG 0122', 'MEG 0123']
specific_chans = raw.copy().pick_channels(pick_chans)
print(meg_only, eeg_only, grad_only, specific_chans, sep='\n')
"""
Explanation: Selecting subsets of channels and samples
It is possible to use more intelligent indexing to extract data, using
channel names, types or time ranges.
End of explanation
"""
f, (a1, a2) = plt.subplots(2, 1)
eeg, times = eeg_only[0, :int(sfreq * 2)]
meg, times = meg_only[0, :int(sfreq * 2)]
a1.plot(times, meg[0])
a2.plot(times, eeg[0])
del eeg, meg, meg_only, grad_only, eeg_only, data, specific_chans
"""
Explanation: Notice the different scalings of these types
End of explanation
"""
raw = raw.crop(0, 50) # in seconds
print('New time range from', raw.times.min(), 's to', raw.times.max(), 's')
"""
Explanation: You can restrict the data to a specific time range
End of explanation
"""
nchan = raw.info['nchan']
raw = raw.drop_channels(['MEG 0241', 'EEG 001'])
print('Number of channels reduced from', nchan, 'to', raw.info['nchan'])
"""
Explanation: And drop channels by name
End of explanation
"""
# Create multiple :class:`Raw <mne.io.RawFIF>` objects
raw1 = raw.copy().crop(0, 10)
raw2 = raw.copy().crop(10, 20)
raw3 = raw.copy().crop(20, 40)
# Concatenate in time (also works without preloading)
raw1.append([raw2, raw3])
print('Time extends from', raw1.times.min(), 's to', raw1.times.max(), 's')
"""
Explanation: Concatenating :class:Raw <mne.io.RawFIF> objects
:class:Raw <mne.io.RawFIF> objects can be concatenated in time by using the
:func:append <mne.io.RawFIF.append> function. For this to work, they must
have the same number of channels and their :class:Info
<mne.Info> structures should be compatible.
End of explanation
"""
|
adrianstaniec/deep-learning | 04_intro-to-tensorflow/intro_to_tensorflow.ipynb | mit | import hashlib
import os
import pickle
from urllib.request import urlretrieve
import numpy as np
from PIL import Image
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelBinarizer
from sklearn.utils import resample
from tqdm import tqdm
from zipfile import ZipFile
print('All modules imported.')
"""
Explanation: <h1 align="center">TensorFlow Neural Network Lab</h1>
<img src="image/notmnist.png">
In this lab, you'll use all the tools you learned from Introduction to TensorFlow to label images of English letters! The data you are using, <a href="http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html">notMNIST</a>, consists of images of a letter from A to J in different fonts.
The above images are a few examples of the data you'll be training on. After training the network, you will compare your prediction model against test data. Your goal, by the end of this lab, is to make predictions against that test set with at least an 80% accuracy. Let's jump in!
To start this lab, you first need to import all the necessary modules. Run the code below. If it runs successfully, it will print "All modules imported".
End of explanation
"""
def download(url, file):
"""
Download file from <url>
:param url: URL to file
:param file: Local file path
"""
if not os.path.isfile(file):
print('Downloading ' + file + '...')
urlretrieve(url, file)
print('Download Finished')
# Download the training and test dataset.
download('https://s3.amazonaws.com/udacity-sdc/notMNIST_train.zip', 'notMNIST_train.zip')
download('https://s3.amazonaws.com/udacity-sdc/notMNIST_test.zip', 'notMNIST_test.zip')
# Make sure the files aren't corrupted
assert hashlib.md5(open('notMNIST_train.zip', 'rb').read()).hexdigest() == 'c8673b3f28f489e9cdf3a3d74e2ac8fa',\
'notMNIST_train.zip file is corrupted. Remove the file and try again.'
assert hashlib.md5(open('notMNIST_test.zip', 'rb').read()).hexdigest() == '5d3c7e653e63471c88df796156a9dfa9',\
'notMNIST_test.zip file is corrupted. Remove the file and try again.'
# Wait until you see that all files have been downloaded.
print('All files downloaded.')
def uncompress_features_labels(file):
"""
Uncompress features and labels from a zip file
:param file: The zip file to extract the data from
"""
features = []
labels = []
with ZipFile(file) as zipf:
# Progress Bar
filenames_pbar = tqdm(zipf.namelist(), unit='files')
# Get features and labels from all files
for filename in filenames_pbar:
# Check if the file is a directory
if not filename.endswith('/'):
with zipf.open(filename) as image_file:
image = Image.open(image_file)
image.load()
# Load image data as 1 dimensional array
# We're using float32 to save on memory space
feature = np.array(image, dtype=np.float32).flatten()
# Get the letter from the filename. This is the letter of the image.
label = os.path.split(filename)[1][0]
features.append(feature)
labels.append(label)
return np.array(features), np.array(labels)
# Get the features and labels from the zip files
train_features, train_labels = uncompress_features_labels('notMNIST_train.zip')
test_features, test_labels = uncompress_features_labels('notMNIST_test.zip')
# Limit the amount of data to work with a docker container
docker_size_limit = 150000
train_features, train_labels = resample(train_features, train_labels, n_samples=docker_size_limit)
# Set flags for feature engineering. This will prevent you from skipping an important step.
is_features_normal = False
is_labels_encod = False
# Wait until you see that all features and labels have been uncompressed.
print('All features and labels uncompressed.')
"""
Explanation: The notMNIST dataset is too large for many computers to handle. It contains 500,000 images for just training. You'll be using a subset of this data, 15,000 images for each label (A-J).
End of explanation
"""
# Problem 1 - Implement Min-Max scaling for grayscale image data
def normalize_grayscale(image_data):
"""
Normalize the image data with Min-Max scaling to a range of [0.1, 0.9]
:param image_data: The image data to be normalized
:return: Normalized image data
"""
arr = np.array(image_data)
arr = (arr - arr.min())/(arr.max() - arr.min())
arr = arr * 0.8 + 0.1
return arr.tolist()
### DON'T MODIFY ANYTHING BELOW ###
# Test Cases
np.testing.assert_array_almost_equal(
normalize_grayscale(np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 255])),
[0.1, 0.103137254902, 0.106274509804, 0.109411764706, 0.112549019608, 0.11568627451, 0.118823529412, 0.121960784314,
0.125098039216, 0.128235294118, 0.13137254902, 0.9],
decimal=3)
np.testing.assert_array_almost_equal(
normalize_grayscale(np.array([0, 1, 10, 20, 30, 40, 233, 244, 254,255])),
[0.1, 0.103137254902, 0.13137254902, 0.162745098039, 0.194117647059, 0.225490196078, 0.830980392157, 0.865490196078,
0.896862745098, 0.9])
print(train_features)
if not is_features_normal:
train_features = normalize_grayscale(train_features)
test_features = normalize_grayscale(test_features)
is_features_normal = True
print('Tests Passed!')
if not is_labels_encod:
# Turn labels into numbers and apply One-Hot Encoding
encoder = LabelBinarizer()
encoder.fit(train_labels)
train_labels = encoder.transform(train_labels)
test_labels = encoder.transform(test_labels)
# Change to float32, so it can be multiplied against the features in TensorFlow, which are float32
train_labels = train_labels.astype(np.float32)
test_labels = test_labels.astype(np.float32)
is_labels_encod = True
print('Labels One-Hot Encoded')
assert is_features_normal, 'You skipped the step to normalize the features'
assert is_labels_encod, 'You skipped the step to One-Hot Encode the labels'
# Get randomized datasets for training and validation
train_features, valid_features, train_labels, valid_labels = train_test_split(
train_features,
train_labels,
test_size=0.05,
random_state=832289)
print('Training features and labels randomized and split.')
# Save the data for easy access
pickle_file = 'notMNIST.pickle'
if not os.path.isfile(pickle_file):
print('Saving data to pickle file...')
try:
with open('notMNIST.pickle', 'wb') as pfile:
pickle.dump(
{
'train_dataset': train_features,
'train_labels': train_labels,
'valid_dataset': valid_features,
'valid_labels': valid_labels,
'test_dataset': test_features,
'test_labels': test_labels,
},
pfile, pickle.HIGHEST_PROTOCOL)
except Exception as e:
print('Unable to save data to', pickle_file, ':', e)
raise
print('Data cached in pickle file.')
"""
Explanation: <img src="image/Mean_Variance_Image.png" style="height: 75%;width: 75%; position: relative; right: 5%">
Problem 1
The first problem involves normalizing the features for your training and test data.
Implement Min-Max scaling in the normalize_grayscale() function to a range of a=0.1 and b=0.9. After scaling, the values of the pixels in the input data should range from 0.1 to 0.9.
Since the raw notMNIST image data is in grayscale, the current values range from a min of 0 to a max of 255.
Min-Max Scaling:
$
X'=a+{\frac {\left(X-X_{\min }\right)\left(b-a\right)}{X_{\max }-X_{\min }}}
$
If you're having trouble solving problem 1, you can view the solution here.
End of explanation
"""
%matplotlib inline
# Load the modules
import pickle
import math
import numpy as np
import tensorflow as tf
from tqdm import tqdm
import matplotlib.pyplot as plt
# Reload the data
pickle_file = 'notMNIST.pickle'
with open(pickle_file, 'rb') as f:
pickle_data = pickle.load(f)
train_features = pickle_data['train_dataset']
train_labels = pickle_data['train_labels']
valid_features = pickle_data['valid_dataset']
valid_labels = pickle_data['valid_labels']
test_features = pickle_data['test_dataset']
test_labels = pickle_data['test_labels']
del pickle_data # Free up memory
print('Data and modules loaded.')
"""
Explanation: Checkpoint
All your progress is now saved to the pickle file. If you need to leave and comeback to this lab, you no longer have to start from the beginning. Just run the code block below and it will load all the data and modules required to proceed.
End of explanation
"""
# All the pixels in the image (28 * 28 = 784)
features_count = 784
# All the labels
labels_count = 10
features = tf.placeholder(tf.float32, [None, features_count])
labels = tf.placeholder(tf.float32, [None, labels_count])
weights = tf.Variable(tf.truncated_normal([features_count, labels_count]))
biases = tf.Variable(tf.zeros([labels_count]))
### DON'T MODIFY ANYTHING BELOW ###
#Test Cases
from tensorflow.python.ops.variables import Variable
assert features._op.name.startswith('Placeholder'), 'features must be a placeholder'
assert labels._op.name.startswith('Placeholder'), 'labels must be a placeholder'
assert isinstance(weights, Variable), 'weights must be a TensorFlow variable'
assert isinstance(biases, Variable), 'biases must be a TensorFlow variable'
assert features._shape == None or (\
features._shape.dims[0].value is None and\
features._shape.dims[1].value in [None, 784]), 'The shape of features is incorrect'
assert labels._shape == None or (\
labels._shape.dims[0].value is None and\
labels._shape.dims[1].value in [None, 10]), 'The shape of labels is incorrect'
assert weights._variable._shape == (784, 10), 'The shape of weights is incorrect'
assert biases._variable._shape == (10), 'The shape of biases is incorrect'
assert features._dtype == tf.float32, 'features must be type float32'
assert labels._dtype == tf.float32, 'labels must be type float32'
# Feed dicts for training, validation, and test session
train_feed_dict = {features: train_features, labels: train_labels}
valid_feed_dict = {features: valid_features, labels: valid_labels}
test_feed_dict = {features: test_features, labels: test_labels}
# Linear Function WX + b
logits = tf.matmul(features, weights) + biases
prediction = tf.nn.softmax(logits)
# Cross entropy
cross_entropy = -tf.reduce_sum(labels * tf.log(prediction), reduction_indices=1)
# Training loss
loss = tf.reduce_mean(cross_entropy)
# Create an operation that initializes all variables
init = tf.global_variables_initializer()
# Test Cases
with tf.Session() as session:
session.run(init)
session.run(loss, feed_dict=train_feed_dict)
session.run(loss, feed_dict=valid_feed_dict)
session.run(loss, feed_dict=test_feed_dict)
biases_data = session.run(biases)
assert not np.count_nonzero(biases_data), 'biases must be zeros'
print('Tests Passed!')
# Determine if the predictions are correct
is_correct_prediction = tf.equal(tf.argmax(prediction, 1), tf.argmax(labels, 1))
# Calculate the accuracy of the predictions
accuracy = tf.reduce_mean(tf.cast(is_correct_prediction, tf.float32))
print('Accuracy function created.')
"""
Explanation: Problem 2
Now it's time to build a simple neural network using TensorFlow. Here, your network will be just an input layer and an output layer.
<img src="image/network_diagram.png" style="height: 40%;width: 40%; position: relative; right: 10%">
For the input here, the images have been flattened into a vector of $28 \times 28 = 784$ features. Then, we're trying to predict which letter the image shows, so there are 10 output units, one for each label. Of course, feel free to add hidden layers if you want, but this notebook is built to guide you through a single-layer network.
For the neural network to train on your data, you need the following <a href="https://www.tensorflow.org/resources/dims_types.html#data-types">float32</a> tensors:
- features
- Placeholder tensor for feature data (train_features/valid_features/test_features)
- labels
- Placeholder tensor for label data (train_labels/valid_labels/test_labels)
- weights
- Variable Tensor with random numbers from a truncated normal distribution.
- See <a href="https://www.tensorflow.org/api_docs/python/constant_op.html#truncated_normal">tf.truncated_normal() documentation</a> for help.
- biases
- Variable Tensor with all zeros.
- See <a href="https://www.tensorflow.org/api_docs/python/constant_op.html#zeros"> tf.zeros() documentation</a> for help.
If you're having trouble solving problem 2, review "TensorFlow Linear Function" section of the class. If that doesn't help, the solution for this problem is available here.
End of explanation
"""
# results  # only defined by the commented-out grid search below; uncomment them together
# Change if you have memory restrictions
batch_size = 128
# Find the best parameters for each configuration
#results = []
#for epochs in [1,2,3,4,5]:
# for learning_rate in [0.8, 0.5, 0.1, 0.05, 0.01]:
epochs = 4
learning_rate = 0.1
### DON'T MODIFY ANYTHING BELOW ###
# Gradient Descent
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)
# The accuracy measured against the validation set
validation_accuracy = 0.0
# Measurements use for graphing loss and accuracy
log_batch_step = 50
batches = []
loss_batch = []
train_acc_batch = []
valid_acc_batch = []
with tf.Session() as session:
session.run(init)
batch_count = int(math.ceil(len(train_features)/batch_size))
for epoch_i in range(epochs):
# Progress bar
batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches')
# The training cycle
for batch_i in batches_pbar:
# Get a batch of training features and labels
batch_start = batch_i*batch_size
batch_features = train_features[batch_start:batch_start + batch_size]
batch_labels = train_labels[batch_start:batch_start + batch_size]
# Run optimizer and get loss
_, l = session.run(
[optimizer, loss],
feed_dict={features: batch_features, labels: batch_labels})
# Log every 50 batches
if not batch_i % log_batch_step:
# Calculate Training and Validation accuracy
training_accuracy = session.run(accuracy, feed_dict=train_feed_dict)
validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict)
# Log batches
previous_batch = batches[-1] if batches else 0
batches.append(log_batch_step + previous_batch)
loss_batch.append(l)
train_acc_batch.append(training_accuracy)
valid_acc_batch.append(validation_accuracy)
# Check accuracy against Validation data
validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict)
loss_plot = plt.subplot(211)
loss_plot.set_title('Loss')
loss_plot.plot(batches, loss_batch, 'g')
loss_plot.set_xlim([batches[0], batches[-1]])
acc_plot = plt.subplot(212)
acc_plot.set_title('Accuracy')
acc_plot.plot(batches, train_acc_batch, 'r', label='Training Accuracy')
acc_plot.plot(batches, valid_acc_batch, 'x', label='Validation Accuracy')
acc_plot.set_ylim([0, 1.0])
acc_plot.set_xlim([batches[0], batches[-1]])
acc_plot.legend(loc=4)
plt.tight_layout()
plt.show()
print('Validation accuracy at {}'.format(validation_accuracy))
#print('LR: {}\t Epochs: {}\t Validation accuracy at {}'.format(learning_rate, epochs, validation_accuracy))
#results.append((learning_rate, epochs, validation_accuracy))
"""
Explanation: <img src="image/Learn_Rate_Tune_Image.png" style="height: 70%;width: 70%">
Problem 3
Below are 2 parameter configurations for training the neural network. In each configuration, one of the parameters has multiple options. For each configuration, choose the option that gives the best accuracy.
Parameter configurations:
Configuration 1
* Epochs: 1
* Learning Rate:
* 0.8
* 0.5
* 0.1
* 0.05
* 0.01
Configuration 2
* Epochs:
* 1
* 2
* 3
* 4
* 5
* Learning Rate: 0.2
The code will print out a Loss and Accuracy graph, so you can see how well the neural network performed.
If you're having trouble solving problem 3, you can view the solution here.
End of explanation
"""
### DON'T MODIFY ANYTHING BELOW ###
# The accuracy measured against the test set
test_accuracy = 0.0
with tf.Session() as session:
session.run(init)
batch_count = int(math.ceil(len(train_features)/batch_size))
for epoch_i in range(epochs):
# Progress bar
batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches')
# The training cycle
for batch_i in batches_pbar:
# Get a batch of training features and labels
batch_start = batch_i*batch_size
batch_features = train_features[batch_start:batch_start + batch_size]
batch_labels = train_labels[batch_start:batch_start + batch_size]
# Run optimizer
_ = session.run(optimizer, feed_dict={features: batch_features, labels: batch_labels})
# Check accuracy against Test data
test_accuracy = session.run(accuracy, feed_dict=test_feed_dict)
assert test_accuracy >= 0.80, 'Test accuracy at {}, should be equal to or greater than 0.80'.format(test_accuracy)
print('Nice Job! Test Accuracy is {}'.format(test_accuracy))
"""
Explanation: Test
You're going to test your model against your hold out dataset/testing data. This will give you a good indicator of how well the model will do in the real world. You should have a test accuracy of at least 80%.
End of explanation
"""
|
tcstewar/testing_notebooks | Learning and Adjusting Tuning Curves.ipynb | gpl-2.0 | %matplotlib inline
import pylab # plotting
import seaborn # plotting
import numpy as np # math functions
import nengo # neural modelling
"""
Explanation: Learning and Adjusting Tuning Curves
This is a quick notebook just to sketch out some initial stages of looking at modelling Aaron Batista's data on training macaques to use BCIs, and how that interacts with the represented low-dimensional manifolds in M1.
First, we start with the required dependencies. They are all Python libraries that can be installed with pip (e.g. pip install nengo)
End of explanation
"""
model = nengo.Network()
with model:
m1 = nengo.Ensemble(n_neurons=500, dimensions=3)
"""
Explanation: Now we build our actual model. We start by defining M1. Even though I'll only be extracting 2 dimensions out of this neural activity, I assume that it's probably encoding more than that, so let's go with saying that it's a 3-dimensional manifold.
End of explanation
"""
with model:
pmc = nengo.Ensemble(n_neurons=500, dimensions=6)
# function to approximate
def starting_map(x):
return x[0], x[1], x[2]
c = nengo.Connection(pmc, m1, function=starting_map,
learning_rule_type=nengo.PES(learning_rate=1e-5))
"""
Explanation: M1 gets its inputs from earlier motor areas, so let's add that in. We'll call it PMC (or maybe SMA), and we'll assume that it's doing a few different things, so let's arbitrarily pick that it's a 6-dimensional manifold.
We then connect it to M1. When we do this, we can specify the relationship that we want between these manifolds and Nengo will find the synaptic connection weights that best approximate that mapping. This mapping can be a non-linear function (although the more non-linear it is, the less accurately it'll approximate that function). Here, we just do a linear function that grabs the first 3 dimensions from pmc and sends it to m1.
This is the connection that will actually end up being adjusted during learning, so we also define a learning rule. This is the PES rule, which is really just standard delta rule (i.e. a supervised learning rule on just that one set of connections, which ends up being backprop without backpropagation).
End of explanation
"""
with model:
stim = nengo.Node(nengo.processes.WhiteSignal(period=500, high=5), size_out=6)
nengo.Connection(stim, pmc)
"""
Explanation: We should have some actual input to our system. Let's just do a random band-limited white noise signal, with a maximum frequency of 5Hz. Note that this input is a 6-dimensional input (i.e. it's in the low-D manifold space, not in the 500-D neurons space). The first 2 dimensions of this we will consider to be our target X,Y location that we want to decode out of the M1 representation.
Note that because of how we set up our connection above, the initial neuron model will be one that does send that information (the values we want to decode) to M1.
End of explanation
"""
class BCINode(nengo.Node):
def __init__(self, ensemble, dimensions, seed=1):
ensemble.seed = seed
self.decoder = self.get_decoder(ensemble)[:dimensions]
super(BCINode, self).__init__(self.decode, size_in=ensemble.n_neurons, size_out=dimensions)
# defines the behaviour of this Node in the running model
def decode(self, t, x):
return np.dot(self.decoder, x)
# use nengo to compute the ideal decoder for this neural population
def get_decoder(self, ens):
assert ens.seed is not None
net = nengo.Network(add_to_container=False)
net.ensembles.append(ens)
with net:
c = nengo.Connection(ens, ens)
sim = nengo.Simulator(net, progress_bar=False)
return sim.data[c].weights
with model:
bci = BCINode(m1, dimensions=2)
nengo.Connection(m1.neurons, bci)
"""
Explanation: Now we define our BCI node. This will take in spiking data from an ensemble of neurons, and apply some linear transform on the spikes, projecting it down into some smaller space. The begin with, this calls into nengo to compute the ideal default decoding. However, if we change this self.decoder, then we change the mapping that the model has to learn.
Note: this is using a few different rather esoteric Nengo tricks, so probably isn't all that readable. But the final result is something that has as input spikes from N neurons, and as output has a 2-element vector that is formed by linearly combining the spike trains. By default it's using the same linear combination that the full model is using (i.e. as if we were able to find the actual mapping that the macaque is using), but we'll change that to a different mapping that it has to learn.
End of explanation
"""
bci.decoder = np.vstack([bci.decoder[1], bci.decoder[0]])
"""
Explanation: If we left it like this, then the way we're decoding the spikes in the BCI is exactly what the neural model is already doing, so it would give perfect behaviour instantly and there'd be nothing to learn.
Let's give it something to learn by swapping the 2 dimensions being decoded. That is, we're staying in manifold, but swapping dimension 1 and dimension 2
End of explanation
"""
with model:
# population representing the error signal
error = nengo.Ensemble(n_neurons=500, dimensions=3)
# feedback from the BCI as to where the cursor actually went to
nengo.Connection(bci, error[:2], transform=1)
# minus the actual desired location
nengo.Connection(pmc[:2], error[:2], transform=-1)
# use this difference to drive the learning from pmc to m1
nengo.Connection(error, c.learning_rule, transform=[[0,1,0],[1,0,0],[0,0,1]])
"""
Explanation: Now we need to give the system a learning signal. The solution provided here is cheating a fair bit, in that it already knows which direction to change things to improve the results. A more complete solution would require learning to characterize the relationship between the observed change in cursor position and the desired change. This is exactly the sort of learning we've done elsewhere in the adaptive Jacobian kinematics learning models (such as http://rspb.royalsocietypublishing.org/content/283/1843/20162134) but for this demonstration I'm cheating and skipping that part.
End of explanation
"""
with model:
p_out = nengo.Probe(bci) # the output from the BCI
p_stim = nengo.Probe(stim[:2]) # the desired target location
p_spikes = nengo.Probe(m1.neurons) # the spike data in m1
"""
Explanation: Finally, we mark particular data to be recorded during the model run.
End of explanation
"""
sim = nengo.Simulator(model)
sim.run(500)
"""
Explanation: Now we run the simulation for 500 seconds.
End of explanation
"""
pylab.plot(sim.trange(), sim.data[p_out], label='BCI output')
pylab.gca().set_color_cycle(None)
pylab.plot(sim.trange(), sim.data[p_stim], linestyle='--', label='desired')
pylab.xlim(0,1)
pylab.legend(loc='best')
pylab.show()
"""
Explanation: Let's see what it's doing at a behavioural level. First, we plot behaviour in the first second
End of explanation
"""
pylab.plot(sim.trange(), sim.data[p_out], label='BCI output')
pylab.gca().set_color_cycle(None)
pylab.plot(sim.trange(), sim.data[p_stim], linestyle='--', label='desired')
pylab.xlim(100, 101)
pylab.legend(loc='best')
pylab.show()
"""
Explanation: As expected, the initial output of the system is exactly backwards (the first dimension and the second dimension are swapped).
What happens after 100 seconds of learning?
End of explanation
"""
pylab.plot(sim.trange(), sim.data[p_out], label='BCI output')
pylab.gca().set_color_cycle(None)
pylab.plot(sim.trange(), sim.data[p_stim], linestyle='--', label='desired')
pylab.xlim(200, 201)
pylab.legend(loc='best')
pylab.show()
"""
Explanation: Things have changed, but it's still rather wrong.
What happens after 200 seconds of learning?
End of explanation
"""
pylab.plot(sim.trange(), sim.data[p_out], label='BCI output')
pylab.gca().set_color_cycle(None)
pylab.plot(sim.trange(), sim.data[p_stim], linestyle='--', label='desired')
pylab.xlim(499, 500)
pylab.legend(loc='best')
pylab.show()
"""
Explanation: Getting better.
What happens after 500 seconds of learning?
End of explanation
"""
def plot_tuning(index, t_start=0, t_end=10, cmap='Reds'):
times = sim.trange()
spikes = sim.data[p_spikes][:,index]
spikes = np.where(times>t_end, 0, spikes)
spikes = np.where(times<t_start, 0, spikes)
value = sim.data[p_stim]
v = value[np.where(spikes>0)]
seaborn.kdeplot(v[:,0], v[:,1], shade=True, shade_lowest=False, cmap=cmap, alpha=1.0)
"""
Explanation: It has successfully learned the new mapping.
Tuning curves
The whole point of all this was to see what's happening to the tuning curves.
To plot these, the simplest thing to do is just do a density plot of what target x,y locations the neurons spike for at different times in the experiment.
End of explanation
"""
index = 2
pylab.figure(figsize=(6,6))
plot_tuning(index, t_start=0, t_end=100, cmap='Reds')
plot_tuning(index, t_start=400, t_end=500, cmap='Blues')
"""
Explanation: Here is the tuning curve for neuron #2, with the initial tuning curve in red (using data from t=0 to t=100) and the final tuning curve in blue (using data from t=400 to t=500).
End of explanation
"""
index = 3
pylab.figure(figsize=(6,6))
plot_tuning(index, t_start=0, t_end=100, cmap='Reds')
plot_tuning(index, t_start=400, t_end=500, cmap='Blues')
"""
Explanation: Here's a neuron that didn't change much
End of explanation
"""
index = 5
pylab.figure(figsize=(6,6))
plot_tuning(index, t_start=0, t_end=100, cmap='Reds')
plot_tuning(index, t_start=400, t_end=500, cmap='Blues')
index = 6
pylab.figure(figsize=(6,6))
plot_tuning(index, t_start=0, t_end=100, cmap='Reds')
plot_tuning(index, t_start=400, t_end=500, cmap='Blues')
"""
Explanation: And some more neurons.
End of explanation
"""
|
tacaswell/conda-prescriptions | scripts/notebooks/DAG build and runtime requirements for NSLS-II stack.ipynb | bsd-3-clause | latest_tagged = defaultdict(dict)
for lib_name, all_versions in all_recipes.items():
versions = sorted(all_versions.keys())
if len(versions) == 1:
version = versions[0]
else:
if 'dev' in versions:
versions.remove('dev')
version = versions[-1]
latest_tagged[lib_name][version] = all_versions[version]
for recipe, version in sorted(latest_tagged.items()):
print(recipe, sorted(version.keys()))
dev_only = defaultdict(dict)
for lib_name, all_versions in all_recipes.items():
if 'dev' in all_versions.keys():
dev_only[lib_name] = all_versions['dev']
print(sorted(dev_only.keys()))
import networkx as nx
def add_requirements(graph, requirements_list, target_lib):
graph.add_node(target_lib)
for req in requirements_list:
graph.add_node(req)
graph.add_edge(req, target_lib)
fig, ax = plt.subplots(ncols=2, nrows=len(dev_only), figsize=(10,4*len(dev_only)))
all_runs_dev_only = nx.DiGraph()
all_builds_dev_only = nx.DiGraph()
for row, (lib, meta) in enumerate(sorted(dev_only.items())):
run = nx.DiGraph()
build = nx.DiGraph()
build_reqs = meta['requirements']['build']
run_reqs = meta['requirements']['run']
add_requirements(build, build_reqs, lib)
add_requirements(run, run_reqs, lib)
add_requirements(all_builds_dev_only, build_reqs, lib)
add_requirements(all_runs_dev_only, run_reqs, lib)
build_ax = ax[row][0]
row_ax = ax[row][1]
nx.draw_networkx(build, ax=build_ax)
nx.draw_networkx(run, ax=row_ax)
build_ax.set_title("%s Build requirements" % lib)
row_ax.set_title("%s Run requirements" % lib)
nx.is_directed_acyclic_graph(all_runs_dev_only)
nx.is_directed_acyclic_graph(all_builds_dev_only)
"""
Explanation: Looks good! We should separate these into latest_tagged and dev_only
End of explanation
"""
fig, ax = plt.subplots(ncols=2, nrows=len(latest_tagged), figsize=(10,4*len(latest_tagged)))
all_runs_latest_tagged = nx.DiGraph()
all_builds_latest_tagged = nx.DiGraph()
for row, (lib, version) in enumerate(sorted(latest_tagged.items())):
meta = list(version.values())[0]
run = nx.DiGraph()
build = nx.DiGraph()
build_ax = ax[row][0]
row_ax = ax[row][1]
reqs = meta.get('requirements')
build_ax.set_title("%s Build requirements" % lib)
row_ax.set_title("%s Run requirements" % lib)
if reqs:
build_reqs = reqs.get('build')
run_reqs = reqs.get('run')
if build_reqs:
add_requirements(build, build_reqs, lib)
add_requirements(all_builds_latest_tagged, build_reqs, lib)
nx.draw_networkx(build, ax=build_ax)
if run_reqs:
add_requirements(run, run_reqs, lib)
add_requirements(all_runs_latest_tagged, run_reqs, lib)
nx.draw_networkx(run, ax=row_ax)
nx.is_directed_acyclic_graph(all_builds_latest_tagged)
nx.is_directed_acyclic_graph(all_runs_latest_tagged)
fig, ax = plt.subplots(figsize=(50,50))
nx.draw_networkx(all_builds_dev_only, ax=ax)
ax.set_title("All build requirements, dev recipes only")
fig, ax = plt.subplots(figsize=(50,50))
nx.draw_networkx(all_runs_dev_only, ax=ax)
ax.set_title("All runtime requirements, dev recipes only")
fig, ax = plt.subplots(figsize=(50,50))
nx.draw_networkx(all_builds_latest_tagged, ax=ax)
ax.set_title("All build requirements, latest tagged recipes")
fig, ax = plt.subplots(figsize=(50,50))
nx.draw_networkx(all_runs_latest_tagged, ax=ax)
ax.set_title("All runtime requirements, latest tagged recipes")
sorted(all_runs_latest_tagged.nodes())
all_runs_latest_tagged.subgraph??
g = all_runs_latest_tagged.subgraph('dataportal')
nx.draw_networkx(g)
all_runs_latest_tagged.edges()
all_runs_latest_tagged['clint']
"""
Explanation: Do the same for latest_tagged + dev
End of explanation
"""
|
berquist/ipython_notebooks_for_qc | notebooks/Frequency Calculations.ipynb | mpl-2.0 | import numpy as np
with open('qm_files/drop_0375_0qm_0mm.out') as f:
contents_qmoutput = f.read()
"""
Explanation: Frequency Calculations
End of explanation
"""
contents_splitlines = contents_qmoutput.splitlines()
contents_splitlines[340:370]
"""
Explanation: Advanced: calculate frequencies directly from the mass-weighted Hessian
How can you prove to yourself that the frequencies printed at the end of a QM output are correct? We can take the Hessian (mass-weighted or otherwise), which is the second derivative of the energy with respect to nuclear displacements,
$H_{ij} = \frac{\partial^{2} E}{\partial x_{i} \partial x_{j}}$,
diagonalize it, and multiply the eigenvalues by some constants to obtain vibrational (normal mode) frequencies in wavenumbers.
A useful resource for understanding this is given by http://sirius.chem.vt.edu/wiki/doku.php?id=crawdad:programming:project2, which most of this section is derived from.
First, we need to locate the Hessian. If we can only find one that isn't mass-weighted, we'll need to do that too.
End of explanation
"""
# lists are iterable, but not `iterators` themselves.
contents_iter = iter(contents_splitlines)
for line in contents_iter:
if 'Hessian of the SCF Energy' in line:
while 'Gradient time:' not in line:
print(line)
line = next(contents_iter)
"""
Explanation: There's a line Hessian of the SCF Energy, which sounds like it might not be mass-weighted. First, let's try and extract it.
End of explanation
"""
N = 3
hessian_scf = np.zeros(shape=(3*N, 3*N))
"""
Explanation: The Hessian matrix should have $3N * 3N$ elements, where $N$ is the number of atoms; this is because each atom contributes 3 Cartesian coordinates, and it is two-dimensional because of the double derivative. By this logic, the gradient may be represented by a length $3N$ vector, and cubic force constants by an array of shape $[3N, 3N, 3N]$.
Now that we can extract only the lines that contain the Hessian information, let's make a NumPy array that will hold the final results.
End of explanation
"""
ncols_block = 6
"""
Explanation: Here come the tricky Python bits. For matrices, Q-Chem prints a maximum of 6 columns per block; this info is necessary for keeping track of where we are during parsing.
End of explanation
"""
contents_iter = iter(contents_splitlines)
for line in contents_iter:
if 'Hessian of the SCF Energy' in line:
# skip the line containing the header
line = next(contents_iter)
# keep track of how many "columns" there are left; better than searching
# for the gradient line
ncols_remaining = 3*N
while ncols_remaining > 0:
# this must be the column header
if len(line.split()) == min(ncols_block, ncols_remaining):
print(line)
ncols_remaining -= ncols_block
sline = line.split()
colmin, colmax = int(sline[0]) - 1, int(sline[-1])
line = next(contents_iter)
# iterate over the rows
for row in range(3*N):
print(line)
line = next(contents_iter)
"""
Explanation: Here's the machinery that will do the actual iteration over the matrix, leaving out the part where the results are stored.
End of explanation
"""
contents_iter = iter(contents_splitlines)
for line in contents_iter:
if 'Hessian of the SCF Energy' in line:
# skip the line containing the header
line = next(contents_iter)
# keep track of how many "columns" there are left; better than searching
# for the gradient line
ncols_remaining = 3*N
while ncols_remaining > 0:
# this must be the column header
if len(line.split()) == min(ncols_block, ncols_remaining):
ncols_remaining -= ncols_block
sline = line.split()
colmin, colmax = int(sline[0]) - 1, int(sline[-1])
line = next(contents_iter)
# iterate over the rows
for row in range(3*N):
sline = line.split()
rowidx = int(sline[0]) - 1
hessian_scf[rowidx, colmin:colmax] = list(map(float, sline[1:]))
line = next(contents_iter)
hessian_scf
"""
Explanation: An important note: the Hessian (and all other matrices in the output) starts from 1, but Python indices for lists and arrays start at 0; this is why int() - 1 shows up. But why not for colmax? Because Python slicing doesn't include the ending index: leaving colmax as the 1-based value from the column header means the slice colmin:colmax covers exactly the printed columns.
End of explanation
"""
contents_iter = iter(contents_splitlines)
for line in contents_iter:
if 'Zero point vibrational energy:' in line:
line = next(contents_iter)
line = next(contents_iter)
while 'Molecular Mass:' not in line:
print(line.split())
line = next(contents_iter)
masses = []
contents_iter = iter(contents_splitlines)
for line in contents_iter:
if 'Zero point vibrational energy:' in line:
line = next(contents_iter)
line = next(contents_iter)
while 'Molecular Mass:' not in line:
masses.append(float(line.split()[-1]))
line = next(contents_iter)
masses
"""
Explanation: The SCF Hessian has successfully been parsed from the output file and stored in a NumPy array. Since it needs to be mass-weighted, we need the masses of each atom. They're printed at the very end of the output file, and extracting them should be easier than for the Hessian.
End of explanation
"""
# First, make a copy of the original Hessian array so we don't modify that one.
hessian_mass_weighted = hessian_scf.copy()
# We know the dimension a priori to be 3*N, but what if we don't? Use an array attribute!
shape = hessian_mass_weighted.shape
# shape is a tuple, here we "unpack" it.
nrows, ncols = shape
import math
for i in range(nrows):
for j in range(ncols):
_denom = math.sqrt(masses[i // 3] * masses[j // 3])
hessian_mass_weighted[i, j] /= _denom
hessian_mass_weighted
"""
Explanation: Hopefully alarm bells are going off in your head right about now. Units! What units are we working in? Units units units!
The masses above are in amu (https://en.wikipedia.org/wiki/Atomic_mass_unit).
The SCF Hessian is certainly in atomic units (https://en.wikipedia.org/wiki/Atomic_units). Internally, all quantum chemistry programs work in atomic units. From Wikipedia:
This article deals with Hartree atomic units, where the numerical values of the following four fundamental physical constants are all unity by definition:
Electron mass $m_\text{e}$;
Elementary charge $e$;
Reduced Planck's constant $\hbar = h/(2 \pi)$;
Coulomb's constant $k_\text{e} = 1/(4 \pi \epsilon_0)$.
Since the Hessian is the double derivative of the energy with respect to nuclear diplacements (which can be considered positions or lengths), it will have units of $[\textrm{energy}][\textrm{length}]^{-2}$, which in atomic units is $E_{\textrm{h}}/a_{0}^{2}$ (hartree per bohr squared).
Now to mass-weight the SCF Hessian. The ordering along each dimension of the matrix is first atom x-coord, first atom y-coord, first atom z-coord, second atom x-coord, and so on. This means that every three columns or rows, we switch to a new atom. We'll be able to use integer division (// 3) to accomplish this cleanly.
End of explanation
"""
import scipy.linalg as spl
eigvals, eigvecs = spl.eig(hessian_mass_weighted)
print(eigvals)
"""
Explanation: We're ready to diagonalize the mass-weighted Hessian. If $\textrm{F}^{M}$ is the mass-weighted Hessian matrix, then the eigensystem defined by
$\textrm{F}^{M} \textbf{L} = \textbf{L} \Lambda$
results in the eigenvectors $\textbf{L}$, which are the normal modes of the system (the displacement vectors for each vibration, orthogonal to each other), and the eigenvalues $\Lambda$, which are the frequencies associated with each normal mode.
End of explanation
"""
bohr2m = 0.529177249e-10
hartree2joule = 4.35974434e-18
speed_of_light = 299792458
avogadro = 6.0221413e+23
vib_constant = math.sqrt((avogadro*hartree2joule*1000)/(bohr2m*bohr2m))/(2*math.pi*speed_of_light*100)
vib_constant
"""
Explanation: The Hessian is Hermitian, so we could use spl.eigh, and not return any complex values, but they're useful for illustrating a point to be made later on.
The eigenvalues of the mass-weighted Hessian will have the same units as the mass-weighted Hessian itself, which are ($[\textrm{energy}][\textrm{length}]^{-2}[\textrm{mass}]^{-1}$), or $\frac{E_{\textrm{h}}}{a_{0}^{2}m_{\textrm{e}}}$. These need to be converted to wavenumbers.
I'll save everyone the trouble of doing this conversion; I've done it once and that was enough.
End of explanation
"""
vibfreqs = np.sqrt(eigvals) * vib_constant
vibfreqs
"""
Explanation: The vibrational frequencies are related to the square roots of the eigenvalues, multiplied by the above constant:
$\omega_{i} = \textrm{constant} \times \sqrt{\lambda_{i}}$
End of explanation
"""
vibfreqs_calculated = vibfreqs[:(3*N)-5].real
vibfreqs_calculated
"""
Explanation: Interesting! Some of our frequencies are imaginary, which NumPy handles properly (math.sqrt(-1.0) would throw an error). What does this mean?
The eigenvalues of the full molecular Hessian contain not just vibrational frequencies, but those corresponding to rotational and translational motion (though one would want to convert those to periods from frequencies). For non-linear molecules, this results in $3N-6$ vibrations, after subtracting out the 3 translations and 3 rotations. For linear molecules, this is $3N-5$.
Imaginary frequencies would normally mean we aren't at a minimum on the potential energy surface (PES), but notice that these imaginary frequencies will be removed once we take into account translational and rotational motion. They could be zeroed out completely if one projects out translations and rotations from the Hessian, which we won't do for now.
End of explanation
"""
vibfreqs_printed = [621.29, 1410.25, 2498.02]
vibfreqs_printed.reverse()
%matplotlib inline
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
xticks = range(len(vibfreqs_printed))
ax.plot(xticks, vibfreqs_calculated[:-1], label='calculated', marker='o')
ax.plot(xticks, vibfreqs_printed, label='printed', marker='o')
ax.set_ylabel(r'frequency ($\mathrm{cm}^{-1}$)')
ax.legend(loc='best', fancybox=True)
"""
Explanation: How do the frequencies we've calculated compare with those printed in the output file?
End of explanation
"""
vibfreqs_calculated[:-1] - vibfreqs_printed
"""
Explanation: The plot doesn't tell us very much, since the differences are all < 1 $\textrm{cm}^{-1}$:
End of explanation
"""
|
rabernat/pyqg | docs/examples/linear_stability.ipynb | mit | import numpy as np
from numpy import pi
import matplotlib.pyplot as plt
%matplotlib inline
import pyqg
m = pyqg.LayeredModel(nx=256, nz = 2, U = [.01, -.01], V = [0., 0.], H = [1., 1.],
L=2*pi,beta=1.5, rd=1./20., rek=0.05, f=1.,delta=1.)
"""
Explanation: Built-in linear stability analysis
End of explanation
"""
evals,evecs = m.stability_analysis()
"""
Explanation: To perform linear stability analysis, we simply call pyqg's built-in method stability_analysis:
End of explanation
"""
evals = np.fft.fftshift(evals.imag,axes=(0,))
k,l = m.k*m.radii[1], np.fft.fftshift(m.l,axes=(0,))*m.radii[1]
"""
Explanation: The eigenvalues are stored in evals, and the eigenvectors in evecs. For plotting purposes, we use fftshift to reorder the entries
End of explanation
"""
argmax = evals[m.Ny/2,:].argmax()
evec = np.fft.fftshift(evecs,axes=(1))[:,m.Ny/2,argmax]
kmax = k[m.Ny/2,argmax]
x = np.linspace(0,4.*pi/kmax,100)
mag, phase = np.abs(evec), np.arctan2(evec.imag,evec.real)
"""
Explanation: It is also useful to analyze the fastest-growing mode:
End of explanation
"""
evals_fric, evecs_fric = m.stability_analysis(bottom_friction=True)
evals_fric = np.fft.fftshift(evals_fric.imag,axes=(0,))
argmax = evals_fric[m.Ny/2,:].argmax()
evec_fric = np.fft.fftshift(evecs_fric,axes=(1))[:,m.Ny/2,argmax]
kmax_fric = k[m.Ny/2,argmax]
mag_fric, phase_fric = np.abs(evec_fric), np.arctan2(evec_fric.imag,evec_fric.real)
"""
Explanation: By default, the stability analysis above is performed without bottom friction, but the stability method also supports bottom friction:
End of explanation
"""
plt.figure(figsize=(14,4))
plt.subplot(121)
plt.contour(k,l,evals,colors='k')
plt.pcolormesh(k,l,evals,cmap='Blues')
plt.colorbar()
plt.xlim(0,2.); plt.ylim(-2.,2.)
plt.clim([0.,.1])
plt.xlabel(r'$k \, L_d$'); plt.ylabel(r'$l \, L_d$')
plt.title('without bottom friction')
plt.subplot(122)
plt.contour(k,l,evals_fric,colors='k')
plt.pcolormesh(k,l,evals_fric,cmap='Blues')
plt.colorbar()
plt.xlim(0,2.); plt.ylim(-2.,2.)
plt.clim([0.,.1])
plt.xlabel(r'$k \, L_d$'); plt.ylabel(r'$l \, L_d$')
plt.title('with bottom friction')
plt.figure(figsize=(8,4))
plt.plot(k[m.Ny/2,:],evals[m.Ny/2,:],'b',label='without bottom friction')
plt.plot(k[m.Ny/2,:],evals_fric[m.Ny/2,:],'b--',label='with bottom friction')
plt.xlim(0.,2.)
plt.legend()
plt.xlabel(r'$k\,L_d$')
plt.ylabel(r'Growth rate')
"""
Explanation: Plotting growth rates
End of explanation
"""
plt.figure(figsize=(12,5))
plt.plot(x,mag[0]*np.cos(kmax*x + phase[0]),'b',label='Layer 1')
plt.plot(x,mag[1]*np.cos(kmax*x + phase[1]),'g',label='Layer 2')
plt.plot(x,mag_fric[0]*np.cos(kmax_fric*x + phase_fric[0]),'b--')
plt.plot(x,mag_fric[1]*np.cos(kmax_fric*x + phase_fric[1]),'g--')
plt.legend(loc=8)
plt.xlabel(r'$x/L_d$'); plt.ylabel(r'$y/L_d$')
"""
Explanation: Plotting the wave structure of the most unstable modes
End of explanation
"""
|
gabll/RomeaJam | Traffik_EDA.ipynb | gpl-3.0 | def get_status(dt, category=None):
"""returns road status given specific datetime"""
if category:
return db.session.query(RoadStatus).filter(RoadStatus.timestamp > dt.strftime('%s')).\
filter(RoadStatus.timestamp < (dt+timedelta(0,60)).strftime('%s')).\
filter(RoadStatus.category == category).all()
else:
return db.session.query(RoadStatus).filter(RoadStatus.timestamp > dt.strftime('%s')).\
filter(RoadStatus.timestamp < (dt+timedelta(0,60)).strftime('%s')).all()
def get_segments(dt, category=None):
"""prints segment statuses given a specific datetime"""
for status in db.session.query(SegmentStatus).filter(SegmentStatus.timestamp > dt.strftime('%s')).\
filter(SegmentStatus.timestamp < (dt+timedelta(0,60)).strftime('%s')):
if status.segment.category == category or category == None:
print status.segment.category, status.packing_index
def printall(dt, category=None):
print get_status(dt, category)
get_segments(dt, category)
printall(datetime(2016,8,16,18,40)) #accident
printall(datetime(2016,8,16,10,43), category='Arrive')
printall(datetime(2016,8,15,19,39), category='Leave')
printall(datetime(2016,8,12,19,20), category='Leave')
printall(datetime(2016,8,12,19,20), 'Leave')
printall(datetime(2016,8,13,21,15), 'Leave') #no traffic
printall(datetime(2016,8,13,9,47), 'Arrive') #slow down
printall(datetime(2016,8,11,12,38), 'Arrive') #no traffic
printall(datetime(2016,8,11,11,20), 'Arrive')
printall(datetime(2016,8,9,8,35), 'Arrive')
printall(datetime(2016,8,7,20,0)) #serious accident
"""
Explanation: Data validation
Let's take some RoadStatus records and compare them with traffic reports posted in a popular Facebook group
End of explanation
"""
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
"""
Explanation: Exploratory Data Analysis
End of explanation
"""
qs = db.session.query(SegmentStatus).join(Segment).\
filter(SegmentStatus.timestamp > datetime(2016,8,4,0,0).strftime('%s')).\
filter(SegmentStatus.timestamp < datetime(2016,8,16,0,0).strftime('%s'))
ds = pd.read_sql(qs.statement, qs.session.bind)
ds.set_index('id', inplace=True)
ds['timestamp'] = pd.to_datetime(ds['timestamp'],unit='s')
ds.head()
#Let's check how many segments can have a packing index <> 0 in the same timestamp/category
ds = ds[(ds['packing_index'] > 0)]
ds1 = pd.DataFrame(ds['packing_index'].groupby([ds.road_status_id]).count())
ds1 = pd.DataFrame(ds1['packing_index'].groupby([ds1.packing_index]).count())
ds1.head()
"""
Explanation: SegmentStatus dataframe
End of explanation
"""
qr = db.session.query(RoadStatus).filter(RoadStatus.timestamp > datetime(2016,8,4,0,0).strftime('%s')).\
filter(RoadStatus.timestamp < datetime(2016,8,16,0,0).strftime('%s'))
dr = pd.read_sql(qr.statement, qr.session.bind)
dr.set_index('id', inplace=True)
dr['timestamp'] = pd.to_datetime(dr['timestamp'],unit='s')
dr.sort([('packing_index')], ascending=False).head()
dr.dtypes
dr.describe()
plt.figure(figsize=(18,5))
plt.subplot(1, 3, 1)
plt.xlabel('packing index')
plt.boxplot(dr[dr.packing_index>0].packing_index.reset_index()['packing_index'], showmeans=True, showfliers=True)
plt.show()
time = pd.DatetimeIndex(dr.timestamp)
dr_plt = dr.groupby([time.hour]).mean()
dr_plt.reset_index(inplace=True)
fig = plt.figure(figsize=(15,5))
ax = plt.gca()
dr_plt.plot(x='index', y='packing_index', ax=ax)
plt.title("Hourly average packing_index")
plt.ylabel('Packing index')
plt.xlabel('Hour')
ax.set_xticks(range(23))
plt.show()
time = pd.DatetimeIndex(dr.timestamp)
dr_plt = dr.groupby([time.weekday]).mean()
dr_plt.reset_index(inplace=True)
dayDict = {0:'Mon', 1:'Tue', 2:'Wed', 3:'Thu', 4:'Fri', 5:'Sat', 6:'Sun'}
def f(x):
daylabel = dayDict[x]
return daylabel
dr_plt['daylabel'] = dr_plt['index'].apply(f)
fig = plt.figure(figsize=(15,5))
ax = plt.gca()
dr_plt.plot(x='daylabel', y='packing_index', ax=ax)
plt.title("average packing_index per weekday")
plt.ylabel('Packing index')
plt.xlabel('Day')
ax.set_xticks(range(6))
plt.show()
time = pd.DatetimeIndex(dr.timestamp)
dr_plt = dr.groupby([time.day]).mean()
dr_plt.reset_index(inplace=True)
fig = plt.figure(figsize=(15,5))
ax = plt.gca()
dr_plt.plot(x='index', y='packing_index', ax=ax)
plt.title("average packing_index per day")
plt.ylabel('Packing index')
plt.xlabel('Day (August 2016)')
ax.set_xticks(range(18))
plt.show()
"""
Explanation: It's likely that there is a sort of "snake effect" in the data, i.e. the traffic flows along the road, so not all segments are jammed at the same timestamp. Because of this, I will introduce a "snake_parameter" in the RoadStatus class, so that packing_index is computed as the average of the top snake_parameter segments, ordered by packing index in descending order (see the sketch below).
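A minimal sketch of that aggregation on the SegmentStatus dataframe above (snake_parameter is a hypothetical value here):
```python
snake_parameter = 3  # hypothetical choice
top_k_mean = (ds.groupby('road_status_id')['packing_index']
                .apply(lambda s: s.sort_values(ascending=False)[:snake_parameter].mean()))
top_k_mean.head()
```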
RoadStatus dataframe
End of explanation
"""
qj = db.session.query(Jam).filter(Jam.timestamp > datetime(2016,8,4,0,0).strftime('%s')).\
filter(Jam.timestamp < datetime(2016,8,16,0,0).strftime('%s'))
dj = pd.read_sql(qj.statement, qj.session.bind)
dj.set_index('id', inplace=True)
dj['timestamp'] = pd.to_datetime(dj['timestamp'],unit='s')
dj.head()
time = pd.DatetimeIndex(dj.timestamp)
dj['day']=time.day
dj['hour']=time.hour
dj_time = dj.groupby([dj.day, dj.hour, dj.startLongitude, dj.endLongitude, dj.startLatitude, dj.endLatitude, dj.street, dj.severity, dj.color, dj.source, dj.direction]).count()
print 'Average traffic duration: %.2f min' % dj_time['timestamp'].mean()
dj_dur = pd.DataFrame(dj_time['timestamp'])
dj_dur.reset_index(inplace=True)
dj_dur = dj_dur[['hour', 'timestamp']]
dj_dur.columns=['hour', 'duration']
dj_dur = dj_dur.groupby([dj_dur.hour]).mean()
dj_dur.reset_index(inplace=True)
dj_dur.head()
fig = plt.figure(figsize=(15,5))
ax = plt.gca()
dj_dur.plot(x='hour', y='duration', ax=ax)
plt.title("Average jam duration per hour")
plt.ylabel('Duration [min]')
plt.xlabel('Hour')
ax.set_xticks(range(23))
plt.show()
pd.scatter_matrix(dr, alpha=0.2, figsize=(18, 18), diagonal='kde')
"""
Explanation: Jam dataframe
End of explanation
"""
|
SSDS-Croatia/SSDS-2017 | Day-2/segmentation/semantic_segmentation_clean.ipynb | mit | %matplotlib inline
import time
from os.path import join
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix
import utils
from data import Dataset
tf.set_random_seed(31415)
tf.logging.set_verbosity(tf.logging.ERROR)
plt.rcParams["figure.figsize"] = (15, 5)
"""
Explanation: Semantic Segmentation
In this exercise we will train an end-to-end convolutional neural network for semantic segmentation.
The goal of semantic segmentation is to classify the image on the pixel level. For each pixel
we want to determine the class of the object to which it belongs. This is different from image classification
which classifies an image as a whole and doesn't tell us the location of the objects. This is why semantic segmentation falls into the category of structured prediction problems: it answers both the 'what' and the 'where' questions, while classification tells us only 'what'. By classifying each pixel we are inferring the structure of the whole scene. Typical examples of an input image and its target labels for this problem are shown below.
Input image | Target image
-|-
1. Cityscapes dataset
The Cityscapes dataset contains a diverse set of stereo video sequences recorded in street scenes from 50 different cities, with high-quality pixel-level annotations. It contains 2975 training and 500 validation images of size 2048x1024. The test set of 1000 images is evaluated on the server and the benchmark is available here. Here we will use downsampled images of size 384x160. The original dataset has 19 classes, but we lowered that to 7 by uniting similar classes into broader categories, which makes sense given the low visibility of very small objects in the downsampled images. We also have an ignore class, which we need to exclude during training because those pixels don't belong to any class.
Download the prepared dataset here and extract it to the current directory.
ID | Class | Color
-|-|-
0 | road | purple
1 | building | grey
2 | infrastructure | yellow
3 | nature | green
4 | sky | light blue
5 | person | red
6 | vehicle | dark blue
7 | ignore | black
2. Building the graph
Let's begin by importing all the modules and setting the fixed random seed.
End of explanation
"""
batch_size = 10
num_classes = Dataset.num_classes
# create the Dataset for training and validation
train_data = Dataset('train', batch_size)
val_data = Dataset('val', batch_size, shuffle=False)
print('Train shape:', train_data.x.shape)
print('Validation shape:', val_data.x.shape)
#print('mean = ', train_data.x.mean((0,1,2)))
#print('std = ', train_data.x.std((0,1,2)))
"""
Explanation: Dataset
The Dataset class implements an iterator which returns the next batch data in each iteration. Data is already normalized to have zero mean and unit variance. The iteration is terminated when we reach the end of the dataset (one epoch).
End of explanation
"""
# store the input image dimensions
height = train_data.height
width = train_data.width
channels = train_data.channels
# create placeholders for inputs
def build_inputs():
...
"""
Explanation: Inputs
First, we will create input placeholders for Tensorflow computational graph of the model. For a supervised learning model, we need to declare placeholders which will hold input images (x) and target labels (y) of the mini-batches as we feed them to the network.
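One way build_inputs could look (a sketch, not the only valid solution):
```python
def build_inputs():
    x = tf.placeholder(tf.float32, shape=(None, height, width, channels), name='images')
    y = tf.placeholder(tf.int32, shape=(None, height, width), name='labels')
    return x, y
```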
End of explanation
"""
# helper function which applies conv2d + ReLU with filter size k
def conv(x, num_maps, k=3):
...
# helper function for 2x2 max pooling with stride=2
def pool(x):
...
# this function takes the input placeholder and the number of classes, builds the model and returns the logits
def build_model(x, num_classes):
...
"""
Explanation: Model
Now we can define the computational graph. Here we will heavily use tf.layers high level API which handles tf.Variable creation for us. The main difference here compared to the classification model is that the network is going to be fully convolutional without any fully connected layers. Brief sketch of the model we are going to define is given below.
conv3x3(32) -> 4 x (pool2x2 -> conv3x3(64) -> conv3x3(64)) -> conv1x1(7) -> resize_bilinear -> softmax() -> Loss
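A possible implementation of the helpers and the model with tf.layers (a sketch; the layer sizes follow the diagram above):
```python
def conv(x, num_maps, k=3):
    return tf.layers.conv2d(x, num_maps, k, padding='same', activation=tf.nn.relu)

def pool(x):
    return tf.layers.max_pooling2d(x, 2, 2)

def build_model(x, num_classes):
    net = conv(x, 32)
    for _ in range(4):
        net = pool(net)
        net = conv(net, 64)
        net = conv(net, 64)
    logits = tf.layers.conv2d(net, num_classes, 1)            # 1x1 convolution
    logits = tf.image.resize_bilinear(logits, [height, width])
    return logits
```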
End of explanation
"""
# this function takes logits and targets (y) and builds the loss subgraph
def build_loss(logits, y):
...
"""
Explanation: Loss
Now we are going to implement the build_loss function which will create nodes for loss computation and return the final tf.Tensor representing the scalar loss value.
Because segmentation is just classification on a pixel level we can again use the cross entropy loss function \(L\) between the target one-hot distribution \( \mathbf{y} \) and the predicted distribution from a softmax layer \( \mathbf{s} \). But compared to the image classification here we need to define the loss at each pixel. Below are the equations describing the loss for just one example (one pixel in our case).
$$
L = - \sum_{i=1}^{C} y_i \log(s_i(\mathbf{x})) \\
s_i(\mathbf{x}) = \frac{e^{x_i}}{\sum_{j=1}^{C} e^{x_j}}
$$
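In TensorFlow the per-pixel cross-entropy can be sketched like this (assuming the ignore class is the label equal to num_classes, i.e. 7, and masking it out):
```python
def build_loss(logits, y):
    logits_flat = tf.reshape(logits, [-1, num_classes])
    labels_flat = tf.reshape(y, [-1])
    valid = labels_flat < num_classes                 # drop ignore pixels
    xent = tf.nn.sparse_softmax_cross_entropy_with_logits(
        labels=tf.boolean_mask(labels_flat, valid),
        logits=tf.boolean_mask(logits_flat, valid))
    return tf.reduce_mean(xent)
```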
End of explanation
"""
# create inputs
# create model
# create loss
# we will need argmax predictions for IoU
"""
Explanation: Putting it all together
Now we can use all the building blocks from above and construct the whole forward pass Tensorflow graph in just a couple of lines.
End of explanation
"""
# this function trains the model
def train(sess, x, y, y_pred, loss, checkpoint_dir):
num_epochs = 30
batch_size = 10
log_dir = 'local/logs'
utils.clear_dir(log_dir)
utils.clear_dir(checkpoint_dir)
learning_rate = 1e-3
decay_power = 1.0
global_step = tf.Variable(0, trainable=False)
decay_steps = num_epochs * train_data.num_batches
# usually SGD learning rate is decreased over time which enables us
# to better fine-tune the parameters when close to solution
lr = tf.train.polynomial_decay(learning_rate, global_step, decay_steps,
end_learning_rate=0, power=decay_power)
...
sess = tf.Session()
train(sess, x, y, y_pred, loss, 'local/checkpoint1')
"""
Explanation: 3. Training the model
Training
During training we are going to compute the forward pass first to get the value of the loss function.
After that we do the backward pass, computing the gradients of the loss w.r.t. the parameters at each layer with backpropagation.
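The core of the missing part of train is just an update op built from the decayed learning rate above (a sketch):
```python
train_step = tf.train.AdamOptimizer(lr).minimize(loss, global_step=global_step)
sess.run(tf.global_variables_initializer())
# per mini-batch: sess.run([train_step, loss], feed_dict={x: x_batch, y: y_batch})
```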
End of explanation
"""
def validate(sess, data, x, y, y_pred, loss, draw_steps=0):
print('\nValidation phase:')
...
return utils.print_stats(conf_mat, 'Validation', Dataset.class_info)
sess = tf.Session()
train(sess, x, y, y_pred, loss, 'local/checkpoint1')
"""
Explanation: Validation
We usually evaluate semantic segmentation results with the Intersection over Union measure (IoU, aka the Jaccard index). Note that the accuracy measure we used on the MNIST image classification problem is a bad measure in this case, because semantic segmentation datasets are often heavily imbalanced. First we compute the IoU for each class in one-vs-all fashion (shown below) and then take the mean IoU (mIoU) over all classes. By taking the mean we are treating all classes as equally important.
In order to compute the IoU we are going to do the forward pass on the validation data and collect the confusion matrix first.
$$
IOU = \frac{TP}{TP + FN + FP}
$$
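Given a $C \times C$ confusion matrix, per-class IoU and mIoU can be computed along these lines (a sketch):
```python
def iou_from_confusion(conf_mat):
    TP = np.diag(conf_mat).astype(np.float64)
    FP = conf_mat.sum(axis=0) - TP
    FN = conf_mat.sum(axis=1) - TP
    iou = TP / (TP + FP + FN)
    return iou, iou.mean()   # per-class IoU and mIoU
```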
End of explanation
"""
# restore the checkpoint
...
"""
Explanation: Tensorboard
$ tensorboard --logdir=local/logs/
4. Restoring the pretrained network
End of explanation
"""
# upsampling layer
def upsample(x, skip, num_maps):
# this function takes the input placeholder and the number of classes, builds the model and returns the logits
def build_model(x, num_classes):
sess.close()
tf.reset_default_graph()
# create inputs
# create model
# create loss
# we are going to need argmax predictions for IoU
sess = tf.Session()
train(sess, x, y, y_pred, loss, 'local/checkpoint2')
# restore the checkpoint
...
"""
Explanation: Day 4
5. Improved model with skip connections
In this part we are going to improve on the previous model by adding skip connections. The role of the skip connections will be to restore the information lost due to downsampling.
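A typical upsampling block with a skip connection could look like this (a sketch; skip is the higher-resolution feature map saved during downsampling):
```python
def upsample(x, skip, num_maps):
    size = skip.get_shape().as_list()[1:3]
    x = tf.image.resize_bilinear(x, size)     # restore spatial resolution
    x = tf.concat([x, skip], axis=3)          # fuse the fine-grained features
    return conv(x, num_maps)
```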
End of explanation
"""
|
tensorflow/docs | site/en/r1/tutorials/keras/basic_regression.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
"""
Explanation: Copyright 2018 The TensorFlow Authors.
End of explanation
"""
# Use seaborn for pairplot
!pip install seaborn
import pathlib
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
import tensorflow.compat.v1 as tf
from tensorflow import keras
from tensorflow.keras import layers
print(tf.__version__)
"""
Explanation: Regression: predict fuel efficiency
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/r1/tutorials/keras/basic_regression.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/r1/tutorials/keras/basic_regression.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
Note: This is an archived TF1 notebook. These are configured
to run in TF2's
compatibility mode
but will run in TF1 as well. To use TF1 in Colab, use the
%tensorflow_version 1.x
magic.
In a regression problem, we aim to predict the output of a continuous value, like a price or a probability. Contrast this with a classification problem, where we aim to select a class from a list of classes (for example, where a picture contains an apple or an orange, recognizing which fruit is in the picture).
This notebook uses the classic Auto MPG Dataset and builds a model to predict the fuel efficiency of late-1970s and early 1980s automobiles. To do this, we'll provide the model with a description of many automobiles from that time period. This description includes attributes like: cylinders, displacement, horsepower, and weight.
This example uses the tf.keras API, see this guide for details.
End of explanation
"""
dataset_path = keras.utils.get_file("auto-mpg.data", "http://archive.ics.uci.edu/ml/machine-learning-databases/auto-mpg/auto-mpg.data")
dataset_path
"""
Explanation: The Auto MPG dataset
The dataset is available from the UCI Machine Learning Repository.
Get the data
First download the dataset.
End of explanation
"""
column_names = ['MPG','Cylinders','Displacement','Horsepower','Weight',
'Acceleration', 'Model Year', 'Origin']
raw_dataset = pd.read_csv(dataset_path, names=column_names,
na_values = "?", comment='\t',
sep=" ", skipinitialspace=True)
dataset = raw_dataset.copy()
dataset.tail()
"""
Explanation: Import it using pandas
End of explanation
"""
dataset.isna().sum()
"""
Explanation: Clean the data
The dataset contains a few unknown values.
End of explanation
"""
dataset = dataset.dropna()
"""
Explanation: To keep this initial tutorial simple drop those rows.
End of explanation
"""
origin = dataset.pop('Origin')
dataset['USA'] = (origin == 1)*1.0
dataset['Europe'] = (origin == 2)*1.0
dataset['Japan'] = (origin == 3)*1.0
dataset.tail()
"""
Explanation: The "Origin" column is really categorical, not numeric. So convert that to a one-hot:
End of explanation
"""
train_dataset = dataset.sample(frac=0.8,random_state=0)
test_dataset = dataset.drop(train_dataset.index)
"""
Explanation: Split the data into train and test
Now split the dataset into a training set and a test set.
We will use the test set in the final evaluation of our model.
End of explanation
"""
sns.pairplot(train_dataset[["MPG", "Cylinders", "Displacement", "Weight"]], diag_kind="kde")
plt.show()
"""
Explanation: Inspect the data
Have a quick look at the joint distribution of a few pairs of columns from the training set.
End of explanation
"""
train_stats = train_dataset.describe()
train_stats.pop("MPG")
train_stats = train_stats.transpose()
train_stats
"""
Explanation: Also look at the overall statistics:
End of explanation
"""
train_labels = train_dataset.pop('MPG')
test_labels = test_dataset.pop('MPG')
"""
Explanation: Split features from labels
Separate the target value, or "label", from the features. This label is the value that you will train the model to predict.
End of explanation
"""
def norm(x):
return (x - train_stats['mean']) / train_stats['std']
normed_train_data = norm(train_dataset)
normed_test_data = norm(test_dataset)
"""
Explanation: Normalize the data
Look again at the train_stats block above and note how different the ranges of each feature are.
It is good practice to normalize features that use different scales and ranges. Although the model might converge without feature normalization, it makes training more difficult, and it makes the resulting model dependent on the choice of units used in the input.
Note: Although we intentionally generate these statistics from only the training dataset, these statistics will also be used to normalize the test dataset. We need to do that to project the test dataset into the same distribution that the model has been trained on.
End of explanation
"""
def build_model():
model = keras.Sequential([
layers.Dense(64, activation=tf.nn.relu, input_shape=[len(train_dataset.keys())]),
layers.Dense(64, activation=tf.nn.relu),
layers.Dense(1)
])
optimizer = tf.keras.optimizers.RMSprop(0.001)
model.compile(loss='mean_squared_error',
optimizer=optimizer,
metrics=['mean_absolute_error', 'mean_squared_error'])
return model
model = build_model()
"""
Explanation: This normalized data is what we will use to train the model.
Caution: The statistics used to normalize the inputs here (mean and standard deviation) need to be applied to any other data that is fed to the model, along with the one-hot encoding that we did earlier. That includes the test set as well as live data when the model is used in production.
The model
Build the model
Let's build our model. Here, we'll use a Sequential model with two densely connected hidden layers, and an output layer that returns a single, continuous value. The model building steps are wrapped in a function, build_model, since we'll create a second model, later on.
End of explanation
"""
model.summary()
"""
Explanation: Inspect the model
Use the .summary method to print a simple description of the model
End of explanation
"""
example_batch = normed_train_data[:10]
example_result = model.predict(example_batch)
example_result
"""
Explanation: Now try out the model. Take a batch of 10 examples from the training data and call model.predict on it.
End of explanation
"""
# Display training progress by printing a single dot for each completed epoch
class PrintDot(keras.callbacks.Callback):
def on_epoch_end(self, epoch, logs):
if epoch % 100 == 0: print('')
print('.', end='')
EPOCHS = 1000
history = model.fit(
normed_train_data, train_labels,
epochs=EPOCHS, validation_split = 0.2, verbose=0,
callbacks=[PrintDot()])
"""
Explanation: It seems to be working, and it produces a result of the expected shape and type.
Train the model
Train the model for 1000 epochs, and record the training and validation accuracy in the history object.
End of explanation
"""
hist = pd.DataFrame(history.history)
hist['epoch'] = history.epoch
hist.tail()
def plot_history(history):
hist = pd.DataFrame(history.history)
hist['epoch'] = history.epoch
plt.figure()
plt.xlabel('Epoch')
plt.ylabel('Mean Abs Error [MPG]')
plt.plot(hist['epoch'], hist['mean_absolute_error'],
label='Train Error')
plt.plot(hist['epoch'], hist['val_mean_absolute_error'],
label = 'Val Error')
plt.ylim([0,5])
plt.legend()
plt.figure()
plt.xlabel('Epoch')
plt.ylabel('Mean Square Error [$MPG^2$]')
plt.plot(hist['epoch'], hist['mean_squared_error'],
label='Train Error')
plt.plot(hist['epoch'], hist['val_mean_squared_error'],
label = 'Val Error')
plt.ylim([0,20])
plt.legend()
plt.show()
plot_history(history)
"""
Explanation: Visualize the model's training progress using the stats stored in the history object.
End of explanation
"""
model = build_model()
# The patience parameter is the number of epochs to wait for an improvement
early_stop = keras.callbacks.EarlyStopping(monitor='val_loss', patience=10)
history = model.fit(normed_train_data, train_labels, epochs=EPOCHS,
validation_split = 0.2, verbose=0, callbacks=[early_stop, PrintDot()])
plot_history(history)
"""
Explanation: This graph shows little improvement, or even degradation in the validation error after about 100 epochs. Let's update the model.fit call to automatically stop training when the validation score doesn't improve. We'll use an EarlyStopping callback that tests a training condition for every epoch. If a set number of epochs elapses without improvement, training stops automatically.
You can learn more about this callback here.
End of explanation
"""
loss, mae, mse = model.evaluate(normed_test_data, test_labels, verbose=2)
print("Testing set Mean Abs Error: {:5.2f} MPG".format(mae))
"""
Explanation: The graph shows that on the validation set, the average error is usually around +/- 2 MPG. Is this good? We'll leave that decision up to you.
Let's see how well the model generalizes by using the test set, which we did not use when training the model. This tells us how well we can expect the model to predict when we use it in the real world.
End of explanation
"""
test_predictions = model.predict(normed_test_data).flatten()
plt.scatter(test_labels, test_predictions)
plt.xlabel('True Values [MPG]')
plt.ylabel('Predictions [MPG]')
plt.axis('equal')
plt.axis('square')
plt.xlim([0,plt.xlim()[1]])
plt.ylim([0,plt.ylim()[1]])
_ = plt.plot([-100, 100], [-100, 100])
plt.show()
"""
Explanation: Make predictions
Finally, predict MPG values using data in the testing set:
End of explanation
"""
error = test_predictions - test_labels
plt.hist(error, bins = 25)
plt.xlabel("Prediction Error [MPG]")
_ = plt.ylabel("Count")
plt.show()
"""
Explanation: It looks like our model predicts reasonably well. Let's take a look at the error distribution.
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.16/_downloads/plot_ica_from_raw.ipynb | bsd-3-clause | # Authors: Denis Engemann <denis.engemann@gmail.com>
# Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
#
# License: BSD (3-clause)
import numpy as np
import mne
from mne.preprocessing import ICA
from mne.preprocessing import create_ecg_epochs, create_eog_epochs
from mne.datasets import sample
"""
Explanation: Compute ICA on MEG data and remove artifacts
ICA is fit to MEG raw data.
The sources matching the ECG and EOG are automatically found and displayed.
Subsequently, artifact detection and rejection quality are assessed.
End of explanation
"""
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
raw = mne.io.read_raw_fif(raw_fname, preload=True)
raw.filter(1, None, fir_design='firwin') # already lowpassed @ 40
raw.annotations = mne.Annotations([1], [10], 'BAD')
raw.plot(block=True)
# For the sake of example we annotate first 10 seconds of the recording as
# 'BAD'. This part of data is excluded from the ICA decomposition by default.
# To turn this behavior off, pass ``reject_by_annotation=False`` to
# :meth:`mne.preprocessing.ICA.fit`.
raw.annotations = mne.Annotations([0], [10], 'BAD')
"""
Explanation: Setup paths and prepare raw data.
End of explanation
"""
ica = ICA(n_components=0.95, method='fastica', random_state=0, max_iter=100)
picks = mne.pick_types(raw.info, meg=True, eeg=False, eog=False,
stim=False, exclude='bads')
ica.fit(raw, picks=picks, decim=3, reject=dict(mag=4e-12, grad=4000e-13),
verbose='warning') # low iterations -> does not fully converge
# maximum number of components to reject
n_max_ecg, n_max_eog = 3, 1 # here we don't expect horizontal EOG components
"""
Explanation: 1) Fit ICA model using the FastICA algorithm.
Other available choices are picard, infomax or extended-infomax.
<div class="alert alert-info"><h4>Note</h4><p>The default method in MNE is FastICA, which along with Infomax is
one of the most widely used ICA algorithms. Picard is a
new algorithm that is expected to converge faster than FastICA and
Infomax, especially when the aim is to recover accurate maps with
a low tolerance parameter, see [1]_ for more information.</p></div>
We pass a float value between 0 and 1 to select n_components based on the
percentage of variance explained by the PCA components.
End of explanation
"""
title = 'Sources related to %s artifacts (red)'
# generate ECG epochs use detection via phase statistics
ecg_epochs = create_ecg_epochs(raw, tmin=-.5, tmax=.5, picks=picks)
ecg_inds, scores = ica.find_bads_ecg(ecg_epochs, method='ctps')
ica.plot_scores(scores, exclude=ecg_inds, title=title % 'ecg', labels='ecg')
show_picks = np.abs(scores).argsort()[::-1][:5]
ica.plot_sources(raw, show_picks, exclude=ecg_inds, title=title % 'ecg')
ica.plot_components(ecg_inds, title=title % 'ecg', colorbar=True)
ecg_inds = ecg_inds[:n_max_ecg]
ica.exclude += ecg_inds
# detect EOG by correlation
eog_inds, scores = ica.find_bads_eog(raw)
ica.plot_scores(scores, exclude=eog_inds, title=title % 'eog', labels='eog')
show_picks = np.abs(scores).argsort()[::-1][:5]
ica.plot_sources(raw, show_picks, exclude=eog_inds, title=title % 'eog')
ica.plot_components(eog_inds, title=title % 'eog', colorbar=True)
eog_inds = eog_inds[:n_max_eog]
ica.exclude += eog_inds
"""
Explanation: 2) identify bad components by analyzing latent sources.
End of explanation
"""
# estimate average artifact
ecg_evoked = ecg_epochs.average()
ica.plot_sources(ecg_evoked, exclude=ecg_inds) # plot ECG sources + selection
ica.plot_overlay(ecg_evoked, exclude=ecg_inds) # plot ECG cleaning
eog_evoked = create_eog_epochs(raw, tmin=-.5, tmax=.5, picks=picks).average()
ica.plot_sources(eog_evoked, exclude=eog_inds) # plot EOG sources + selection
ica.plot_overlay(eog_evoked, exclude=eog_inds) # plot EOG cleaning
# check the amplitudes do not change
ica.plot_overlay(raw) # EOG artifacts remain
# To save an ICA solution you can say:
# ica.save('my_ica.fif')
# You can later load the solution by saying:
# from mne.preprocessing import read_ica
# read_ica('my_ica.fif')
# Apply the solution to Raw, Epochs or Evoked like this:
# ica.apply(epochs)
"""
Explanation: 3) Assess component selection and unmixing quality.
End of explanation
"""
|
YihaoLu/statsmodels | examples/notebooks/statespace_structural_harvey_jaeger.ipynb | bsd-3-clause | %matplotlib inline
import numpy as np
import pandas as pd
import statsmodels.api as sm
import matplotlib.pyplot as plt
from IPython.display import display, Latex
"""
Explanation: Detrending, Stylized Facts and the Business Cycle
In an influential article, Harvey and Jaeger (1993) described the use of unobserved components models (also known as "structural time series models") to derive stylized facts of the business cycle.
Their paper begins:
"Establishing the 'stylized facts' associated with a set of time series is widely considered a crucial step
in macroeconomic research ... For such facts to be useful they should (1) be consistent with the stochastic
properties of the data and (2) present meaningful information."
In particular, they make the argument that these goals are often better met using the unobserved components approach rather than the popular Hodrick-Prescott filter or Box-Jenkins ARIMA modeling techniques.
Statsmodels has the ability to perform all three types of analysis, and below we follow the steps of their paper, using a slightly updated dataset.
End of explanation
"""
# Datasets
from pandas.io.data import DataReader
# Get the raw data
start = '1948-01'
end = '2008-01'
us_gnp = DataReader('GNPC96', 'fred', start=start, end=end)
us_gnp_deflator = DataReader('GNPDEF', 'fred', start=start, end=end)
us_monetary_base = DataReader('AMBSL', 'fred', start=start, end=end).resample('QS')
recessions = DataReader('USRECQ', 'fred', start=start, end=end).resample('QS', how='last').values[:,0]
# Construct the dataframe
dta = pd.concat(map(np.log, (us_gnp, us_gnp_deflator, us_monetary_base)), axis=1)
dta.columns = ['US GNP','US Prices','US monetary base']
dates = dta.index._mpl_repr()
"""
Explanation: Unobserved Components
The unobserved components model available in Statsmodels can be written as:
$$
y_t = \underbrace{\mu_{t}}_{\text{trend}} + \underbrace{\gamma_{t}}_{\text{seasonal}} + \underbrace{c_{t}}_{\text{cycle}} + \sum_{j=1}^k \underbrace{\beta_j x_{jt}}_{\text{explanatory}} + \underbrace{\varepsilon_t}_{\text{irregular}}
$$
see Durbin and Koopman 2012, Chapter 3 for notation and additional details. Notice that different specifications for the different individual components can support a wide range of models. The specific models considered in the paper and below are specializations of this general equation.
Trend
The trend component is a dynamic extension of a regression model that includes an intercept and linear time-trend.
$$
\begin{align}
\underbrace{\mu_{t+1}}_{\text{level}} & = \mu_t + \nu_t + \eta_{t+1} \qquad & \eta_{t+1} \sim N(0, \sigma_\eta^2) \\
\underbrace{\nu_{t+1}}_{\text{trend}} & = \nu_t + \zeta_{t+1} & \zeta_{t+1} \sim N(0, \sigma_\zeta^2)
\end{align}
$$
where the level is a generalization of the intercept term that can dynamically vary across time, and the trend is a generalization of the time-trend such that the slope can dynamically vary across time.
For both elements (level and trend), we can consider models in which:
The element is included vs excluded (if the trend is included, there must also be a level included).
The element is deterministic vs stochastic (i.e. whether or not the variance on the error term is confined to be zero or not)
The only additional parameters to be estimated via MLE are the variances of any included stochastic components.
This leads to the following specifications:
| | Level | Trend | Stochastic Level | Stochastic Trend |
|----------------------------------------------------------------------|-------|-------|------------------|------------------|
| Constant | ✓ | | | |
| Local Level <br /> (random walk) | ✓ | | ✓ | |
| Deterministic trend | ✓ | ✓ | | |
| Local level with deterministic trend <br /> (random walk with drift) | ✓ | ✓ | ✓ | |
| Local linear trend | ✓ | ✓ | ✓ | ✓ |
| Smooth trend <br /> (integrated random walk) | ✓ | ✓ | | ✓ |
Seasonal
The seasonal component is written as:
<span>$$
\gamma_t = - \sum_{j=1}^{s-1} \gamma_{t+1-j} + \omega_t \qquad \omega_t \sim N(0, \sigma_\omega^2)
$$</span>
The periodicity (number of seasons) is s, and the defining character is that (without the error term), the seasonal components sum to zero across one complete cycle. The inclusion of an error term allows the seasonal effects to vary over time.
The variants of this model are:
The periodicity s
Whether or not to make the seasonal effects stochastic.
If the seasonal effect is stochastic, then there is one additional parameter to estimate via MLE (the variance of the error term).
Cycle
The cyclical component is intended to capture cyclical effects at time frames much longer than captured by the seasonal component. For example, in economics the cyclical term is often intended to capture the business cycle, and is then expected to have a period between "1.5 and 12 years" (see Durbin and Koopman).
The cycle is written as:
<span>$$
\begin{align}
c_{t+1} & = c_t \cos \lambda_c + c_t^* \sin \lambda_c + \tilde \omega_t \qquad & \tilde \omega_t \sim N(0, \sigma_{\tilde \omega}^2) \\
c_{t+1}^* & = -c_t \sin \lambda_c + c_t^* \cos \lambda_c + \tilde \omega_t^* & \tilde \omega_t^* \sim N(0, \sigma_{\tilde \omega}^2)
\end{align}
$$</span>
The parameter $\lambda_c$ (the frequency of the cycle) is an additional parameter to be estimated by MLE. If the cyclical effect is stochastic, then there is one more parameter to estimate (the variance of the error term - note that both of the error terms here share the same variance, but are assumed to have independent draws).
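For quarterly data, an estimated frequency $\hat\lambda_c$ corresponds to a period of $2\pi / \hat\lambda_c$ quarters, or $2\pi / (4\hat\lambda_c)$ years, so the 1.5 to 12 year band above corresponds roughly to $\lambda_c$ between $2\pi/48 \approx 0.13$ and $2\pi/6 \approx 1.05$.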
Irregular
The irregular component is assumed to be a white noise error term. Its variance is a parameter to be estimated by MLE; i.e.
$$
\varepsilon_t \sim N(0, \sigma_\varepsilon^2)
$$
In some cases, we may want to generalize the irregular component to allow for autoregressive effects:
$$
\varepsilon_t = \rho(L) \varepsilon_{t-1} + \epsilon_t, \qquad \epsilon_t \sim N(0, \sigma_\epsilon^2)
$$
In this case, the autoregressive parameters would also be estimated via MLE.
Regression effects
We may want to allow for explanatory variables by including additional terms
<span>$$
\sum_{j=1}^k \beta_j x_{jt}
$$</span>
or for intervention effects by including
<span>$$
\begin{align}
\delta w_t \qquad \text{where} \qquad w_t & = 0, \qquad t < \tau, \\
& = 1, \qquad t \ge \tau
\end{align}
$$</span>
These additional parameters could be estimated via MLE or by including them as components of the state space formulation.
Data
Following Harvey and Jaeger, we will consider the following time series:
US real GNP, "output", (GNPC96)
US GNP implicit price deflator, "prices", (GNPDEF)
US monetary base, "money", (AMBSL)
The time frame in the original paper varied across series, but was broadly 1954-1989. Below we use data from the period 1948-2008 for all series. Although the unobserved components approach allows isolating a seasonal component within the model, the series considered in the paper, and here, are already seasonally adjusted.
All data series considered here are taken from Federal Reserve Economic Data (FRED). Conveniently, the Python library Pandas has the ability to download data from FRED directly.
End of explanation
"""
# Plot the data
ax = dta.plot(figsize=(13,3))
ylim = ax.get_ylim()
ax.xaxis.grid()
ax.fill_between(dates, ylim[0]+1e-5, ylim[1]-1e-5, recessions, facecolor='k', alpha=0.1);
"""
Explanation: To get a sense of these three variables over the timeframe, we can plot them:
End of explanation
"""
# Model specifications
# Unrestricted model, using string specification
unrestricted_model = {
'level': 'local linear trend', 'cycle': True, 'damped_cycle': True, 'stochastic_cycle': True
}
# Unrestricted model, setting components directly
# This is an equivalent, but less convenient, way to specify a
# local linear trend model with a stochastic damped cycle:
# unrestricted_model = {
# 'irregular': True, 'level': True, 'stochastic_level': True, 'trend': True, 'stochastic_trend': True,
# 'cycle': True, 'damped_cycle': True, 'stochastic_cycle': True
# }
# The restricted model forces a smooth trend
restricted_model = {
'level': 'smooth trend', 'cycle': True, 'damped_cycle': True, 'stochastic_cycle': True
}
# Restricted model, setting components directly
# This is an equivalent, but less convenient, way to specify a
# smooth trend model with a stochastic damped cycle. Notice
# that the difference from the local linear trend model is that
# `stochastic_level=False` here.
# unrestricted_model = {
# 'irregular': True, 'level': True, 'stochastic_level': False, 'trend': True, 'stochastic_trend': True,
# 'cycle': True, 'damped_cycle': True, 'stochastic_cycle': True
# }
"""
Explanation: Model
Since the data is already seasonally adjusted and there are no obvious explanatory variables, the generic model considered is:
$$
y_t = \underbrace{\mu_{t}}_{\text{trend}} + \underbrace{c_{t}}_{\text{cycle}} + \underbrace{\varepsilon_t}_{\text{irregular}}
$$
The irregular will be assumed to be white noise, and the cycle will be stochastic and damped. The final modeling choice is the specification to use for the trend component. Harvey and Jaeger consider two models:
Local linear trend (the "unrestricted" model)
Smooth trend (the "restricted" model, since we are forcing $\sigma_\eta = 0$)
Below, we construct kwargs dictionaries for each of these model types. Notice that there are two ways to specify the models. One way is to specify components directly, as in the table above. The other way is to use string names which map to various specifications.
End of explanation
"""
# Output
output_mod = sm.tsa.UnobservedComponents(dta['US GNP'], **unrestricted_model)
output_res = output_mod.fit(method='powell', disp=False)
# Prices
prices_mod = sm.tsa.UnobservedComponents(dta['US Prices'], **unrestricted_model)
prices_res = prices_mod.fit(method='powell', disp=False)
prices_restricted_mod = sm.tsa.UnobservedComponents(dta['US Prices'], **restricted_model)
prices_restricted_res = prices_restricted_mod.fit(method='powell', disp=False)
# Money
money_mod = sm.tsa.UnobservedComponents(dta['US monetary base'], **unrestricted_model)
money_res = money_mod.fit(method='powell', disp=False)
money_restricted_mod = sm.tsa.UnobservedComponents(dta['US monetary base'], **restricted_model)
money_restricted_res = money_restricted_mod.fit(method='powell', disp=False)
"""
Explanation: We now fit the following models:
Output, unrestricted model
Prices, unrestricted model
Prices, restricted model
Money, unrestricted model
Money, restricted model
End of explanation
"""
print(output_res.summary())
"""
Explanation: Once we have fit these models, there are a variety of ways to display the information. Looking at the model of US GNP, we can summarize the fit of the model using the summary method on the fit object.
End of explanation
"""
fig = output_res.plot_components(legend_loc='lower right', figsize=(15, 9));
"""
Explanation: For unobserved components models, and in particular when exploring stylized facts in line with point (2) from the introduction, it is often more instructive to plot the estimated unobserved components (e.g. the level, trend, and cycle) themselves to see if they provide a meaningful description of the data.
The plot_components method of the fit object can be used to show plots and confidence intervals of each of the estimated states, as well as a plot of the observed data versus the one-step-ahead predictions of the model to assess fit.
End of explanation
"""
# Create Table I
table_i = np.zeros((5,6))
start = dta.index[0]
end = dta.index[-1]
time_range = '%d:%d-%d:%d' % (start.year, start.quarter, end.year, end.quarter)
models = [
('US GNP', time_range, 'None'),
('US Prices', time_range, 'None'),
('US Prices', time_range, r'$\sigma_\eta^2 = 0$'),
('US monetary base', time_range, 'None'),
('US monetary base', time_range, r'$\sigma_\eta^2 = 0$'),
]
index = pd.MultiIndex.from_tuples(models, names=['Series', 'Time range', 'Restrictions'])
parameter_symbols = [
r'$\sigma_\zeta^2$', r'$\sigma_\eta^2$', r'$\sigma_\kappa^2$', r'$\rho$',
r'$2 \pi / \lambda_c$', r'$\sigma_\varepsilon^2$',
]
i = 0
for res in (output_res, prices_res, prices_restricted_res, money_res, money_restricted_res):
if res.model.stochastic_level:
(sigma_irregular, sigma_level, sigma_trend,
sigma_cycle, frequency_cycle, damping_cycle) = res.params
else:
(sigma_irregular, sigma_level,
sigma_cycle, frequency_cycle, damping_cycle) = res.params
sigma_trend = np.nan
period_cycle = 2 * np.pi / frequency_cycle
table_i[i, :] = [
sigma_level*1e7, sigma_trend*1e7,
sigma_cycle*1e7, damping_cycle, period_cycle,
sigma_irregular*1e7
]
i += 1
pd.set_option('float_format', lambda x: '%.4g' % np.round(x, 2) if not np.isnan(x) else '-')
table_i = pd.DataFrame(table_i, index=index, columns=parameter_symbols)
table_i
"""
Explanation: Finally, Harvey and Jaeger summarize the models in another way to highlight the relative importances of the trend and cyclical components; below we replicate their Table I. The values we find are broadly consistent with, but different in the particulars from, the values from their table.
End of explanation
"""
|
GoogleCloudPlatform/training-data-analyst | courses/machine_learning/deepdive2/recommendation_systems/solutions/multitask.ipynb | apache-2.0 | # Installing the necessary libraries.
!pip install -q tensorflow-recommenders
!pip install -q --upgrade tensorflow-datasets
"""
Explanation: Multi-task recommenders
Learning Objectives
1. Training a model which focuses on ratings.
2. Training a model which focuses on retrieval.
3. Training a joint model that assigns positive weights to both ratings & retrieval models.
Introduction
In the basic retrieval notebook we built a retrieval system using movie watches as positive interaction signals.
In many applications, however, there are multiple rich sources of feedback to draw upon. For example, an e-commerce site may record user visits to product pages (abundant, but relatively low signal), image clicks, adding to cart, and, finally, purchases. It may even record post-purchase signals such as reviews and returns.
Integrating all these different forms of feedback is critical to building systems that users love to use, and that do not optimize for any one metric at the expense of overall performance.
In addition, building a joint model for multiple tasks may produce better results than building a number of task-specific models. This is especially true where some data is abundant (for example, clicks), and some data is sparse (purchases, returns, manual reviews). In those scenarios, a joint model may be able to use representations learned from the abundant task to improve its predictions on the sparse task via a phenomenon known as transfer learning. For example, this paper shows that a model predicting explicit user ratings from sparse user surveys can be substantially improved by adding an auxiliary task that uses abundant click log data.
In this jupyter notebook, we are going to build a multi-objective recommender for Movielens, using both implicit (movie watches) and explicit signals (ratings).
Each learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook.
Imports
Let's first get our imports out of the way.
End of explanation
"""
# Importing the necessary modules
import os
import pprint
import tempfile
from typing import Dict, Text
import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds
import tensorflow_recommenders as tfrs
"""
Explanation: NOTE: Please ignore any incompatibility warnings and errors and re-run the above cell before proceeding.
End of explanation
"""
ratings = tfds.load('movielens/100k-ratings', split="train")
movies = tfds.load('movielens/100k-movies', split="train")
# Select the basic features.
ratings = ratings.map(lambda x: {
"movie_title": x["movie_title"],
"user_id": x["user_id"],
"user_rating": x["user_rating"],
})
movies = movies.map(lambda x: x["movie_title"])
"""
Explanation: Preparing the dataset
We're going to use the Movielens 100K dataset.
End of explanation
"""
# Randomly shuffle data and split between train and test.
tf.random.set_seed(42)
shuffled = ratings.shuffle(100_000, seed=42, reshuffle_each_iteration=False)
train = shuffled.take(80_000)
test = shuffled.skip(80_000).take(20_000)
movie_titles = movies.batch(1_000)
user_ids = ratings.batch(1_000_000).map(lambda x: x["user_id"])
unique_movie_titles = np.unique(np.concatenate(list(movie_titles)))
unique_user_ids = np.unique(np.concatenate(list(user_ids)))
"""
Explanation: And repeat our preparations for building vocabularies and splitting the data into a train and a test set:
End of explanation
"""
class MovielensModel(tfrs.models.Model):
def __init__(self, rating_weight: float, retrieval_weight: float) -> None:
# We take the loss weights in the constructor: this allows us to instantiate
# several model objects with different loss weights.
super().__init__()
embedding_dimension = 32
# User and movie models.
self.movie_model: tf.keras.layers.Layer = tf.keras.Sequential([
tf.keras.layers.experimental.preprocessing.StringLookup(
vocabulary=unique_movie_titles, mask_token=None),
tf.keras.layers.Embedding(len(unique_movie_titles) + 1, embedding_dimension)
])
self.user_model: tf.keras.layers.Layer = tf.keras.Sequential([
tf.keras.layers.experimental.preprocessing.StringLookup(
vocabulary=unique_user_ids, mask_token=None),
tf.keras.layers.Embedding(len(unique_user_ids) + 1, embedding_dimension)
])
# A small model to take in user and movie embeddings and predict ratings.
# We can make this as complicated as we want as long as we output a scalar
# as our prediction.
self.rating_model = tf.keras.Sequential([
tf.keras.layers.Dense(256, activation="relu"),
tf.keras.layers.Dense(128, activation="relu"),
tf.keras.layers.Dense(1),
])
# The tasks.
self.rating_task: tf.keras.layers.Layer = tfrs.tasks.Ranking(
loss=tf.keras.losses.MeanSquaredError(),
metrics=[tf.keras.metrics.RootMeanSquaredError()],
)
self.retrieval_task: tf.keras.layers.Layer = tfrs.tasks.Retrieval(
metrics=tfrs.metrics.FactorizedTopK(
candidates=movies.batch(128).map(self.movie_model)
)
)
# The loss weights.
self.rating_weight = rating_weight
self.retrieval_weight = retrieval_weight
def call(self, features: Dict[Text, tf.Tensor]) -> tf.Tensor:
# We pick out the user features and pass them into the user model.
user_embeddings = self.user_model(features["user_id"])
# And pick out the movie features and pass them into the movie model.
movie_embeddings = self.movie_model(features["movie_title"])
return (
user_embeddings,
movie_embeddings,
# We apply the multi-layered rating model to a concatenation of
# user and movie embeddings.
self.rating_model(
tf.concat([user_embeddings, movie_embeddings], axis=1)
),
)
def compute_loss(self, features: Dict[Text, tf.Tensor], training=False) -> tf.Tensor:
ratings = features.pop("user_rating")
user_embeddings, movie_embeddings, rating_predictions = self(features)
# We compute the loss for each task.
rating_loss = self.rating_task(
labels=ratings,
predictions=rating_predictions,
)
retrieval_loss = self.retrieval_task(user_embeddings, movie_embeddings)
# And combine them using the loss weights.
return (self.rating_weight * rating_loss
+ self.retrieval_weight * retrieval_loss)
"""
Explanation: A multi-task model
There are two critical parts to multi-task recommenders:
They optimize for two or more objectives, and so have two or more losses.
They share variables between the tasks, allowing for transfer learning.
In this jupyter notebook, we will define our models as before, but instead of having a single task, we will have two tasks: one that predicts ratings, and one that predicts movie watches.
The user and movie models are as before:
```python
user_model = tf.keras.Sequential([
tf.keras.layers.experimental.preprocessing.StringLookup(
vocabulary=unique_user_ids, mask_token=None),
# We add 1 to account for the unknown token.
tf.keras.layers.Embedding(len(unique_user_ids) + 1, embedding_dimension)
])
movie_model = tf.keras.Sequential([
tf.keras.layers.experimental.preprocessing.StringLookup(
vocabulary=unique_movie_titles, mask_token=None),
tf.keras.layers.Embedding(len(unique_movie_titles) + 1, embedding_dimension)
])
```
However, now we will have two tasks. The first is the rating task:
```python
tfrs.tasks.Ranking(
    loss=tf.keras.losses.MeanSquaredError(),
    metrics=[tf.keras.metrics.RootMeanSquaredError()],
)
```
Its goal is to predict the ratings as accurately as possible.
The second is the retrieval task:
```python
tfrs.tasks.Retrieval(
    metrics=tfrs.metrics.FactorizedTopK(
        candidates=movies.batch(128)
    )
)
```
As before, this task's goal is to predict which movies the user will or will not watch.
Putting it together
We put it all together in a model class.
The new component here is that - since we have two tasks and two losses - we need to decide on how important each loss is. We can do this by giving each of the losses a weight, and treating these weights as hyperparameters. If we assign a large loss weight to the rating task, our model is going to focus on predicting ratings (but still use some information from the retrieval task); if we assign a large loss weight to the retrieval task, it will focus on retrieval instead.
End of explanation
"""
# Here, configuring the model with losses and metrics.
# TODO 1: Here is your code.
model = MovielensModel(rating_weight=1.0, retrieval_weight=0.0)
model.compile(optimizer=tf.keras.optimizers.Adagrad(0.1))
cached_train = train.shuffle(100_000).batch(8192).cache()
cached_test = test.batch(4096).cache()
# Training the ratings model.
model.fit(cached_train, epochs=3)
metrics = model.evaluate(cached_test, return_dict=True)
print(f"Retrieval top-100 accuracy: {metrics['factorized_top_k/top_100_categorical_accuracy']:.3f}.")
print(f"Ranking RMSE: {metrics['root_mean_squared_error']:.3f}.")
"""
Explanation: Rating-specialized model
Depending on the weights we assign, the model will encode a different balance of the tasks. Let's start with a model that only considers ratings.
End of explanation
"""
# Here, configuring the model with losses and metrics.
# TODO 2: Here is your code.
model = MovielensModel(rating_weight=0.0, retrieval_weight=1.0)
model.compile(optimizer=tf.keras.optimizers.Adagrad(0.1))
# Training the retrieval model.
model.fit(cached_train, epochs=3)
metrics = model.evaluate(cached_test, return_dict=True)
print(f"Retrieval top-100 accuracy: {metrics['factorized_top_k/top_100_categorical_accuracy']:.3f}.")
print(f"Ranking RMSE: {metrics['root_mean_squared_error']:.3f}.")
"""
Explanation: The model does OK on predicting ratings (with an RMSE of around 1.11), but performs poorly at predicting which movies will be watched or not: its accuracy at 100 is almost 4 times worse than a model trained solely to predict watches.
Retrieval-specialized model
Let's now try a model that focuses on retrieval only.
End of explanation
"""
# Here, configuring the model with losses and metrics.
# TODO 3: Here is your code.
model = MovielensModel(rating_weight=1.0, retrieval_weight=1.0)
model.compile(optimizer=tf.keras.optimizers.Adagrad(0.1))
# Training the joint model.
model.fit(cached_train, epochs=3)
metrics = model.evaluate(cached_test, return_dict=True)
print(f"Retrieval top-100 accuracy: {metrics['factorized_top_k/top_100_categorical_accuracy']:.3f}.")
print(f"Ranking RMSE: {metrics['root_mean_squared_error']:.3f}.")
"""
Explanation: We get the opposite result: a model that does well on retrieval, but poorly on predicting ratings.
Joint model
Let's now train a model that assigns positive weights to both tasks.
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/mohc/cmip6/models/hadgem3-gc31-hm/land.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'mohc', 'hadgem3-gc31-hm', 'land')
"""
Explanation: ES-DOC CMIP6 Model Properties - Land
MIP Era: CMIP6
Institute: MOHC
Source ID: HADGEM3-GC31-HM
Topic: Land
Sub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes.
Properties: 154 (96 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:14
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code (e.g. MOSES2.2)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.3. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the processes modelled (e.g. dynamic vegetation, prognostic albedo, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.4. Land Atmosphere Flux Exchanges
Is Required: FALSE Type: ENUM Cardinality: 0.N
Fluxes exchanged with the atmosphere.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.5. Atmospheric Coupling Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.6. Land Cover
Is Required: TRUE Type: ENUM Cardinality: 1.N
Types of land cover defined in the land surface model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.7. Land Cover Change
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how land cover change is managed (e.g. the use of net or gross transitions)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.8. Tiling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how energy is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Water
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how water is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Carbon
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a time step dependent on the frequency of atmosphere coupling?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Overall timestep of land surface model (i.e. time between calls)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. Timestepping Method
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of time stepping method and associated time step(s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Grid
Land surface grid
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the horizontal grid (not including any tiling)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the horizontal grid match the atmosphere?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the vertical grid in the soil (not including any tiling)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 7.2. Total Depth
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The total depth of the soil (in metres)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Soil
Land surface soil
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of soil in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.2. Heat Water Coupling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the coupling between heat and water in the soil
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 8.3. Number Of Soil layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the soil scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of soil map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil structure map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.3. Texture
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil texture map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.4. Organic Matter
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil organic matter map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.5. Albedo
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil albedo map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.6. Water Table
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil water table map, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 9.7. Continuously Varying Soil Depth
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Do the soil properties vary continuously with depth?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.8. Soil Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil depth map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow free albedo prognostic?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, describe the dependencies of the snow free albedo calculations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.3. Direct Diffuse
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe the distinction between direct and diffuse albedo
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 10.4. Number Of Wavelength Bands
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If prognostic, enter the number of wavelength bands used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the soil hydrological model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil hydrology in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil hydrology tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.5. Number Of Ground Water Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers that may contain water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.6. Lateral Connectivity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe the lateral connectivity between tiles
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.7. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
The hydrological dynamics scheme in the land surface model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
How many soil layers may contain ground ice
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.2. Ice Storage Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of ice storage
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.3. Permafrost
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of permafrost, if any, within the land surface scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how drainage is included in the land surface scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
Different types of runoff represented by the land surface model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how heat treatment properties are defined
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil heat scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil heat treatment tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.5. Heat Storage
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the method of heat storage
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.6. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe processes included in the treatment of soil heat
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Snow
Land surface snow
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of snow in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.3. Number Of Snow Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of snow levels used in the land surface scheme/model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.4. Density
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow density
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.5. Water Equivalent
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the snow water equivalent
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.6. Heat Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the heat content of snow
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.7. Temperature
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow temperature
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.8. Liquid Water Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow liquid water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.9. Snow Cover Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify cover fractions used in the surface snow scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.10. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Snow related processes in the land surface scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.11. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the snow scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of snow-covered land albedo
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
*If prognostic, describe the dependencies of the snow albedo calculations*
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vegetation in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 17.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of vegetation scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 17.3. Dynamic Vegetation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there dynamic evolution of vegetation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.4. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vegetation tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.5. Vegetation Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Vegetation classification used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.6. Vegetation Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of vegetation types in the classification, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.7. Biome Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of biome types in the classification, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.8. Vegetation Time Variation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How the vegetation fractions in each tile vary with time
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.9. Vegetation Map
Is Required: FALSE Type: STRING Cardinality: 0.1
If vegetation fractions are not dynamically updated, describe the vegetation map used (common name and reference, if possible)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 17.10. Interception
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is vegetation interception of rainwater represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.11. Phenology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation phenology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.12. Phenology Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation phenology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.13. Leaf Area Index
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation leaf area index
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.14. Leaf Area Index Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of leaf area index
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.15. Biomass
Is Required: TRUE Type: ENUM Cardinality: 1.1
*Treatment of vegetation biomass*
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.16. Biomass Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biomass
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.17. Biogeography
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biogeography
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.18. Biogeography Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biogeography
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.19. Stomatal Resistance
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify what the vegetation stomatal resistance depends on
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.20. Stomatal Resistance Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation stomatal resistance
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.21. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the vegetation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of energy balance in land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the energy balance tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 18.3. Number Of Surface Temperatures
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.4. Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify the formulation method for land surface evaporation, from soil and vegetation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe which processes are included in the energy balance scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of carbon cycle in land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the carbon cycle tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 19.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of carbon cycle in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19.4. Anthropogenic Carbon
Is Required: FALSE Type: ENUM Cardinality: 0.N
Describe the treatment of the anthropogenic carbon pool
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.5. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the carbon scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.3. Forest Stand Dynamics
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of forest stand dynamics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, nitrogen dependence, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for maintenance respiration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.2. Growth Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for growth respiration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the allocation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.2. Allocation Bins
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify distinct carbon bins used in allocation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.3. Allocation Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how the fractions of allocation are calculated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the phenology scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the mortality scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is permafrost included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.2. Emitted Greenhouse Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
List the GHGs emitted
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.4. Impact On Soil Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the impact of permafrost on soil properties
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the nitrogen cycle in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the nitrogen cycle tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 29.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of nitrogen cycle in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the nitrogen scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30. River Routing
Land surface river routing
30.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of river routing in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the river routing tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river routing scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 30.4. Grid Inherited From Land Surface
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the grid inherited from land surface?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.5. Grid Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of grid, if not inherited from land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.6. Number Of Reservoirs
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of reservoirs
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.7. Water Re Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
TODO
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 30.8. Coupled To Atmosphere
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Is river routing coupled to the atmosphere model component?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.9. Coupled To Land
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the coupling between land and rivers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.10. Quantities Exchanged With Atmosphere
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled to the atmosphere, which quantities are exchanged between river routing and the atmosphere model components?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.11. Basin Flow Direction Map
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of basin flow direction map is being used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.12. Flooding
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the representation of flooding, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.13. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the river routing
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify how rivers are discharged to the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.2. Quantities Transported
Is Required: TRUE Type: ENUM Cardinality: 1.N
Quantities that are exchanged from river-routing to the ocean model component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32. Lakes
Land surface lakes
32.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lakes in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 32.2. Coupling With Rivers
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are lakes coupled to the river routing model component?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 32.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of lake scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.4. Quantities Exchanged With Rivers
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled with rivers, which quantities are exchanged between the lakes and rivers?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32.5. Vertical Grid
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vertical grid of lakes
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the lake scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is lake ice included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 33.2. Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of lake albedo
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 33.3. Dynamics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which dynamics of lakes are treated? horizontal, vertical, etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 33.4. Dynamic Lake Extent
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a dynamic lake extent scheme included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 33.5. Endorheic Basins
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Basins not flowing to ocean included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of wetlands, if any
End of explanation
"""
|
daniestevez/jupyter_notebooks | AmicalSat/ShockBurst image packets.ipynb | gpl-3.0 | %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from collections import Counter
"""
Explanation: AmicalSat ShockBurst image packets processing
This notebook shows how to process ShockBurst S-band image packets to reassemble the image file
End of explanation
"""
data = np.fromfile('/home/daniel/debian_testing_chroot/tmp/shockburst.u8', dtype = 'uint8').reshape((-1,34))
"""
Explanation: The file shockburst.u8 contains ShockBurst frames with the 0xE7E7E7E7E7 address header removed; each 34-byte record holds the 2-byte frame counter, the image payload and the CRC. It was obtained with nrf24.grc.
End of explanation
"""
crc_table = [
0x0000, 0x1021, 0x2042, 0x3063, 0x4084, 0x50a5, 0x60c6, 0x70e7,
0x8108, 0x9129, 0xa14a, 0xb16b, 0xc18c, 0xd1ad, 0xe1ce, 0xf1ef,
0x1231, 0x0210, 0x3273, 0x2252, 0x52b5, 0x4294, 0x72f7, 0x62d6,
0x9339, 0x8318, 0xb37b, 0xa35a, 0xd3bd, 0xc39c, 0xf3ff, 0xe3de,
0x2462, 0x3443, 0x0420, 0x1401, 0x64e6, 0x74c7, 0x44a4, 0x5485,
0xa56a, 0xb54b, 0x8528, 0x9509, 0xe5ee, 0xf5cf, 0xc5ac, 0xd58d,
0x3653, 0x2672, 0x1611, 0x0630, 0x76d7, 0x66f6, 0x5695, 0x46b4,
0xb75b, 0xa77a, 0x9719, 0x8738, 0xf7df, 0xe7fe, 0xd79d, 0xc7bc,
0x48c4, 0x58e5, 0x6886, 0x78a7, 0x0840, 0x1861, 0x2802, 0x3823,
0xc9cc, 0xd9ed, 0xe98e, 0xf9af, 0x8948, 0x9969, 0xa90a, 0xb92b,
0x5af5, 0x4ad4, 0x7ab7, 0x6a96, 0x1a71, 0x0a50, 0x3a33, 0x2a12,
0xdbfd, 0xcbdc, 0xfbbf, 0xeb9e, 0x9b79, 0x8b58, 0xbb3b, 0xab1a,
0x6ca6, 0x7c87, 0x4ce4, 0x5cc5, 0x2c22, 0x3c03, 0x0c60, 0x1c41,
0xedae, 0xfd8f, 0xcdec, 0xddcd, 0xad2a, 0xbd0b, 0x8d68, 0x9d49,
0x7e97, 0x6eb6, 0x5ed5, 0x4ef4, 0x3e13, 0x2e32, 0x1e51, 0x0e70,
0xff9f, 0xefbe, 0xdfdd, 0xcffc, 0xbf1b, 0xaf3a, 0x9f59, 0x8f78,
0x9188, 0x81a9, 0xb1ca, 0xa1eb, 0xd10c, 0xc12d, 0xf14e, 0xe16f,
0x1080, 0x00a1, 0x30c2, 0x20e3, 0x5004, 0x4025, 0x7046, 0x6067,
0x83b9, 0x9398, 0xa3fb, 0xb3da, 0xc33d, 0xd31c, 0xe37f, 0xf35e,
0x02b1, 0x1290, 0x22f3, 0x32d2, 0x4235, 0x5214, 0x6277, 0x7256,
0xb5ea, 0xa5cb, 0x95a8, 0x8589, 0xf56e, 0xe54f, 0xd52c, 0xc50d,
0x34e2, 0x24c3, 0x14a0, 0x0481, 0x7466, 0x6447, 0x5424, 0x4405,
0xa7db, 0xb7fa, 0x8799, 0x97b8, 0xe75f, 0xf77e, 0xc71d, 0xd73c,
0x26d3, 0x36f2, 0x0691, 0x16b0, 0x6657, 0x7676, 0x4615, 0x5634,
0xd94c, 0xc96d, 0xf90e, 0xe92f, 0x99c8, 0x89e9, 0xb98a, 0xa9ab,
0x5844, 0x4865, 0x7806, 0x6827, 0x18c0, 0x08e1, 0x3882, 0x28a3,
0xcb7d, 0xdb5c, 0xeb3f, 0xfb1e, 0x8bf9, 0x9bd8, 0xabbb, 0xbb9a,
0x4a75, 0x5a54, 0x6a37, 0x7a16, 0x0af1, 0x1ad0, 0x2ab3, 0x3a92,
0xfd2e, 0xed0f, 0xdd6c, 0xcd4d, 0xbdaa, 0xad8b, 0x9de8, 0x8dc9,
0x7c26, 0x6c07, 0x5c64, 0x4c45, 0x3ca2, 0x2c83, 0x1ce0, 0x0cc1,
0xef1f, 0xff3e, 0xcf5d, 0xdf7c, 0xaf9b, 0xbfba, 0x8fd9, 0x9ff8,
0x6e17, 0x7e36, 0x4e55, 0x5e74, 0x2e93, 0x3eb2, 0x0ed1, 0x1ef0
]
def crc(frame):
c = 0xB95E # CRC of initial E7E7E7E7E7 address field
for b in frame:
tbl_idx = ((c >> 8) ^ b) & 0xff
c = (crc_table[tbl_idx] ^ (c << 8)) & 0xffff
return c & 0xffff
crc_ok = np.array([crc(d) == 0 for d in data])
frame_count = data[crc_ok,:2].ravel().view('uint16')
frame_count_unique = np.unique(frame_count)
"""
Explanation: The CRC used in ShockBurst frames is CRC16_CCITT_FALSE from this online calculator. Since the 0xE7E7E7E7E7 address is included in the CRC calculation but is missing in our data, we take this into account by modifying the initial XOR value.
End of explanation
"""
np.sum(np.diff(frame_count_unique)-1)
np.where(np.diff(frame_count_unique)-1)
"""
Explanation: Number of skipped frames:
End of explanation
"""
len(frame_count_unique)
plt.plot(np.diff(frame_count_unique)!=1)
"""
Explanation: Number of correct frames:
End of explanation
"""
frame_size = 30
with open('/tmp/file', 'wb') as f:
for count in frame_count_unique:
valid_frames = data[crc_ok][frame_count == count]
counter = Counter([bytes(frame[2:]) for frame in valid_frames])
f.seek(count * frame_size)
f.write(counter.most_common()[0][0])
"""
Explanation: Write the frames to a file according to their frame number. We do majority voting to select among different frames with the same frame number (there are corrupted frames with a good CRC). The file has gaps filled with zeros where frames are missing.
End of explanation
"""
|
mrcslws/nupic.research | projects/archive/dynamic_sparse/notebooks/mcaporale/2019-10-07--Experiment-Analysis-NonBinaryHeb.ipynb | agpl-3.0 | from IPython.display import Markdown, display
%load_ext autoreload
%autoreload 2
import sys
import itertools
sys.path.append("../../")
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
import glob
import tabulate
import pprint
import click
import numpy as np
import pandas as pd
from ray.tune.commands import *
from nupic.research.frameworks.dynamic_sparse.common.browser import *
base = 'gsc-trials-2019-10-07'
exp_names = [
'gsc-BaseModel',
'gsc-Static',
'gsc-Heb-nonbinary',
'gsc-WeightedMag-nonbinary',
'gsc-WeightedMag',
'gsc-SET',
]
exps = [
os.path.join(base, exp) for exp in exp_names
]
paths = [os.path.expanduser("~/nta/results/{}".format(e)) for e in exps]
for p in paths:
print(os.path.exists(p), p)
df = load_many(paths)
# remove nans where appropriate
df['hebbian_prune_perc'] = df['hebbian_prune_perc'].replace(np.nan, 0.0, regex=True)
df['weight_prune_perc'] = df['weight_prune_perc'].replace(np.nan, 0.0, regex=True)
# distill certain values
df['on_perc'] = df['on_perc'].replace('None-None-0.1-None', 0.1, regex=True)
df['on_perc'] = df['on_perc'].replace('None-None-0.4-None', 0.4, regex=True)
df['on_perc'] = df['on_perc'].replace('None-None-0.02-None', 0.02, regex=True)
df['prune_methods'] = df['prune_methods'].replace('None-None-dynamic-linear-None', 'dynamic-linear', regex=True)
# def model_name(row):
# col = 'Experiment Name'
# for exp in exp_names:
# if exp in row[col]:
# return exp
# # if row[col] == 'DSNNWeightedMag':
# # return 'DSNN-WM'
# # elif row[col] == 'DSNNMixedHeb':
# # if row['hebbian_prune_perc'] == 0.3:
# # return 'SET'
# # elif row['weight_prune_perc'] == 0.3:
# # return 'DSNN-Heb'
# # elif row[col] == 'SparseModel':
# # return 'Static'
# assert False, "This should cover all cases. Got {}".format(row[col])
# df['model2'] = df.apply(model_name, axis=1)
df.iloc[34]
df.groupby('experiment_base_path')['experiment_base_path'].count()
# Did anything fail?
df[df["epochs"] < 30]["epochs"].count()
# helper functions
def mean_and_std(s):
return "{:.3f} ± {:.3f}".format(s.mean(), s.std())
def round_mean(s):
return "{:.0f}".format(round(s.mean()))
stats = ['min', 'max', 'mean', 'std']
def agg(columns, filter=None, round=3):
if filter is None:
return (df.groupby(columns)
.agg({'val_acc_max_epoch': round_mean,
'val_acc_max': stats,
'model': ['count']})).round(round)
else:
return (df[filter].groupby(columns)
.agg({'val_acc_max_epoch': round_mean,
'val_acc_max': stats,
'model': ['count']})).round(round)
type(np.nan)
df['on_perc'][0] is np.nan
"""
Explanation: Experiment
Run Hebbian pruning with non-binary activations.
Motivation
Attempt pruning given the intuition offered in the "Memory Aware Synapses" paper:
* The weights with higher coactivations, computed as $x_i \times x_j$,
have a greater effect on the L2 norm of the layer's output. Here $x_i$ and $x_j$ are
the input and output activations respectively.
End of explanation
"""
fltr = (df['experiment_base_path'] == 'gsc-BaseModel')
agg(['model'], fltr)
"""
Explanation: Dense Model
End of explanation
"""
# 2% sparse
fltr = (df['experiment_base_path'] == 'gsc-Static')
agg(['model'], fltr)
"""
Explanation: Static Sparse
End of explanation
"""
# 2% sparse
# 2% sparse
combos = {
'experiment_base_path': ['gsc-WeightedMag', 'gsc-WeightedMag-nonbinary'],
'hebbian_grow': [True, False],
}
combos = [[(k, v_i) for v_i in v] for k, v in combos.items()]
combos = list(itertools.product(*combos))
for c in combos:
fltr = None
summary = []
for restraint in c:
rname = restraint[0]
rcond = restraint[1]
summary.append("{}={} ".format(rname, rcond))
new_fltr = df[rname] == rcond
if fltr is not None:
fltr = fltr & new_fltr
else:
fltr = new_fltr
summary = Markdown("### " + " / ".join(summary))
display(summary)
display(agg(['experiment_base_path'], fltr))
print('\n\n\n\n')
"""
Explanation: Weighted Magnitude
End of explanation
"""
# 2% sparse
fltr = (df['experiment_base_path'] == 'gsc-SET')
display(agg(['model'], fltr))
"""
Explanation: SET
End of explanation
"""
# 2% sparse
combos = {
'hebbian_grow': [True, False],
'moving_average_alpha': [0.6, 0.8, 1.0],
'reset_coactivations': [True, False],
}
combos = [[(k, v_i) for v_i in v] for k, v in combos.items()]
combos = list(itertools.product(*combos))
for c in combos:
fltr = None
summary = []
for restraint in c:
rname = restraint[0]
rcond = restraint[1]
summary.append("{}={} ".format(rname, rcond))
new_fltr = df[rname] == rcond
if fltr is not None:
fltr = fltr & new_fltr
else:
fltr = new_fltr
summary = Markdown("### " + " / ".join(summary))
display(summary)
display(agg(['experiment_base_path'], fltr))
print('\n\n\n\n')
d = {'b':4}
'b' in d
"""
Explanation: Hebbian
End of explanation
"""
|
csc-training/python-introduction | notebooks/examples/6 - Objects.ipynb | mit | class Student(object):
"""
The above states that the code-block (indented area) below will define a
class Student, which derives from a class called 'object'. Inheriting from 'object' is standard practice.
"""
def __init__(self, name, birthyear, interest=None):
"""__init__ is special method that is called when instantiating the object.
Typically the methods can then be used to """
self.name = name
self.birthyear = birthyear
self.interest = interest
def say_hi(self):
""" This is a classical example of a function that prints something.
The more complex your system, the less likely it is that it is a good idea to print anything other than
warnings from within your classes."""
if not self.interest:
print("Hi, my name is " + self.name + "!")
else:
print("Hi, my name is " + self.name + " and I'm interested in " + self.interest + ".")
def get_age(self):
""" This is a much more style-pure example of classes.
Recording a birthyear instead of age is a good idea because next year we'll all be a year older.
However, requiring everyone who uses your class to compute the age themselves is impolite and would lead to duplicate code.
Doing it once and asking everyone to use that implementation reduces code complexity and improves
maintainability.
"""
import datetime
return datetime.datetime.now().year-self.birthyear
"""
Explanation: Objects
An object is a combination of data and methods associated with the data.
Here's a concrete example with docstrings explaining what goes on in each part.
End of explanation
"""
jyry = Student("Jyry", 1984, interest="Python")
"""
Explanation: The above construct is a class, which is to say a model for creating objects.
To create an object we say we instantiate a class.
End of explanation
"""
jyry.say_hi()
print(jyry.birthyear)
"""
Explanation: Now we have an object called "jyry", which has the values listed above.
We can call methods of the object and access the variables associated with the object.
End of explanation
"""
tuomas = Student("Tuomas", 1984, interest="Java")
tuomas.say_hi()
"""
Explanation: One can create multiple objects that all have their own identity, even though they are created from the same class.
End of explanation
"""
tuomas == jyry
"""
Explanation: Typically object comparison is done using the same syntax as for basic types (which, by the way, are objects too in Python).
If you want to implement special logic for comparisons in your own classes, look up magic methods either online or in another part of this introduction. It is a very common task and helps people who use your code (i.e. you).
End of explanation
"""
jyry.interest = "teaching"
jyry.say_hi()
"""
Explanation: Python permits the programmer to edit objects without any access control mechanics, as the following example shows.
End of explanation
"""
fobj = open("../data/grep.txt")
"""
Explanation: Figuring out an object
Opening a file using the open() function returns an object.
End of explanation
"""
print(fobj)
dir(fobj)
help(jyry.say_hi)
jyry.say_hi.__doc__
"""
Explanation: How can we find things out about this object? Below are a few examples:
printing calls the __str__()-method of the object, which should return a (more or less) human-readable definition of the object
dir() lists the attributes of an object, that is to say functions and variables associated with it
the help-function attempts to find the docstring for your function
the __doc__ attribute of object members contains the docstring if available to the interpreter
This list is not comprehensive.
End of explanation
"""
class Container(object):
def __init__(self):
self.bag = {}
def put(self, key, item):
self.bag[key] = item
def get(self, key):
return self.bag[key]
"""
Explanation: Exceptions
In Python, exceptions are lightweight, i.e. handling them doesn't cause a notable decrease in performance as happens in some languages.
The purpose of exceptions is to communicate that something didn't go right. The name of the exception typically tells what kind of error occurred and the exception can also contain a more explicit message.
End of explanation
"""
container = Container()
container.put([1, 2, 3], "example")
container.get("not_in_it")
"""
Explanation: The Container class can raise at least two different exceptions.
End of explanation
"""
try:
container = Container()
container.put([1,2,3], "value")
except TypeError as err:
print("Stupid programmer caused an error: " + str(err))
"""
Explanation: Who should worry about the various issues is a good philosophical question. We could either make the Container-class secure in that it doesn't raise any errors to whoever calls it or we could let the caller worry about such errors.
For now let's assume that the programmer is competent and knows what is a valid key and what isn't.
End of explanation
"""
try:
container = Container()
container.put(3, "value")
container.get(3)
except TypeError as err:
print("Stupid programmer caused an error: " + str(err))
except KeyError as err:
print("Stupid programmer caused another error: " + str(err))
finally:
print("all is well in the end")
# go ahead, make changes that cause one of the exceptions to be raised
"""
Explanation: A try-except may contain a finally block, which is always guaranteed to execute.
Also, it is permissible to catch multiple different errors.
End of explanation
"""
try:
container = Container()
container.put(3, "value")
container.get(5)
except (TypeError, KeyError) as err:
print("please shoot me")
if type(err) == TypeError:
raise Exception("That's it I quit!")
else:
raise
"""
Explanation: There is also syntax for catching multiple error types in the same except clause.
The keyword raise is used to continue error handling. This is useful if you want to log errors but let them pass onward anyway.
A raise without arguments will re-raise the error that was being handled.
End of explanation
"""
|
numerical-mooc/assignment-bank-2015 | Chris Tiu Project/Test2.ipynb | mit | import numpy
from matplotlib import pyplot
%matplotlib inline
from matplotlib import rcParams
rcParams['font.family']= 'serif'
rcParams['font.size']=16
from IPython.display import Image
"""
Explanation: Copyright statement?
A New 'Transition'
Welcome back! At this point in the course you are all likely aces at numerical analysis, especially after 5 modules ranging from Diffusion, to Convection, to Conduction, in 1 or 2D, with various Neumann and Dirichlet Boundary conditions!
But now it's time for a new "phase". In this lesson we begin looking at what happens when your nice rigid boundary conditions decide to start moving on you, essentially creating a moving boundary interface!
A moving boundary interface is represented by numerous physical behaviors in real-world applications, from the polar ice caps melting, to the phase transformations of metal alloys, to the varying oxygen content of muscles near a clotted blood vessel [2].
#### Real World Applications of a Moving Boundary Interface
The Stefan Problem
This new type of problem is known as the "Stefan Problem" as it was first studied by Slovene physicist Jozef Stefan around 1890 [3]. Though his focus was primarily on the analysis of ice formations, nowadays his name is synonymous with the particular type of boundary value problem for PDEs where the boundary can move with time. Since the classic Stefan problem concentrated on the temperature distribution of a homogeneous material undergoing a phase transition, one of the most commonly studied Stefan problems today is the melting of Ice to Water!
#### Jozef Stefan pioneered work into phase transitions of materials (ie Ice)
A Review: 1D Heat Conduction
Recall from both Modules 2 and 4 we took a look at the Diffusion equation in 1D:
$$\begin{equation}
\frac{\partial U}{\partial t} = \alpha \frac{\partial^2U}{\partial x^2}
\end{equation}$$
Where we have the temperature distribution $U(x,t)$ and the thermal diffusivity $\alpha$. While before we looked at the conduction of heat through a graphite rod of length 1 meter, in this scenario we will analyze heat conduction through a 1D rod of ice. Let's first list some basic coefficients associated with the new material:
Thermal Properties of Ice at ~$0^{\circ}C$:
Thermal Conductivity: $k = 2.22 \frac{W}{mK}$
Density: $\rho \approx 1000 \frac{kg}{m^3}$
Specific Heat: $c_{p}= 2.05x10^{3} \frac{J}{kgK}$
and lastly, Thermal Diffusivity: $\alpha = \frac{k_{ice}}{\rho_{ice}c_{p_{ice}}} = 1.083x10^{-6} \frac{m^2}{sec}$
Melting Temperature of Ice is: $T_{melt}=0^{\circ}C$
Okay! With that out of the way, let's look at the temperature distribution across a rod of ice of length 1m with some basic initial and boundary conditions. For this first scenario, we will not take into account the phase transition if temperatures hit $T_{melt}$, and will therefore assume a static boundary condition.
Problem Setup
Governing Equation: $$\begin{equation}
\frac{\partial U}{\partial t} = \alpha \frac{\partial^2U}{\partial x^2}
\end{equation}$$
Boundary Conditions:
LHS: $\frac{\partial U}{\partial x} = -e^{\beta t}$, @x=0, t>0, time-dependent (increasing) heat flux in
RHS: $\frac{\partial U}{\partial x} = 0$, @x=L, t>0, Insulated end, no heat flux in/out.
Initial Conditions:
$U(x,t) = -10^{\circ}C$, for 0<x<L, t=0
Lets start coding!!
End of explanation
"""
def FTCS(U, nt, dt, dx, alpha, beta):
for n in range(nt):
Un=U.copy()
U[1:-1] = Un[1:-1] + alpha*(dt/dx**2)*(Un[2:]-2*Un[1:-1]+Un[0:-2])
#Boundary Conditions
U[-1]=U[-2] #RHS Insulated BC
t=n*dt #Increasing time = n*timestep(dt)
U[0]=numpy.exp(beta*t) #LHS time-dependent heat input BC
return U
#Basic Parameters and Initialization
# Temperature scale: Celsius
# Length scale: meters
# Mass scale: kg
# Time scale: seconds
# Energy scale: Joules
# Power scale: Watts
L = 1 # Length of my ice rod
nt = 40000 # Number of timesteps
nx = 51 # Number of grid space steps
alpha = 1.083e-6 # Thermal Diffusivity
dx = L/(nx-1) # grid spacing in "x"
Ui = numpy.ones(nx)*(-10) # initialized Temperature array
beta = 0.001 #Growth Factor of my Temperature input
sigma = 1/2 # Stability*
dt=0.1 # Timestep chosen to be 0.1 seconds*
"""
Explanation: Now let's define a function to run the governing equations using the Forward-Time/Central-Difference Method discretization
End of explanation
"""
x = numpy.linspace(0, 1, nx)
print('Initial ice rod temperature distribution')
pyplot.plot(x, Ui, color = '#003366', ls = '-', lw =4)
pyplot.ylim(-20, 60)
pyplot.xlabel('Length of Domain')
pyplot.ylabel('Temperature')
U = FTCS(Ui.copy(), nt, dt, dx, alpha, beta)
print('Total elapsed time is ', nt*dt, 'seconds, or', nt*dt/60, 'minutes' )
pyplot.plot(x, U, color = '#003366', ls = '-', lw =3)
pyplot.ylim(-20, 60)
pyplot.xlim(0, 1)
pyplot.xlabel('Length of Domain')
pyplot.ylabel('Temperature')
"""
Explanation: A word on Stability:
Recall from module 2 we had the Courant-Friedrichs-Lewy (CFL) Stability condition for the FTCS Diffusion equation in the form:
$$\sigma = \alpha \frac{\Delta t}{\Delta x^{2}} \leq \frac{1}{2} $$
re-arranging to determine the upper limit for a time-step (dt) we have:
$$\Delta t \leq \frac{\sigma \Delta x^{2}}{\alpha} = \frac{0.5 \, (0.02)^{2}}{1.083\times10^{-6}} \approx 185 \ seconds$$
As you can see, choosing a time-step (dt) equal to $0.1$ seconds, we more than satisfy this CFL Stability condition.
End of explanation
"""
def Phase_graph(U, x, nx):
phase=numpy.ones(nx)*(-100)
for n in range(nx):
if U[n]>0:
phase[n]=100
return phase
pyplot.plot(x, U, color = '#003366')
pyplot.ylabel('Temperature', color ='#003366')
Phase1=numpy.ones(nx)
Phase=Phase_graph(U, x, nx)
pyplot.plot(x, Phase, color = '#654321', ls = '-', lw =3)
pyplot.ylim(-12, 10)
pyplot.xlim(0, 0.3)
pyplot.xlabel('Approx Water-Ice Interface', color = '#654321')
"""
Explanation: The above figure shows us the temperature distribution of the ice rod at a point in time approximately 1 hour into applying our heat source. Now this would be all well and good if ice didn't melt (and therefore change phase) at zero degrees. But it does!!
Let's build a rudimentary function to see what portion of the rod should be water by now, and which part should still be ice:
End of explanation
"""
def Exact_Stefan( nt, dt, x):
U=numpy.ones(nx)
for n in range(nt):
U = numpy.exp(n*dt-x)-1
return U
dt_A=0.002
nt_A=500
ExactS=Exact_Stefan(nt_A, dt_A, x)
Max_tempA=max(ExactS)
pyplot.plot(x, ExactS, color = '#003366', ls = '--', lw =1)
pyplot.xlabel('Length of Domain')
pyplot.ylabel('Temperature')
print('Analytically, this is our temperature profile after:', nt_A*dt_A,'seconds')
print('The temperature at our LHS boundary is:', Max_tempA, 'degrees')
"""
Explanation: As you can see, the ice SHOULD have melted about 0.07 meters (or 2.75 inches) into our rod. In reality, our boundary interface has moved from x=0 to the right as time elapsed. Not only should our temperature distribution profile change due to the differences in properties ($\rho, k, c_{p}$), but also the feedback from the moving boundary condition.
Solutions to the Dimensionless Stefan problem:
Analytical Solution:
Before we continue we need to make some simplifications:
1) No convection, heat transfer is limited to conduction,
2) Pressure is constant,
3) Density does not change between the solid and liquid phase (Ice/Water), ie $\rho_{ice}=\rho_{water}\approx 1000 \frac{kg}{m^3}$,
4) The phase change interface at $s(t)$ has no thickness
Looking closer at the problem at hand, we see that our temperature distribution must conform to the below figure:
#### Phase Change Domain
As you can see, our temperature distribution has a kink (a discontinuity in its gradient) at our solid-liquid interface at x=s(t). Furthermore, this boundary moves to the right as time elapses. If we wish to analyze the distribution in one region or the other, we need to take into account a growing or shrinking domain, and therefore, the boundary of the domain has to be found as part of the solution [2]. In the Melting Problem depicted above, a moving interface separates the liquid and solid phases. The motion of the boundary interface (its velocity is denoted $\dot{s}$) is driven by the heat transport through it [2]. The relationship between the moving interface, s(t), and the temperature distribution through it was first formulated by Stefan and is known as the Stefan Condition (or Stefan Equation); it takes the form of the following boundary conditions:
when x =s(t),
[1, 3, 2]
$$\dot{s} = \frac{ds}{dt} = -\frac{\partial U}{\partial x}$$
$$U=0$$
There are many solution methods for the Stefan problem, for this lesson we will focus on finding the temperature distribution in the liquid region only (ie 0<x<s(t)). Critical parameters to track are both temperature, U(x,t), and the position of the interface, s(t). Let's first solve this problem analytically:
To make our lives easier we want a simplified version of the Stefan Problem. In the paper by Crowley [4], it is noted that Oleinik and Jerome demonstrated that, with an appropriate dimensionless model and appropriate boundary and initial conditions, one can not only determine a general solution for the diffusion equation with the Stefan condition, but also show that an explicit finite difference scheme converges to this solution [4]. For our dimensionless model we will set $\alpha=1, \beta=1$ to get the following governing equations and boundary conditions:
Dimensionless Stefan Problem Equations
$$\frac{\partial U}{\partial t} = \frac{\partial^{2} U}{\partial x^{2}}, \qquad 0 < x < s(t)$$
(1) $\frac{\partial U(x=0, t)}{\partial x} = -e^{t}$, LHS BC (Heat input into the system)
(2) $\frac{\partial U(x=s(t), t)}{\partial x} = -\frac{d s(t)}{d t}$, RHS BC, Stefan Condition
(3) $U(x=s(t), t) = T_{melt}=0$, By definition of the Melting interface
(4) s(t=0)=0, initial condition
These equations set up the new below figure:
Because we want to be able to judge the accuracy of our numerical analysis (and thanks to Crowley, Oleinik, and Jerome!) we next find the general solution of this problem by means of Separation of Variables:
SOV: $U(x,t)=X(x)T(t)$, gives us a general solution of the form:
$$U(x,t) = c_{1}e^{c_{2}^{2}t-c_{2}x} +c_{3}$$
from BC(1) we get $c_{2}=1$ and $c_{1}=1$, yielding $U(x,t) = e^{t-x} + c_{3}$
from BC(2) we get: $\dot{s}=\frac{d s(t)}{d t} = -\frac{\partial U(x=s(t), t)}{\partial x} \rightarrow \frac{ds(t)}{d t} = e^{t-s(t)}$, and this yields that the only solution for s(t) is: $$s(t)=t$$
from BC(3) we get: $U(s(t),t) = e^{t-t} + c_{3} = 0 \rightarrow c_{3} = -1$,
Finally we get an exact solution for the temperature distribution:
$$U(x,t) = e^{t-x}-1$$, $$s(t) = t$$
and this satisfies the initial condition IC(4) that s(t=0)=0!!
And now you see why we chose the heat input function to be a time-dependent exponential: if you don't believe me that the analytical solution would be much nastier had you chosen, say, a constant heat input, give SOV a try and let me know how it goes! Let's graph this solution and see if it makes sense.
End of explanation
"""
def VGM(nt, N, U0, dx, s, dt):
Uo=0 #Initial Temperature input
U0 = numpy.ones(N)*(Uo)
s0=0.02 #Initial Interface Position
#(cannot choose zero or our expressions would blow up!)
s=numpy.ones(nt)*(s0)
dx=numpy.ones(nt)*(s[1]/N)
sdot=(s[1]-s[0])/dt
s[1]= s[0] - (dt/(2*dx[0]))*(3*U0[N-1]-4*U0[N-2]+U0[N-3])
dx[0]=(s[1]/N)
for m in range(0, nt-1):
for i in range(0, N-1):
#LHS BC (x=0, i=0)
if(i==0):U0[i]= (1-2*(dt/dx[m]**2))*U0[i]+2*(dt/dx[m]**2)*U0[i+1] + (2*(dt/dx[m])-dt*(dx[m]*i)*sdot/s[m])*numpy.exp(m*dt)
#Governing Equation (B)
else: U0[i]= U0[i]+ ((dt*(dx[m]*i)*sdot)/(2*dx[m]*s[m]))*(U0[i+1]-U0[i-1]) + (dt/(dx[m]**2))*(U0[i+1]-2*U0[i]+U0[i-1])
#RHS BC (x=L, L=dx{m*N})
U0[N-1]=0
s[m+1]= s[m] - (dt/(2*dx[m]))*(3*U0[N-1]-4*U0[N-2]+U0[N-3]) #Heat Balance Equation, (E)
sdot=(s[m+1]-s[m])/dt #Updating Speed of Interface
dx[m+1]=(s[m+1]/N) #Updating dx for each time step m
if (U0[i]>0):
Location=s[m]
return U0, s, dx
# First we set up some new initial parameters.These values differ from our first example since we are now dealing with
# a new non-dimensionalized heat equation, and new governing equations.
#1.0 Numerical Solution Parameters:
dt=2.0e-6 # Size of our Time-step, Set constant, Chosen in accordance with Caldwell and Savovic [5]
nt=500000 # Number of time steps, chosen so that elapsed solution will determine heat diffusion after
# (nt*dt= 1.0 seconds) Just like in our Analytical Solution.
N = 10 # Number of spatial grid points, Chosen, for now, in accordance with Caldwell and Savovic [5]
# Placeholder arrays for the VGM() call below (added so the call runs as written);
# VGM() re-initializes U0, s and the grid internally, so their starting values do not matter.
U0 = numpy.zeros(N)
s = numpy.zeros(nt)
U_VGM, s_VGM, dx_VGM = VGM(nt, N, U0, dx, s, dt)
print('Initial Position of the interface (So):', s_VGM[0])
print('Initial X_step(dx):', dx_VGM[0])
print('Time Step (dt):', dt)
"""
Explanation: Does the solution look right? YES! Remember, this temperature profile is "Dimensionless" so we can't compare it to our previous example (not only is the diffusivity constant vastly different, but we are looking 1 second into the diffusion vs 1 hr!). Also it accounts ONLY for the temperature distribution in the liquid, assuming an ever increasing domain due to an ever expanding RHS boundary. Not only did we expect temperature to be highest at the input side, but our moving boundary interface, s(t), moves to the right with time and always hits our melting temperature ($0^{\circ} c$) at x=1, just as one would expect.
Numerical Solution: The Variable Grid Method
What makes this problem unique is that we must now track the time-dependent moving boundary. Now, for previous numerical analysis in a 1D domain, you had a constant number of spatial intervals nx (or N) between your two boundaries x=0, and x=L. Thus your spatial grid steps dx were constant and defined as $dx = L/(nx-1)$. But now you have one fixed boundary and one moving boundary, and your domain increases as time passes. [1]
One of the key tenets of the Variable Grid Method for solving the Stefan problem is that you keep the NUMBER of grid points (N) fixed, thus your grid SIZE (dx) will increase as the domain increases. Your grid size now varies depending on the location of the interface front, dx = s(t)/N.
$$dx = \frac{L}{N-1} \longrightarrow dx = \frac{s(t)}{N}$$
Now, while one might be tempted to view the figure above as the new "Variable Grid" stencil, one must remember that the "Y"-axis is time, and therefore as you move up you are moving forward in time, therefore a more accurate depiction of the FTCS method in stencil form for the Variable Grid method would be:
and so it becomes clear that our spatial step ($dx$) will be dependent on our time step ($m$)!!
Derivation of the new Governing Equation
Now let's set up the new governing equations to be discretized for our code. We know that the 1D Diffusion equation must be valid for all points on our spatial grid so we can re-write the LHS of equation:
$$\frac{\partial U}{\partial t} = \frac{\partial^{2} U}{\partial x^{2}}$$
to be $$ \frac{\partial U_{i}}{\partial t} = \frac{\partial U_{@t}}{\partial x}\frac{dx_{i}}{dt} + \frac{\partial U_{@x}}{\partial t}$$
we can use the expression: $$\frac{dx_{i}}{dt} = \frac{x_{i}}{s(t)}\frac{ds}{dt}$$ to track the movement of the node $i$.
Substituting these into the diffusion equation (and dropping the i, t, and x indices since they are constant) we obtain a new governing equation for diffusion:
$$\frac{\partial U}{\partial t} = \frac{x_{i}}{s}\frac{ds}{dt}\frac{\partial U}{\partial x} + \frac{\partial^{2}U}{\partial x^{2}}$$
This is subject to the boundary and initial conditions (BC1-BC3, IC4) as stated for the 1D dimensionless Stefan problem above.
Discretization
And now we seek to discretize the new governing equations. For this code we will implement an explicit, FTCS scheme, with parameters taking a Taylor expansion centered about the node ($x_{i}^{m}$), and time ($t_{m}$) just like before. In the above equations we discretized the time derivatives of U using forward time, and the spatial derivatives using centered space. We re-write ds/dt as $\dot{s}$ and leave it a variable for now. (EXERCISE: See if you can discretize this equation from memory!!)
Taylor Expand and Discretize: (moving from left to right)
$\frac{\partial U}{\partial t} \longrightarrow \frac{U_{i}^{m+1}-U_{i}^{m}}{\Delta t}$,
$\frac{x_{i}}{s} \longrightarrow \frac{x_{i}^{m}}{s_{m}}$,
$\frac{ds}{dt} \longrightarrow \dot{s_{m}}$,
$\frac{\partial U }{\partial x} \longrightarrow \frac{U_{i+1}^{m}-U_{i-1}^{m}}{2\Delta x^{m}}$,
$\frac{\partial^{2}U}{\partial x^{2}} \longrightarrow \frac{U_{i+1}^{m}-2U_{i}^{m}+U_{i-1}^{m}}{(\Delta x^{m})^{2}}$
And now we substitute, rearrange and solve for $U_{i}^{m+1}$ to get:
$U_{i}^{m+1}=U_{i}^{m}+\frac{\Delta t x_{i}^{m} \dot{s_{m}}}{2 \Delta x^{m}s_{m}}(U_{i+1}^{m}-U_{i-1}^{m}) + \frac{\Delta t}{(\Delta x^{m})^{2}}(U_{i+1}^{m}-2U_{i}^{m}+U_{i-1}^{m})$
Great!! We're almost ready to start coding, but before we begin, do you notice a problem with this expression? What about if i=0? Plug it in and you will see that we have expressions of the form $U_{-1}^{m}$ in both right hand terms, but that can't be right, looking at our stencil we see that $i=-1$ is off our grid! This is where the boundary conditions come in!
Generate discretization expressions for the boundary conditions at x=0 (LHS) and x=s(t) (RHS). For the RHS and the temperature gradient across the moving boundary interface we will use a three-term backward difference scheme:
LHS: $\frac{\partial U(x=0, t)}{\partial x} = -e^{t} \longrightarrow \frac{U_{i+1}^{m}(0,t)-U_{i-1}^{m}(0,t)}{2\Delta x^{m}} = e^{t_{m}}$, and
RHS: $\frac{\partial U(x=s(t), t)}{\partial x} = -\frac{d s(t)}{d t} \longrightarrow \frac{\partial U(x=s(t),t)}{\partial x} = \frac{3U_{N}^{m}-4U_{N-1}^{m}+U_{N-2}^{m}}{2\Delta x^{m}} $
from our LHS boundary condition expression, if we set $i=0$ and solve for $U_{-1}^{m}$ we get: $$U_{-1}^{m}=U_{1}^{m} + 2\Delta x^{m} e^{t_{m}}$$
We can now combine this expression and substitute into the governing equation to get expressions for diffusion at $i=0, i=1$ to $(N-1)$, and $i=N$:
$U_{i}^{m+1}=(1-2\frac{\Delta t}{(\Delta x^{m})^{2}})U_{i}^{m} + 2\frac{\Delta t}{(\Delta x^{m})^{2}}U_{i+1}^{m} + (2\frac{\Delta t}{\Delta x^{m}}-\frac{\Delta t x_{i}^{m}\dot{s_{m}}}{s_{m}})e^{t_{m}}$
$U_{i}^{m+1}=U_{i}^{m}+\frac{\Delta t x_{i}^{m} \dot{s_{m}}}{2 \Delta x^{m}s_{m}}(U_{i+1}^{m}-U_{i-1}^{m}) + \frac{\Delta t}{(\Delta x^{m})^{2}}(U_{i+1}^{m}-2U_{i}^{m}+U_{i-1}^{m})$
$U_{i}^{m}=0$, $i=N$
Phew!! Now that is one impressive looking set of discretization expressions. But, sorry, we aren't done yet..... In the above expressions what are we supposed to do about $\dot{s_{m}}$ and $s_{m}$?
Well luckily, that is where the Stefan condition comes in, remember:
$\dot{s_{m}} = \frac{ds}{dt}= -\frac{\partial U_{@x=s}}{\partial x} = -\frac{3U_{N}^{m}-4U_{N-1}^{m}+U_{N-2}^{m}}{2\Delta x^{m}} $, and as for $s_{m}$ that is just a Heat Balance equation:
$s_{m+1} = s_{m}+\dot{s_{m}}\Delta t = s_{m} - \frac{\Delta t}{2\Delta x^{m}}(3U_{N}^{m}-4U_{N-1}^{m}+U_{N-2}^{m})$, where $s_{0}=0$
To make our lives easier, instead of inserting these expressions for $\dot{s_{m}}$ and $s_{m}$ into our governing equations, lets keep them as coupled expressions to be calculated during the time loops and calculated at every grid point $i$.
lastly, lets not forget that the updated interface location $s_{m+1}$ and grid size $\Delta x^{m}$ are calculated at every timestep $m$, and have the relationship:
$$\Delta x^{m+1} = \frac{s_{m+1}}{N}$$
Don't worry, we are almost ready to code, I promise! There is just one more expression we need to consider, and that relates to Stability! Following the calculations of Caldwell and Savovic [5], for now, we will use something simple in order to limit the size of the timestep ($\Delta t$), that is: $$\frac{\Delta t}{(\Delta x^{m})^{2}} \leq 1$$, or
$$\Delta t \leq (\Delta x^{m})^{2}$$
Let's Code!!
To summarize our governing equations we need to code these 7 coupled equations:
(A) $u_{i}^{m+1}=(1-2\frac{\Delta t}{(\Delta x^{m})^{2}})u_{i}^{m} + 2\frac{\Delta t}{(\Delta x^{m})^{2}}u_{i+1}^{m} + (2\frac{\Delta t}{\Delta x^{m}}-\frac{\Delta t x_{i}^{m}\dot{s_{m}}}{s_{m}})e^{t_{m}}$
(B) $u_{i}^{m+1}=u_{i}^{m}+\frac{\Delta t x_{i}^{m} \dot{s_{m}}}{2 \Delta x^{m}s_{m}}(u_{i+1}^{m}-u_{i-1}^{m}) + \frac{\Delta t}{(\Delta x^{m})^{2}}(u_{i+1}^{m}-2u_{i}^{m}+u_{i-1}^{m})$
(C) $u_{i}^{m}=0$, $i=N$
(D) $\dot{s_{m}} = \frac{ds}{dt}= -\frac{\partial u_{@x=s}}{\partial x} = -\frac{3u_{N}^{m}-4u_{N-1}^{m}+u_{N-2}^{m}}{2\Delta x^{m}} $
(E) $s_{m+1} = s_{m}+\dot{s_{m}}\Delta t = s_{m} - \frac{\Delta t}{2\Delta x^{m}}(3u_{N}^{m}-4u_{N-1}^{m}+u_{N-2}^{m}), s_{0}=0$
(F) $\Delta x^{m+1} = \frac{s_{m+1}}{N}$
(G) $\Delta t \leq (\Delta x^{m})^{2}$
End of explanation
"""
XX = numpy.linspace(0, 1, N)
Max_tempVGM=max(U_VGM)
pyplot.plot(XX,U_VGM, color = '#654322', ls = '--', lw =4)
pyplot.xlabel('Length of Domain')
pyplot.ylabel('Temperature')
pyplot.ylim(-.5, 2)
pyplot.xlim(0, 1.0)
print('This is our VGM temperature profile after:', nt*dt, 'seconds')
print('The temperature at our LHS boundary is:', Max_tempVGM, 'degrees')
print('Final position of our interface is:', s_VGM[-1])
print('The final speed of our interface is:', (s_VGM[-1]-s_VGM[-2])/dt)
print('Our grid spacing (dx) after', nt*dt, 'seconds is:', dx_VGM[-1] )
"""
Explanation: Results and Discussions
Ok! Now let's see what we get and, more importantly, if it makes any sense:
End of explanation
"""
pyplot.figure(figsize=(12,10))
pyplot.plot(XX,U_VGM, color = '#654322', ls = '--', lw =4, label = 'Variable Grid Method')
pyplot.plot(x, ExactS, color = '#003366', ls = '--', lw =1, label='Analytical Solution')
pyplot.xlabel('Length of Domain')
pyplot.ylabel('Temperature')
pyplot.ylim(0, 1.8)
pyplot.xlim(0, 1.0)
pyplot.legend();
print('Max error (@x=0) is:',abs(Max_tempVGM - Max_tempA)*(100/Max_tempA),'percent')
"""
Explanation: Hey now! That looks pretty good! Let's compare this result to our earlier Analytical (exact) solution to the 1D Dimensionless problem, for 1 second into the diffusion:
End of explanation
"""
Time=numpy.linspace(0,dt*nt,nt)
pyplot.figure(figsize=(10,6))
pyplot.plot(Time,dx_VGM, color = '#660033', ls = '--', lw =2, label='Spatial Grid Size')
pyplot.plot(Time,s_VGM,color = '#222222', ls = '-', lw =4, label='Interface Position, s(t)')
pyplot.xlabel('Time (Seconds)')
pyplot.ylabel('Length')
pyplot.legend();
"""
Explanation: Under 2% error is pretty good! What else should we verify? How about the change in the interface location s(t) and the size of our spatial grid (dx) over time?
End of explanation
"""
dt=2.0e-6
nt=500000
N = 14
U_VGM14, s_VGM14, dx_VGM14 = VGM(nt, N, U0, dx, s, dt)
XX = numpy.linspace(0, 1, N)
Max_tempVGM14=max(U_VGM14)
pyplot.plot(XX,U_VGM14, color = '#654322', ls = '--', lw =4, label = 'VGM')
pyplot.plot(x, ExactS, color = '#003366', ls = '--', lw =1, label='Analytical')
pyplot.xlabel('Length of Domain')
pyplot.ylabel('Temperature')
pyplot.ylim(-.5, 2)
pyplot.xlim(0, 1.0)
pyplot.legend();
print('Time Step dt is:', dt)
print('Initial spatial step, dx[0]^2 is:', dx_VGM14[0]**2 )
print('Max error (@x=0) is:',abs(Max_tempVGM14-Max_tempA)*(100/Max_tempA),'Percent')
"""
Explanation: and so we see that both the spatial grid size, dx, and the interface location, s(t), increase with time over the course of 1 second. This is exactly what we would expect! In fact, remember back to our analytical solution, we solved the interface function to be: $s(t)=t$, which is exactly the graph that our numerical solution has given us!
Well, then! That's a wrap! Looks like everything is all accounted for, right?. . . . . . . . . . . . . .well, not exactly. There are still quite a few questions left unanswered, and now we must discuss the limitations in the Variable Grid Method of numerical implementation.
Limitations of the Variable Grid Method
A number of questions should be coming to mind, among those are:
1) Why do we only analyze for 1 second of diffusion?
2) Why is your VGM timestep (dt) so small at 2.0e-6?
3) Why is your number of spatial grid points (N) only 10?
These are all VERY good questions, do you already know the answer? Here is a hint: it all has to do with stability!
You see, we have a relatively benign stability statement: $\Delta t \leq (\Delta x^{m})^{2}$, but still, it is a crucial aspect of our numerical analysis. This statement limits the size of our timestep, dt, and is the key reason why we chose our initial, constant dt to be 2.0e-6 (let's also not forget that this dt and N were also chosen by Caldwell and Savovic [5]). Remember, because we are using a Central Difference scheme, our stability criterion essentially comes from the CFL condition:
$$\frac{\Delta t}{(\Delta x^{m})^{2}} \leq \sigma = 1$$.
The parameters dt and dx are not just arbitrarily chosen. They determine the speed of your numerical solution. At all times you need the speed at which your solution progresses to be faster than the speed at which the problem (in this case, thermal diffusion) propagates. After all, how can you calculate the solution numerically when the solution is faster than the numerical analysis?!
Here is a pertinent question for you: Which do you determine first: dt, nt, N, or L? Does it matter which one you chose first?
The answer is normally yes: we choose N and L, which gives us dx; we then use a stability statement with dx (and $\alpha$) to limit dt; and we then choose our nt to determine the elapsed time of our solution in the form of (nt*dt) seconds. Easy, right?
Well this is where we get to limitation #1 with the Variable Grid Method:
Because the end of our domain "L" is in actuality the position of our interface, s(t), and our interface at time t=0 is approximately at the origin (x=0.02), then if we want a constant number of spatial grids N (say N=10), our initial dx is extremely small! Remember, governing equation (F) says: $\Delta x^{m+1} = \frac{s_{m+1}}{N}$, so at t=0 (m=0) s[0] is 0.02, meaning our initial dx, dx[0], is 0.002. Plug this back into our stability statement and you see that dt must be smaller than or equal to 4e-6! That is a VERY harsh criterion for a timestep dt. In essence, with SUCH a small time step, we need hundreds of thousands of time loops, nt, to get just 1 second of data. Now you see why nt is chosen to be 500,000. If I had wanted more time elapsed, we would have needed iterations in the millions. (This is also why the analytical solution was calculated only for 1 second; we wanted to be able to compare results at the end!)
Now limitation #2 of the Variable Grid Method: Because the interface starts at the origin, the initial spatial step is very small, and forces a very very small timestep, dt. Thus a very very large number of iterations is needed to get into the "Seconds" scale for simulations, this is a slow numerical process...
Not only this, but we have limitation #3 of the Variable Grid Method: If we want a longer dimensional domain, we need to choose a larger N. This will give us a final domain of $L=N*dx_{final}$, but if N is larger, then going back to our governing equation (F), $\Delta x^{m+1} = \frac{s_{m+1}}{N}$ this means that dx[0] is even smaller, which continues to force a much smaller dt. Thus for the Variable Grid Method, if you want a large domain, you need an even smaller timestep, which forces more time in your numerical calculation.
Let's test this out for ourselves, lets redo the VGM calculations but increase N to say, 14:
(The cell below is in Raw-NB Convert since we didn't want it to slow you down the first time you ran this program. Notice the N is now 14, go ahead and run this below cell and see what happens)
End of explanation
"""
dt=2.0e-6
nt=500000
N = 15
U_VGM15, s_VGM15, dx_VGM15 = VGM(nt, N, U0, dx, s, dt)
XX = numpy.linspace(0, 1, N)
Max_tempVGM15=max(U_VGM15)
pyplot.plot(XX,U_VGM15, color = '#654322', ls = '--', lw =4, label = 'VGM')
pyplot.plot(x, ExactS, color = '#003366', ls = '--', lw =1, label='Analytical')
pyplot.xlabel('Length of Domain')
pyplot.ylabel('Temperature')
pyplot.ylim(-.5, 2)
pyplot.xlim(0, 1.0)
pyplot.legend();
print('Time Step dt is:', dt)
print('Initial spatial step, dx[0]^2 is:', dx_VGM15[0]**2 )
print('Max error (@x=0) is:',abs(Max_tempVGM15-Max_tempA)*(100/Max_tempA),'Percent')
"""
Explanation: As you can see, our stability criterion is still met, since $\Delta t \leq (\Delta x_{0})^{2}$. In fact, with a 40% larger N we see that we have less error as well! But what if we set N=15?
(The cell below is in Raw-NB Convert since we didn't want it to slow you down the first time you ran this program. Notice the N is now 15, go ahead and run this below cell and see what happens THIS time)
End of explanation
"""
# Execute this cell to load the notebook's style sheet, then ignore it
from IPython.core.display import HTML
css_file ='numericalmoocstyle.css'
HTML(open(css_file, "r").read())
"""
Explanation: Wow! That blew up! Just as we expected it would, now that with N=15 our time step is larger than the square of our initial spatial step!
Final Thoughts
And now we know that the Stefan Problem, a boundary value PDE with a time-dependent moving boundary, CAN be solved. We have just demonstrated the solution for a 1D, dimensionless Stefan Problem with a specifically chosen (time-dependent) input heat flux in order to give us a simplified exact solution to compare with. You have also seen an implementation of the Variable Grid Method, one of many ways in which one can numerically simulate the Stefan problem. But we have also seen the limitations, namely the small time-steps ($dt$), the smaller numerical domain ($N\cdot dx_{f}$), and the large number of iterations ($nt$) needed for just a small amount of analytical time (t=$nt \cdot dt$ seconds). Perhaps when it comes time for you to model the melting of the polar ice caps, you'll choose a more "expedient" method? Just make sure you get the answer BEFORE the caps melt...
EXERCISE #1: Can you implement the Variable Grid Method discretization governing equations to give us a faster solution? Can it handle millions of time iterations without putting us to sleep?
EXERCISE #2: Can you write a stability condition statement that will maximize our time-step (dt) for a given N and still keep it constant?
References
Kutluay S., The numerical solution of one-phase classical Stefan problem, Journal of Computational and Applied Mathematics 81 (1997) 135-144
Javierre, E., A Comparison of Numeical Models for one-dimensional Stefan problems, Journal of Compuational and Applied Mathematics 192 (2006) 445-459
Vuik, C., "Some historical notes about the Stefan problem". Nieuw Archief voor Wiskunde, 4e serie 11 (2): 157-167 (1993)
Crowley, A. B., Numerical Solution of Stefan Problems, Brunel University, Department of Mathematics, TR/69 December 1976
Caldwell, J. and Savovic, S., Numerical Solution of Stefan Problem By Variable Space Grid Method and Boundary Immobilisation Method, Jour. of Mathematical Sciences, Vol. 13 No.1 (2002) 67-79.
End of explanation
"""
|
geoscixyz/gpgLabs | notebooks/seismic/Seis_Refraction.ipynb | mit | plotWavelet();
"""
Explanation: Interpretation and data acquisition strategies of seismic refraction data
In the <a href="https://www.3ptscience.com/app/SeismicRefraction">3pt Science app</a>, you explored the expected arrival times for refractions and reflections from a two-layer over a half-space model.
In this notebook, we will use synthetic seismic data to examine the impact of survey parameters on the expected seismic data.
Source
In an ideal case, the source wavelet would be an impulse (ie. an instantaneous spike). However, in reality, the source energy is spread in space and in time (see the <a href="http://gpg.geosci.xyz/content/seismic/wave_basics.html#waves-and-rays">GPG: Waves and Rays</a>). The source wavelet used for these examples is shown below.
End of explanation
"""
fig, ax = plt.subplots(1, 3, figsize=(15,6))
ax[0].set_title('Expected Arrival Times')
ax[1].set_title('Clean Data')
ax[2].set_title('Noisy Data')
ax[0]=viewTXdiagram(x0=1., dx=8, v1=400., v2=1000., v3=1500., z1=5., z2=15., ax=ax[0])
ax[1]=plotWiggleTX(x0=1., dx=8, v1=400., v2=1000., v3=1500., z1=5., z2=15., ax=ax[1])
ax[2]=plotWiggleTX(x0=1., dx=8, v1=400., v2=1000., v3=1500., z1=5., z2=15., ax=ax[2], noise=True)
plt.show()
"""
Explanation: Data
Below, we show 3 plots:
- left: expected arrival times for the direct, refracted waves and reflection from the first layer
- center: clean data - the wavelet arriving at the expected arrival time. Each line represents what would be recorded by an ideal geophone.
- right: noisy data - clean data + random noise.
The model used is the same as is in the lab write-up:
- v1 = 400 m/s
- v2 = 1000 m/s
- v3 = 1500 m/s
- z1 = 5m (depth to layer 1)
- z2 = 15m (depth to layer 2)
End of explanation
"""
makeinteractSeisRefracSurvey()
"""
Explanation: Setup for the seismic refraction survey
Consider a shot gather for a seismic refraction survey, which means we have one shot (source) and multiple receivers (12). The shot location is fixed at x=0. There are two survey parameters:
x0: offset between shot and the first geophone
dx: spacing between two consecutive geophones
In the widget below you can alter x0 or dx to change your survey setup. Run the next cell then try to change x0 and dx in the cell below that. Note that the next two cells are designed to help you visualize the survey layout. The x0 and dx parameter adjustment sliders here are not linked to the widget at the end of this notebook.
End of explanation
"""
makeinteractTXwigglediagram()
"""
Explanation: Interpretation of seismic refraction data
Assume that you have seismic refraction data. The structure of the earth is unknown and you may want to obtain useful information about the subsurface. We will assume that the subsurface in the survey area has a three-layer structure and that the velocities increase with depth.
Thus, there can be five unknowns:
v1: velocity of the first layer (m/s)
v2: velocity of the second layer (m/s)
v3: velocity of the third layer (m/s)
z1: depth of the first layer (m)
z2: depth of the second layer (m)
Based on the above information, we may expect to have up to four arrivals at a geophone, related to
Direct
Reflected: interface 1
Refraction: interface 1
Refraction: interface 2
The widget below will allow you to estimate the layer depths and velocities. The parameters for the widget are:
x0: offset between shot and the first geophone
dx: spacing between two consecutive geophones
Fit: checking this activates the fitting function (when checked, a red line will show up)
tI: intercept time for a line function (s)
v: inverse slope of the line function (m/s; this can be the velocity of either the direct or the critically refracted wave)
Run the widget below and find useful subsurface information!
End of explanation
"""
|
dougsweetser/ipq | q_notebooks/billiard_calculations.ipynb | apache-2.0 | %%capture
import Q_tools as qt;
Aq1=qt.Q8([1470000000,0,1.1421,0,1.4220,0,0,0])
Aq2=qt.Q8([1580000000,0,4.2966,0,0,0.3643,0,0])
q_scale = qt.Q8([2.2119,0,0,0,0,0,0,0], qtype="S")
Aq1s=Aq1.product(q_scale)
Aq2s=Aq2.product(q_scale)
print(Aq1s)
print(Aq2s)
"""
Explanation: Table of Contents
Observing Billiards Using Space-time Numbers
Representations of Numbers Versus Coordinate Transformation of Vectors
Observer B Boosted
Observer C in a Gravity Field in Theory
Observer C in a Gravity Field in Practice
Conclusions
Observing Billiards Using Space-time Numbers
The goal of this iPython notebook is to become familiar with using space-time numbers to describe events. This will be done for three different observers. The first case will cover the fact that the observers happen to be at different locations. How does one handle different ways to represent the numbers used to characterize events? One observer will be set in constant motion. We will work out the equivalence classes that cover observers in motion. The final case will look at equivalence classes that may happen due to gravity.
Here is an animation of a mini billiard shot.
The cue ball hits the 8 ball, then into the corner pocket it goes. Observer A is yellow, our proverbial reference observer. I promise to do nothing with her ever. Observer B in pink is at a slightly different location, but still watching from the tabletop. Eventually, he will be set into constant motion. We can see about what Observers agree and disagree about. Observer C is in purple and at the end of a pipe cleaner above the tabletop. His observations will be ever-so-slightly different from Observer A due to the effects of gravity and that will be investigated.
A number of simplifications will be done for this analysis. All but two frames will be used.
Get rid of the green felt. In its place, put some graph paper. Add a few markers to make any measurement more precise.
The image was then printed out so a precise dial caliper could be used to make measurements. Notice that observer A is ~2.5 squares to the left and 3+ squares below the 8 ball in the first frame.
Can the time be measured precisely? In this case, I will use the frames of the gif animation as a proxy for measuring time. I used the command "convert billiard_video.gif Frames/billiards_1%02d.png" to make individual frames from the gif. The two frames are 147 and 158. The speed of the fastest cue break is over 30 miles per hour, or as a dimensionless relativistic speed is 4.5x10<sup>-8</sup>. If small numbers are used for the differences in space, then the difference between times should be scaled to be in the ten billion range. So that is what I did: call the first time 1,470,000,000 and the second one 1,580,000,000. The ball is then moving around 20mph. I could have found out the frames per second, and calculated the correct speed from there. The three observers do not need to coordinate to figure out the same origin in time, so I chose B and C to start a billion and two billion earlier respectively.
This explains how I got numbers related to an 8 ball moving on a table. Now to start calculating with the goal of getting the square. I have written a test of tools called "Q_tool_devo" that allow for numerical manipulations of something I call "space-time numbers". Essentially they are quaternions, a 4D division algebra, written in a funny way. Instead of writing a real number like 5.0, a doublet of values is used, say (6.0, 1.0) which can then be "reduced" to (5.0, 0) and is thus equivalent to the standard real number 5.0. To create a space-time number, feed it eight numbers like so:
End of explanation
"""
Adq=Aq2s.dif(Aq1s).reduce()
print(Aq2s.dif(Aq1s))
print(Adq)
"""
Explanation: When scaled, the expected values are seen, the x value at around 2.5, the y value above 3 and zero for z. Event 2 is 9.5 and 0.8 $j_3$ meaning in real numbers, -0.8. There is also the qtype "QxS", a way of keeping track of what operations have been done to a space-time number. After all, all space-time numbers look the same. Keeping the qtype around helps avoid combining differing qtypes.
Calculate the delta quaternion between events one and two:
End of explanation
"""
Adq2=Adq.square()
print(Adq2)
print(Adq2.reduce())
"""
Explanation: The difference is nearly 7 in the x<sub>1</sub> direction, and 4 in the j<sub>3</sub>, which if real numbers were being used would be the positive x and negative y. The qtype "QxQ-QxQ.reduce" shows that both initial components were multiplied by a scalar value, the difference taken, then reduced to its real number equivalent form.
Distances are found using a square.
End of explanation
"""
Bq1=qt.Q8([2470000000,0,0.8869,0,1.8700,0,0,0])
Bq2=qt.Q8([2580000000,0,3.9481,0,0,0.1064,0,0])
Bq1s=Bq1.product(q_scale)
Bq2s=Bq2.product(q_scale)
Bdq=Bq2s.dif(Bq1s).reduce()
Cq1=qt.Q8([3470000000,0,1.1421,0,1.4220,0,1.3256,0])
Cq2=qt.Q8([3580000000,0,4.2966,0,0,0.3643,1.3256,0])
Cq1s=Cq1.product(q_scale)
Cq2s=Cq2.product(q_scale)
Cdq=Cq2s.dif(Cq1s).reduce()
print(Bq1s)
print(Bq2s)
print(Bdq)
print(Cq1s)
print(Cq2s)
print(Cdq)
"""
Explanation: This is a case where the non-reduced form is more convenient. The time squared is about 60 quadrillion while the change in space squared is slightly over 64. Classical physics is full of such imbalances and the non-reduced form helps maintain the separation.
It is my thesis that all the numbers in the square provide important information for comparing any pair of observers. Here are the input numbers for observers B and C:
End of explanation
"""
Bdq2=Bq1s.dif(Bq2s).reduce().square()
Cdq2=Cq1s.dif(Cq2s).reduce().square()
print(Adq2)
print(Bdq2)
print(Cdq2)
"""
Explanation: No set of input numbers for two observers is ever the same. Two observers must be located in either a different place in time, a different place in space, or both.
End of explanation
"""
(64.96 - 64.30)/64.60
"""
Explanation: We are comparing apples to apples since the qtypes, "QxS-QxS.reduce.sq", are the same. The first of the 8 terms, the I<sub>0</sub>, is exactly the same because the delta time values were exactly the same. The I<sub>2</sub> terms for the first and third observers are exactly the same because their spatial delta values were identical even though they had different z values. A different physical measurement was made for Observer B. The match is pretty good:
End of explanation
"""
BRotq1=qt.Q8([2470000000,0,0.519,0,1.9440,0,0,0])
BRotq2=qt.Q8([2580000000,0,3.9114,0,0.5492,0,0,0])
BRotdq2=BRotq1.product(q_scale).dif(BRotq2.product(q_scale)).reduce().square()
print(BRotdq2)
print(Bdq2)
"""
Explanation: The error is about a percent. So while I reported 4 significant digits, only the first two can be trusted.
The next experiment involved rotating the graph paper for Observer B. This should not change much other than the numbers that get plugged into the interval calculation.
End of explanation
"""
print(Adq2.norm_squared_of_vector().reduce())
print(Bdq2.norm_squared_of_vector().reduce())
print(Cdq2.norm_squared_of_vector().reduce())
print(BRotdq2.norm_squared_of_vector().reduce())
"""
Explanation: No surprise here: the graph paper will make a difference in the numbers used, but the distance is the same up to the errors made in the measuring process.
The Space-times-time term
What happens with the space-times-time term for these observers that have no relative velocity to each other? The space part always points in a different direction since the spatial origin is in a different location. If we consider the norm squared of the space-times-time term, that would be $dt^2(dx^2 + dy^2 + dz^2)$. This is something observers with different perspectives will agree upon:
End of explanation
"""
import math
def cyl_2_cart(q1):
"""Convert a measurment made with cylindrical coordinates in angles to Cartesian cooridantes."""
t = q1.dt.p - q1.dt.n
r = q1.dx.p - q1.dx.n
a = q1.dy.p - q1.dy.n
h = q1.dz.p - q1.dz.n
x = r * math.cos(a * math.pi / 180)
y = r * math.sin(a * math.pi / 180)
return qt.Q8([t, x, y, h])
"""
Explanation: These are the same within the margin of error of the measurements.
Representations of Numbers Versus Coordinate Transformation of Vectors
This notebook is focused on space-time numbers that can be added, subtracted, multiplied, and divided. Formally, they are rank 0 tensors. Yet because space-time numbers have four slots to fill, it is quite easy to mistakenly view them as a four-dimensional vector space over the mathematical field of real numbers with four basis vectors. Different representations of numbers change the values of the numbers that get used, but not their meaning. Let's see this in action for a cylindrical representation of a number. Instead of $x$ and $y$, one uses $R \cos(\alpha)$ and $R \sin(\alpha)$, with no change for $z$.
What needs to be done with the measurements made in cylindrical coordinates is to convert them to Cartesian, then proceed with the same calculations.
End of explanation
"""
BPolarq1=cyl_2_cart(qt.Q8([2470000000,0,2.0215,0, 68.0,0,0,0]))
BPolarq2=cyl_2_cart(qt.Q8([2580000000,0,3.9414,0,1.2,0,0,0]))
BPolardq2=BPolarq1.product(q_scale).dif(BPolarq2.product(q_scale)).reduce().square()
print(BPolardq2)
print(Bdq2)
"""
Explanation: For polar coordinates, measure directly the distance between the origin and the billiard ball. Then determine an angle. This constitutes a different approach to making a measurement.
End of explanation
"""
vx = 2/Bdq.dt.p
print(vx)
"""
Explanation: Yet the result for the interval is the same: the positive time squared term is exactly the same since those numbers were not changed, and the negative numbers for the space terms were only different to the error in measurement.
Observer B Boosted
Give Observer B a Lorenz boost. All that is needed is to relocate Observer B in the second frame like so:
To make the math simpler, presume all the motion is along $x$, not the slightest wiggle along $y$ or $z$. Constant motion between the frames shown is also presumed.
What velocity is involved? That would be the change in space, 2, over the change in time, a big number:
End of explanation
"""
Bdq_boosted = Bdq.boost(beta_x = vx)
print(Bdq_boosted)
print(Bdq_boosted.reduce())
print(Bdq)
print(Bdq_boosted.dif(Bdq).reduce())
"""
Explanation: This feels about right. The speed of Observer B is about that of a cue ball.
Boost the delta by this velocity.
End of explanation
"""
print(Bdq_boosted.square())
print(Bdq.square())
"""
Explanation: The last line indicates there is no difference between the boosted values of $y$ and $z$, as expected. Both the change in time and in space are negative. Moving in unison is a quality of simple boosts. The change in time is tiny. The change in space is almost 4, but not quite due to the work of the $\gamma$ factor that altered the time measurement.
Compare the squares of the boosted with the non-boosted Observer B
End of explanation
"""
print(Bdq_boosted.square().reduce())
print(Bdq.square().reduce())
"""
Explanation: Time and space are mixing together for the boosted frame. There are two huge numbers for $I_0$ and $I_2$ instead of a big number and about 65. Are they the same? Compare the reduced squares:
End of explanation
"""
qb = qt.EQ(Bdq, Bdq_boosted)
print(qb)
"""
Explanation: The reduced intervals are the same. The space-times-time terms are not. The difference between the space-times-time terms can be used to determine how the boosted Observer B is moving relative to Observer B (calculation not done here). Even without going into detail, the motion is only along x because that is the only term that changes.
Software was written to systematically look at equivalences classes for a pair of quaternions. Three types of comparisons are made: linear, squared, and the norm.
End of explanation
"""
qb.visualize()
"""
Explanation: There are 9 equivalence classes in all. Let's visualize them as a set of icons:
End of explanation
"""
from sympy import symbols
M = symbols('M')
(1/(1 - 2 * M)).series(M, 0, n=5)
"""
Explanation: The figures in gray are the locations, one for time and three for space. The colorful set with parabolas are the squares, the interval being purple and off-yellow, space-times-time in green. The norm is in pink.
For the gray set, the events from Observer B are being compared with a boosted Observer B for motion that is only along the $x$ direction. We thus expect the $y$ and $z$ values to be exact, as they are (down exact, and here exact because $z=0$). The value of $x$ is boosted, so both are right, but not the same value. But what about time? The report is an exact match. The software was written to say two values are equivalent if they agree to 10 significant digits; it is the 16th significant digit that differs here.
The time-like interval is the same for Observer B and the boosted one, so the equivalence class is time-like-exact as expected. These graphics are icons to represent the class, not a reflection of the numbers used. The space-times-time terms differ only along $tx$ due to the boost along $x$.
The norm is included for completeness of simple operations, but I don't understand it at this time. It is marked as exact due to the dominance of the time term.
Observer C in a Gravity Field in Theory
The video of the billiard balls shows there is a gravity field since the eight-ball drops into the pocket. Newton's law of gravity can be written as an interval:
$$d \tau^2 = \left(1 - 2\frac{G M}{c^2 R}\right) dt^2 - dR^2/c^2 $$
More precise measurements of weak field gravity adds a few more terms (essentially equation 40.1 of Misner, Thorne and Wheeler):
$$d \tau^2 = \left(1 - 2\frac{G M}{c^2 R} + 2 \left(\frac{G M}{c^2 R}\right)^2\right) dt^2 - \left(1 + 2\frac{G M}{c^2 R}\right) dR^2 /c^2 $$
When the mass $M$ goes to zero or the distance from the source gets large, the result is the interval expected in flat space-time.
The space-times-times equivalence class as gravity proposal stipulates that for a simple gravitational source mass (spherically symmetric, non-rotating, uncharged) the square of a delta quaternion produces a space-times-time that is the same for different observers no matter where they are in a gravitational field. This can be achieved by making the factor for time be the inverse of the one for space (below, a dimensionless M is a stand-in for $\frac{G M}{c^2 R}$).
End of explanation
"""
(1/(1 - 2 * M + 2 * M ** 2)).series(M, 0, n=3)
"""
Explanation: Even in the "classical realm", the space-times-time equivalence class as gravity proposal is different from Newtonian gravity. From my brief study of the rotation of thin disk galaxies, this term is not applied to such calculations. This now strikes me as odd. The Schwarzschild solution has this same term, the "first order in M/R", yet only the dt correction is used in practice. The rotation profile calculation is quite complex, needing elliptic integrals. An analytic solution like that would be altered by this well-known term. It will be interesting, in time, to explore whether the extra term has consequences.
Since we are analyzing the square, the delta quaternion uses the square roots of these two terms built from the dimensionless gravitational length:
$$ \begin{align} dq &= \left(\sqrt{1 - 2 \frac{G M}{c^2 R}} dt, \frac{1}{\sqrt{1 - 2 \frac{G M}{c^2 R}}} dR/c \right) \\ dq^2 &= \left( \left(1 - 2 \frac{G M}{c^2 R}\right) dt^2 - \left(1 + 2 \frac{G M}{c^2 R} + O(2)\right) dR^2/c^2, 2 ~dt ~dR/c \right) \\
&= \left( d\tau^2, 2 ~dt ~dR/c \right) \end{align} $$
Consistency with the weak-field gravity tests and the algebraic constraints of the equivalence class proposal requires six terms, not five:
End of explanation
"""
GMc2R_for_Observer_A = 6.67384e-11 * 5.9722e+24 / (299792458 ** 2 * 6371000)
GMc2R_for_Observer_C = 6.67384e-11 * 5.9722e+24 / (299792458 ** 2 * 6371000.1)
print(GMc2R_for_Observer_A)
print(GMc2R_for_Observer_C)
"""
Explanation: Here are the delta quaternion and its square in a gravity field that will be consistent with all weak field gravitational tests.
$$ \begin{align} dq &= \left(\sqrt{1 - 2 \frac{G M}{c^2 R} + 2 \left(\frac{G M}{c^2 R}\right)^2} dt, \frac{1}{\sqrt{1 - 2 \frac{G M}{c^2 R} + 2 \left(\frac{G M}{c^2 R}\right)^2}} dR/c \right) \\ dq^2 &= \left( \left(1 - 2 \frac{G M}{c^2 R} + 2 \left(\frac{G M}{c^2 R}\right)^2\right) dt^2 - \left(1 + 2 \frac{G M}{c^2 R} + 2 \left(\frac{G M}{c^2 R}\right)^2+O(3)\right) dR^2/c^2, 2 ~dt ~dR/c \right) \\
&= \left( d\tau^2, 2 ~dt ~dR/c \right) \end{align} $$
The second order term for $ dR^2 $ has consequences that are tricky to discuss. Notice that no mention has been made of metrics, field equations, or covariant and contravariant vectors. That is because numbers are tensors of rank 0 that are equipped with rules of multiplication and division. As discussed above, there are different representations of numbers, like a Cartesian representation, a cylindrical representation, and a spherical representation. My default is to use the Cartesian representation because I find it simplest to manage.
The most successful theory for gravity, general relativity, does use metrics, covariant and contravariant tensors, as well as connections that reveal how a metric changes in space-time. There are a great many technical choices in this process which have consequences. Einstein worked with a torsion-free connection that was metric compatible. One consequence is that dealing with fermions is an open puzzle. The process of getting to an interval is not simple. Twenty non-linear equations must be solved. This can be done analytically for only the simplest of cases. It is such a case, the Schwarzschild solution, that makes up most of the tests of general relativity (eq. 40.1 from MTW, written above in isotropic coordinates).
The reader is being asked to compare Einstein's apple of an interval to the first of four oranges. There is no overlap between the mechanics of the math, hence the apple versus orange. The forms of the expressions are the same: a Taylor series in a dimensionless gravitational length. Five of the coefficients of the Taylor series are identical. Those five coefficients have been tested in a wide variety of classical tests of weak gravitational fields.
The sixth term is not the same for the Taylor series expansion of the Schwarzschild solution in either isotropic or Schwarzschild coordinates. It is not reasonable to expect that the simple space-times-time equivalence constraint will solve the non-linear Einstein field equations.
The truncated series expansion will not be the final story. We could wait for experimentalists to determine 10 terms, but that is quite unreasonable (personal story: I spent ~$600 to go to an Eastern Gravity Meeting just to ask Prof. Clifford Will when we might get the terms for second-order Parameterized Post-Newtonian accuracy, and at the time, ~2005, he knew of no such planned experimental effort). Given that gravity is a harmonic phenomenon, that there are six terms that match, and that many other people have made the same speculation, it is a small leap to suggest that a positive and negative exponential of the dimensionless mass length may be the complete solution for simple systems:
$$ \begin{align} dq &= \left(\exp\left({-\frac{G M}{c^2 R}}\right) dt, \exp\left(\frac{G M}{c^2 R} \right) dR/c \right) \\ dq^2 &= \left( \exp \left(-2\frac{G M}{c^2 R} \right) dt^2 - \exp \left( 2 \frac{G M}{c^2 R} \right) dR^2/c^2, 2 ~dt ~dR/c \right) \\
&= \left( d\tau^2, 2 ~dt ~dR/c \right) \end{align} $$
The exponential interval does appear in the literature since it makes calculations far simpler.
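As a quick consistency check, an illustrative sketch (reusing the sympy symbol M defined earlier) shows that the exponentials reproduce the Taylor coefficients used above through second order:

```python
from sympy import exp

# exp(-2M) = 1 - 2M + 2M**2 - ...   and   exp(2M) = 1 + 2M + 2M**2 + ...
print(exp(-2 * M).series(M, 0, n=4))
print(exp(2 * M).series(M, 0, n=4))
```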
Observer C in a Gravity Field in Practice
Gravity is impressively weak. The distance of Observer C over Observer A is impressively small. The change in the interval should in practice be beyond measure.
\begin{align}
G&=6.67384\cdot 10^{-11} \frac{m^{3}}{kg s^2}\\
M&=5.9722 \cdot 10^{24} kg\\
c&=299792458 m / s \\
R&=6.371 \cdot 10^{6} m
\end{align}
End of explanation
"""
Adq_g = Cdq.g_shift(GMc2R_for_Observer_A, g_form="minimal")
Cdq_g = Cdq.g_shift(GMc2R_for_Observer_C, g_form="minimal")
print(Adq_g)
print(Cdq_g)
"""
Explanation: Moving 10 centimeters is not much.
Do the "minimal" shift meaning the three terms of the Taylor series.
End of explanation
"""
Cdq_g_zero = Cdq.g_shift(0, g_form="minimal")
print(Adq_g)
print(Cdq_g_zero)
"""
Explanation: The squares could be calculated, but if the input values are the same, there will be no difference in any of the squares. This is consistent with expectations: a small change in an already small number cannot be noticed.
Observer C is a mere 10 centimeters away from Observer A. Let us make the distance so vast that the GMc2R value is zero.
End of explanation
"""
Adq_g_2 = Adq_g.square()
Cdq_g_zero_2 = Cdq_g_zero.square()
eq_g = qt.EQ(Adq_g_2, Cdq_g_zero_2)
print(eq_g)
eq_g.visualize()
"""
Explanation: Get far enough away, and the effects of gravity may become apparent.
End of explanation
"""
|
laurent-george/tutmom | intro.ipynb | bsd-3-clause | import numpy as np
objective = np.poly1d([1.3, 4.0, 0.6])
print objective
"""
Explanation: Introduction to optimization
The basic components
The objective function (also called the 'cost' function)
End of explanation
"""
import scipy.optimize as opt
x_ = opt.fmin(objective, [3])
print "solved: x={}".format(x_)
%matplotlib inline
x = np.linspace(-4, 1, 101)
import matplotlib.pylab as mpl
mpl.plot(x, objective(x))
mpl.plot(x_, objective(x_), 'ro')
"""
Explanation: The "optimizer"
End of explanation
"""
import scipy.special as ss
import scipy.optimize as opt
import numpy as np
import matplotlib.pylab as mpl
x = np.linspace(2, 7, 200)
# 1st order Bessel
j1x = ss.j1(x)
mpl.plot(x, j1x)
# use scipy.optimize's more modern "results object" interface
result = opt.minimize_scalar(ss.j1, method="bounded", bounds=[2, 4])
j1_min = ss.j1(result.x)
mpl.plot(result.x, j1_min,'ro')
"""
Explanation: Additional components
"Box" constraints
End of explanation
"""
import mystic.models as models
print(models.rosen.__doc__)
import mystic
mystic.model_plotter(mystic.models.rosen, fill=True, depth=True, scale=1, bounds="-3:3:.1, -1:5:.1, 1")
import scipy.optimize as opt
import numpy as np
# initial guess
x0 = [1.3, 1.6, -0.5, -1.8, 0.8]
result = opt.minimize(opt.rosen, x0)
print result.x
# number of function evaluations
print result.nfev
# again, but this time provide the derivative
result = opt.minimize(opt.rosen, x0, jac=opt.rosen_der)
print result.x
# number of function evaluations and derivative evaluations
print result.nfev, result.njev
print ''
# however, note for a different x0...
for i in range(5):
x0 = np.random.randint(-20,20,5)
result = opt.minimize(opt.rosen, x0, jac=opt.rosen_der)
print "{} @ {} evals".format(result.x, result.nfev)
"""
Explanation: The gradient and/or hessian
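The rosen example supplies the analytic gradient; a similar illustrative sketch using scipy's bundled rosen_hess hands a Newton-type method the Hessian as well:

```python
# Newton-CG can exploit an analytic Hessian in addition to the gradient.
x0 = [1.3, 1.6, -0.5, -1.8, 0.8]
result = opt.minimize(opt.rosen, x0, method='Newton-CG',
                      jac=opt.rosen_der, hess=opt.rosen_hess)
print(result.x)
print(result.nfev)  # number of function evaluations
```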
End of explanation
"""
# http://docs.scipy.org/doc/scipy/reference/tutorial/optimize.html#tutorial-sqlsp
'''
Maximize: f(x) = 2*x0*x1 + 2*x0 - x0**2 - 2*x1**2
Subject to: x0**3 - x1 == 0
x1 >= 1
'''
import numpy as np
def objective(x, sign=1.0):
return sign*(2*x[0]*x[1] + 2*x[0] - x[0]**2 - 2*x[1]**2)
def derivative(x, sign=1.0):
dfdx0 = sign*(-2*x[0] + 2*x[1] + 2)
dfdx1 = sign*(2*x[0] - 4*x[1])
return np.array([ dfdx0, dfdx1 ])
# unconstrained
result = opt.minimize(objective, [-1.0,1.0], args=(-1.0,),
jac=derivative, method='SLSQP', options={'disp': True})
print("unconstrained: {}".format(result.x))
cons = ({'type': 'eq',
'fun' : lambda x: np.array([x[0]**3 - x[1]]),
'jac' : lambda x: np.array([3.0*(x[0]**2.0), -1.0])},
{'type': 'ineq',
'fun' : lambda x: np.array([x[1] - 1]),
'jac' : lambda x: np.array([0.0, 1.0])})
# constrained
result = opt.minimize(objective, [-1.0,1.0], args=(-1.0,), jac=derivative,
constraints=cons, method='SLSQP', options={'disp': True})
print("constrained: {}".format(result.x))
"""
Explanation: The penalty functions
$\psi(x) = f(x) + k*p(x)$
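As an illustrative sketch of that idea: the constrained problem (maximize the objective subject to x0**3 - x1 == 0 and x1 >= 1) can be approximated by folding weighted penalty terms p(x) into the negated objective and handing the result to an unconstrained solver; the weight k here is an arbitrary choice.

```python
def penalized(x, k=100.0):
    # squared violations of x0**3 - x1 == 0 and of x1 >= 1
    eq_violation = (x[0]**3 - x[1])**2
    ineq_violation = max(0.0, 1.0 - x[1])**2
    return objective(x, sign=-1.0) + k * (eq_violation + ineq_violation)

result = opt.minimize(penalized, [-1.0, 1.0], method='Nelder-Mead')
print(result.x)  # compare with the constrained SLSQP result
```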
End of explanation
"""
import scipy.optimize as opt
# constrained: linear (i.e. A*x + b)
print opt.cobyla.fmin_cobyla
print opt.linprog
# constrained: quadratic programming (i.e. up to x**2)
print opt.fmin_slsqp
# http://cvxopt.org/examples/tutorial/lp.html
'''
minimize: f = 2*x0 + x1
subject to:
-x0 + x1 <= 1
x0 + x1 >= 2
x1 >= 0
x0 - 2*x1 <= 4
'''
import cvxopt as cvx
from cvxopt import solvers as cvx_solvers
A = cvx.matrix([ [-1.0, -1.0, 0.0, 1.0], [1.0, -1.0, -1.0, -2.0] ])
b = cvx.matrix([ 1.0, -2.0, 0.0, 4.0 ])
cost = cvx.matrix([ 2.0, 1.0 ])
sol = cvx_solvers.lp(cost, A, b)
print(sol['x'])
# http://cvxopt.org/examples/tutorial/qp.html
'''
minimize: f = 2*x1**2 + x2**2 + x1*x2 + x1 + x2
subject to:
x1 >= 0
x2 >= 0
x1 + x2 == 1
'''
import cvxopt as cvx
from cvxopt import solvers as cvx_solvers
Q = 2*cvx.matrix([ [2, .5], [.5, 1] ])
p = cvx.matrix([1.0, 1.0])
G = cvx.matrix([[-1.0,0.0],[0.0,-1.0]])
h = cvx.matrix([0.0,0.0])
A = cvx.matrix([1.0, 1.0], (1,2))
b = cvx.matrix(1.0)
sol = cvx_solvers.qp(Q, p, G, h, A, b)
print(sol['x'])
"""
Explanation: Optimizer classifications
Constrained versus unconstrained (and importantly LP and QP)
End of explanation
"""
import scipy.optimize as opt
# probabilstic solvers, that use random hopping/mutations
print opt.differential_evolution
print opt.basinhopping
print opt.anneal
import scipy.optimize as opt
# bounds instead of an initial guess
bounds = [(-10., 10)]*5
for i in range(10):
result = opt.differential_evolution(opt.rosen, bounds)
print result.x,
# number of function evaluations
print '@ {} evals'.format(result.nfev)
"""
Explanation: Notice how much nicer it is to see the optimizer "trajectory". Now, instead of a single number, we have the path the optimizer took. scipy.optimize has a version of this, with options={'retall':True}, which returns the solver trajectory.
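For reference, an illustrative sketch of pulling out that trajectory with the older fmin interface, whose retall flag returns every iterate:

```python
xopt, allvecs = opt.fmin(opt.rosen, [1.3, 1.6, -0.5, -1.8, 0.8], retall=True)
print(len(allvecs))  # number of iterates recorded
print(allvecs[0])    # the starting point
print(allvecs[-1])   # the final point
```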
Local versus global
End of explanation
"""
import scipy.optimize as opt
import scipy.stats as stats
import numpy as np
# Define the function to fit.
def function(x, a, b, f, phi):
result = a * np.exp(-b * np.sin(f * x + phi))
return result
# Create a noisy data set around the actual parameters
true_params = [3, 2, 1, np.pi/4]
print "target parameters: {}".format(true_params)
x = np.linspace(0, 2*np.pi, 25)
exact = function(x, *true_params)
noisy = exact + 0.3*stats.norm.rvs(size=len(x))
# Use curve_fit to estimate the function parameters from the noisy data.
initial_guess = [1,1,1,1]
estimated_params, err_est = opt.curve_fit(function, x, noisy, p0=initial_guess)
print "solved parameters: {}".format(estimated_params)
# err_est is an estimate of the covariance matrix of the estimates
print "covarance: {}".format(err_est.diagonal())
import matplotlib.pylab as mpl
mpl.plot(x, noisy, 'ro')
mpl.plot(x, function(x, *estimated_params))
"""
Explanation: Gradient descent and steepest descent
Genetic and stochastic
Not covered: other exotic types
Other important special cases:
Least-squares fitting
End of explanation
"""
import numpy as np
import scipy.optimize as opt
def system(x,a,b,c):
x0, x1, x2 = x
eqs= [
3 * x0 - np.cos(x1*x2) + a, # == 0
x0**2 - 81*(x1+0.1)**2 + np.sin(x2) + b, # == 0
np.exp(-x0*x1) + 20*x2 + c # == 0
]
return eqs
# coefficients
a = -0.5
b = 1.06
c = (10 * np.pi - 3.0) / 3
# initial guess
x0 = [0.1, 0.1, -0.1]
# Solve the system of non-linear equations.
result = opt.root(system, x0, args=(a, b, c))
print "root:", result.x
print "solution:", result.fun
"""
Explanation: Not Covered: integer programming
Typical uses
Function minimization
Data fitting
Root finding
End of explanation
"""
import numpy as np
import scipy.stats as stats
# Create clean data.
x = np.linspace(0, 4.0, 100)
y = 1.5 * np.exp(-0.2 * x) + 0.3
# Add a bit of noise.
noise = 0.1 * stats.norm.rvs(size=100)
noisy_y = y + noise
# Fit noisy data with a linear model.
linear_coef = np.polyfit(x, noisy_y, 1)
linear_poly = np.poly1d(linear_coef)
linear_y = linear_poly(x)
# Fit noisy data with a quadratic model.
quad_coef = np.polyfit(x, noisy_y, 2)
quad_poly = np.poly1d(quad_coef)
quad_y = quad_poly(x)
import matplotlib.pylab as mpl
mpl.plot(x, noisy_y, 'ro')
mpl.plot(x, linear_y)
mpl.plot(x, quad_y)
#mpl.plot(x, y)
"""
Explanation: Parameter estimation
End of explanation
"""
import mystic.models as models
print models.zimmermann.__doc__
"""
Explanation: Standard diagnostic tools
Eyeball the plotted solution against the objective
Run several times and take the best result
Log of intermediate results, per iteration
Rare: look at the covariance matrix
Issue: how can you really be sure you have the results you were looking for?
EXERCISE: Use any of the solvers we've seen thus far to find the minimum of the zimmermann function (i.e. use mystic.models.zimmermann as the objective). Use the bounds suggested below, if your choice of solver allows it.
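One possible sketch of an approach (the bounds below are only a guess, since the suggested bounds are not reproduced here; substitute the ones given with the exercise):

```python
import scipy.optimize as opt
import mystic.models as models

bounds = [(0., 7.), (0., 7.)]  # guessed bounds -- replace with the suggested ones
result = opt.differential_evolution(models.zimmermann, bounds)
print(result.x)
print(result.fun)
```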
End of explanation
"""
|
CoreSecurity/pysap | docs/protocols/SAPRouter.ipynb | gpl-2.0 | from pysap.SAPRouter import *
from IPython.display import display
"""
Explanation: SAP Router
The following subsections show a graphical representation of the main protocol packets and how to generate them.
First we need to perform some setup to import the packet classes:
End of explanation
"""
for command in router_adm_commands:
p = SAPRouter(type=SAPRouter.SAPROUTER_ADMIN, adm_command=command)
print(router_adm_commands[command])
display(p.canvas_dump())
"""
Explanation: SAP Router Admin packets
End of explanation
"""
for opcode in router_control_opcodes:
p = SAPRouter(type=SAPRouter.SAPROUTER_CONTROL, opcode=opcode)
if opcode in [70, 71]:
p.snc_frame = ""
print(router_control_opcodes[opcode])
display(p.canvas_dump())
"""
Explanation: SAP Router Error Information / Control packets
End of explanation
"""
router_string = [SAPRouterRouteHop(hostname="8.8.8.8", port=3299),
SAPRouterRouteHop(hostname="10.0.0.1", port=3200, password="S3cr3t")]
router_string_lens = map(len, map(str, router_string))
p = SAPRouter(type=SAPRouter.SAPROUTER_ROUTE,
route_entries=len(router_string),
route_talk_mode=1,
route_rest_nodes=1,
route_length=sum(router_string_lens),
route_offset=router_string_lens[0],
route_string=router_string)
display(p.canvas_dump())
for x in router_string:
display(x.canvas_dump())
"""
Explanation: SAP Router Route packet
End of explanation
"""
p = SAPRouter(type=SAPRouter.SAPROUTER_PONG)
p.canvas_dump()
"""
Explanation: SAP Router Pong packet
End of explanation
"""
|
JuBra/cobrapy | documentation_builder/building_model.ipynb | lgpl-2.1 | from cobra import Model, Reaction, Metabolite
# Best practise: SBML compliant IDs
cobra_model = Model('example_cobra_model')
reaction = Reaction('3OAS140')
reaction.name = '3 oxoacyl acyl carrier protein synthase n C140 '
reaction.subsystem = 'Cell Envelope Biosynthesis'
reaction.lower_bound = 0. # This is the default
reaction.upper_bound = 1000. # This is the default
reaction.objective_coefficient = 0. # this is the default
"""
Explanation: Building a Model
This simple example demonstrates how to create a model, create a reaction, and then add the reaction to the model.
We'll use the '3OAS140' reaction from the STM_1.0 model:
1.0 malACP[c] + 1.0 h[c] + 1.0 ddcaACP[c] $\rightarrow$ 1.0 co2[c] + 1.0 ACP[c] + 1.0 3omrsACP[c]
First, create the model and reaction.
End of explanation
"""
ACP_c = Metabolite(
'ACP_c',
formula='C11H21N2O7PRS',
name='acyl-carrier-protein',
compartment='c')
omrsACP_c = Metabolite(
'3omrsACP_c',
formula='C25H45N2O9PRS',
name='3-Oxotetradecanoyl-acyl-carrier-protein',
compartment='c')
co2_c = Metabolite(
'co2_c',
formula='CO2',
name='CO2',
compartment='c')
malACP_c = Metabolite(
'malACP_c',
formula='C14H22N2O10PRS',
name='Malonyl-acyl-carrier-protein',
compartment='c')
h_c = Metabolite(
'h_c',
formula='H',
name='H',
compartment='c')
ddcaACP_c = Metabolite(
'ddcaACP_c',
formula='C23H43N2O8PRS',
name='Dodecanoyl-ACP-n-C120ACP',
compartment='c')
"""
Explanation: We need to create metabolites as well. If we were using an existing model, we could use get_by_id to get the appropriate Metabolite objects instead.
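For reference, a hypothetical sketch of that alternative (existing_model is assumed to be an already-populated Model and is not defined in this notebook):

```python
# Hypothetical: fetch species by ID from an existing model instead of
# constructing new Metabolite objects.
# ACP_c = existing_model.metabolites.get_by_id('ACP_c')
# malACP_c = existing_model.metabolites.get_by_id('malACP_c')
```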
End of explanation
"""
reaction.add_metabolites({malACP_c: -1.0,
h_c: -1.0,
ddcaACP_c: -1.0,
co2_c: 1.0,
ACP_c: 1.0,
omrsACP_c: 1.0})
reaction.reaction # This gives a string representation of the reaction
"""
Explanation: Adding metabolites to a reaction requires using a dictionary of the metabolites and their stoichiometric coefficients. A group of metabolites can be added all at once, or they can be added one at a time.
End of explanation
"""
reaction.gene_reaction_rule = '( STM2378 or STM1197 )'
reaction.genes
"""
Explanation: The gene_reaction_rule is a boolean representation of the gene requirements for this reaction to be active as described in Schellenberger et al 2011 Nature Protocols 6(9):1290-307. We will assign the gene reaction rule string, which will automatically create the corresponding gene objects.
End of explanation
"""
print('%i reactions initially' % len(cobra_model.reactions))
print('%i metabolites initially' % len(cobra_model.metabolites))
print('%i genes initially' % len(cobra_model.genes))
"""
Explanation: At this point in time, the model is still empty
End of explanation
"""
cobra_model.add_reaction(reaction)
# Now there are things in the model
print('%i reaction' % len(cobra_model.reactions))
print('%i metabolites' % len(cobra_model.metabolites))
print('%i genes' % len(cobra_model.genes))
"""
Explanation: We will add the reaction to the model, which will also add all associated metabolites and genes
End of explanation
"""
# Iterate through the the objects in the model
print("Reactions")
print("---------")
for x in cobra_model.reactions:
print("%s : %s" % (x.id, x.reaction))
print("")
print("Metabolites")
print("-----------")
for x in cobra_model.metabolites:
print('%9s : %s' % (x.id, x.formula))
print("")
print("Genes")
print("-----")
for x in cobra_model.genes:
associated_ids = (i.id for i in x.reactions)
print("%s is associated with reactions: %s" %
(x.id, "{" + ", ".join(associated_ids) + "}"))
"""
Explanation: We can iterate through the model objects to observe the contents
End of explanation
"""
|
Tsiems/machine-learning-projects | Lab1/Lab1_Playground.ipynb | mit | import pandas as pd
import numpy as np
df = pd.read_csv('data/data.csv') # read in the csv file
"""
Explanation: Data Loading and Preprocessing
To begin, we load the data into a Pandas data frame from a csv file.
End of explanation
"""
df.info()
df.head()
"""
Explanation: Let's take a cursory glance at the data to see what we're working with.
End of explanation
"""
columns_to_delete = ['Unnamed: 0', 'Date', 'time', 'TimeUnder',
'PosTeamScore', 'PassAttempt', 'RushAttempt',
'DefTeamScore', 'Season', 'PlayAttempted']
#Iterate through and delete the columns we don't want
for col in columns_to_delete:
if col in df:
del df[col]
"""
Explanation: There's a lot of data that we don't care about. For example, 'PassAttempt' is a binary attribute, but there's also an attribute called 'PlayType' which is set to 'Pass' for a passing play.
We define a list of the columns which we're not interested in, and then we delete them
End of explanation
"""
df.columns
"""
Explanation: We can then grab a list of the remaining column names
End of explanation
"""
df = df.replace(to_replace=np.nan,value=-1)
"""
Explanation: Temporary simple data replacement so that we can cast to integers (instead of objects)
End of explanation
"""
df.info()
"""
Explanation: At this point, lots of things are encoded as objects, or with excesively large data types
End of explanation
"""
continuous_features = ['TimeSecs', 'PlayTimeDiff', 'yrdln', 'yrdline100',
'ydstogo', 'ydsnet', 'Yards.Gained', 'Penalty.Yards',
'ScoreDiff', 'AbsScoreDiff']
ordinal_features = ['Drive', 'qtr', 'down']
binary_features = ['GoalToGo', 'FirstDown','sp', 'Touchdown', 'Safety', 'Fumble']
categorical_features = df.columns.difference(continuous_features).difference(ordinal_features)
"""
Explanation: We define four lists based on the types of features we're using.
Binary features are separated from the other categorical features so that they can be stored in less space
End of explanation
"""
df[continuous_features] = df[continuous_features].astype(np.float64)
df[ordinal_features] = df[ordinal_features].astype(np.int64)
df[binary_features] = df[binary_features].astype(np.int8)
"""
Explanation: We then cast all of the columns to the appropriate underlying data types
End of explanation
"""
df['PassOutcome'].replace(['Complete', 'Incomplete Pass'], [1, 0], inplace=True)
"""
Explanation: One more bit of reformatting: recode PassOutcome as a binary column (1 for 'Complete', 0 for 'Incomplete Pass') so it can be used directly in calculations.
End of explanation
"""
df.info()
"""
Explanation: Now all of the objects are encoded the way we'd like them to be
End of explanation
"""
df.describe()
import matplotlib.pyplot as plt
import warnings
warnings.simplefilter('ignore', DeprecationWarning)
#Embed figures in the Jupyter Notebook
%matplotlib inline
#Use GGPlot style for matplotlib
plt.style.use('ggplot')
pass_plays = df[df['PlayType'] == "Pass"]
pass_plays_grouped = pass_plays.groupby(by=['Passer'])
"""
Explanation: Now we can start to take a look at what's in each of our columns
End of explanation
"""
completion_rate = pass_plays_grouped.PassOutcome.sum() / pass_plays_grouped.PassOutcome.count()
completion_rate_sampled = completion_rate.sample(10)
completion_rate_sampled.sort_values(inplace=True)
completion_rate_sampled.plot(kind='barh')
"""
Explanation: We can take a random sample of passers and show their completion rate:
End of explanation
"""
pass_plays_grouped = pass_plays.groupby(by=['Passer', 'Receiver'])
completion_rate = pass_plays_grouped.PassOutcome.sum() / pass_plays_grouped.PassOutcome.count()
completion_rate_sampled = completion_rate.sample(10)
completion_rate_sampled.sort_values(inplace=True)
completion_rate_sampled.plot(kind='barh')
"""
Explanation: We can also group by both passer and receiver, to check for highly effective QB-Receiver combos.
End of explanation
"""
pass_plays_grouped_filtered = pass_plays_grouped.filter(lambda g: len(g)>10).groupby(by=['Passer', 'Receiver'])
completion_rate = pass_plays_grouped_filtered.PassOutcome.sum() / pass_plays_grouped_filtered.PassOutcome.count()
completion_rate_sampled = completion_rate.sample(10)
completion_rate_sampled.sort_values(inplace=True)
completion_rate_sampled.plot(kind='barh')
"""
Explanation: We can eliminate combos that don't have more than 10 pass attempts together, and then re-sample the data. This will remove noise from QB-receiver combos who have very high or low completion rates because they've played very little together.
End of explanation
"""
completion_rate.sort_values(inplace=True, ascending = False)
completion_rate = pd.Series(completion_rate)
completion_rate[0:10].plot(kind='barh')
"""
Explanation: We can also extract the highest-completion percentage combos.
Here we take the top-10 most reliable QB-receiver pairs.
End of explanation
"""
rush_plays = df[(df.Rusher != -1)]
rush_plays_grouped = rush_plays.groupby(by=['Rusher']).filter(lambda g: len(g) > 10).groupby(by=["Rusher"])
yards_per_carry = rush_plays_grouped["Yards.Gained"].sum() / rush_plays_grouped["Yards.Gained"].count()
yards_per_carry.sort_values(inplace=True, ascending=False)
yards_per_carry[0:10].plot(kind='barh')
"""
Explanation: Next, let's find the top ten rushers based on yards-per-carry (only for rushers who have more than 10 carries)
End of explanation
"""
def_play_groups = df.groupby("DefensiveTeam")
def_yards_allowed = def_play_groups["Yards.Gained"].sum()
def_yards_allowed.sort_values(inplace=True)
def_yards_allowed.plot(kind='barh')
"""
Explanation: Now let's take a look at defenses. Which defenses allowed the fewest overall yards?
End of explanation
"""
broncos_def_plays = df[df.DefensiveTeam == "DEN"]
broncos_def_pass_plays = broncos_def_plays[broncos_def_plays.PlayType == "Pass"]
broncos_def_rush_plays = broncos_def_plays[broncos_def_plays.PlayType == "Run"]
print("Passing yards: " + str(broncos_def_pass_plays["Yards.Gained"].sum()))
print("Rushing yards: " + str(broncos_def_rush_plays["Yards.Gained"].sum()))
"""
Explanation: It looks like the Denver Broncos allowed the fewest yards overall. Go Broncos!
Let's see if there were any specific weaknesses in the Broncos' defense.
End of explanation
"""
pass_plays = df[df.PlayType == "Pass"]
pass_plays_against_den = pass_plays[pass_plays.DefensiveTeam == "DEN"]
pass_plays_against_den_grouped = pass_plays_against_den.groupby("Passer")
qbs_yards_against_den = pass_plays_against_den_grouped["Yards.Gained"].sum()
qbs_yards_against_den.sort_values(inplace=True, ascending=False)
qbs_yards_against_den.plot(kind='barh')
"""
Explanation: It looks like they gave up a lot more passing yards than rushing yards.
Let's see what QBs caused the Denver defense the most trouble.
End of explanation
"""
oak_pass_plays_against_den = pass_plays_against_den[pass_plays_against_den.posteam == "OAK"]
oak_pass_plays_against_den_grouped = oak_pass_plays_against_den.groupby("Receiver")
oak_receivers_yards_against_den = oak_pass_plays_against_den_grouped["Yards.Gained"].sum()
oak_receivers_yards_against_den.sort_values(inplace=True, ascending=False)
oak_receivers_yards_against_den.plot(kind='barh')
"""
Explanation: It looks like the Raiders' Derek Carr caused a lot of trouble.
From this data we can see that when the Broncos play the Raiders, they need to focus more on pass defense.
To figure out which Oakland WR Denver should put their best cornerback on, let's see which Oakland receiver was the highest-performing against Denver.
End of explanation
"""
denver_interceptions = pass_plays_against_den[pass_plays_against_den.InterceptionThrown == 1]
denver_interceptions_grouped = denver_interceptions.groupby("Interceptor")
denver_cornerback_interceptions = denver_interceptions_grouped.Interceptor.count()
denver_cornerback_interceptions_sorted = denver_cornerback_interceptions.sort_values(ascending=False)
denver_cornerback_interceptions_sorted.plot(kind='barh')
"""
Explanation: M. Rivera and M. Crabtree were the dominant receivers. Going into the game against Oakland this year, we can expect Denver to put their best cornerbacks on them.
But who are their best cornerbacks? We can best judge that by ranking Denver cornerbacks by interception counts.
End of explanation
"""
for x in df.columns:
print("* **" + str(x) + "**: Description")
df.SideofField.value_counts()
df
import seaborn as sns
cmap = sns.diverging_palette(220, 10, as_cmap=True) # one of the many color mappings
df.columns
df_plot = df[["Yards.Gained", "yrdline100", "TimeSecs", "ydsnet", "ScoreDiff", "posteam"]]
df_plot = df_plot[[x in ["DAL", "DEN"] for x in df.posteam]]
#df_imputed_jitter[['Parch','SibSp','Pclass']] = df_imputed_jitter[['Parch','SibSp','Pclass']].values + np.random.rand(len(df_imputed_jitter),3)/2
#sns.pairplot(df_plot, hue="posteam", size=2)
sns.pairplot(df_plot, size=2, hue="posteam")
#sns.violinplot(x="posteam", y="Yards.Gained", hue="PlayType", data=df, inner="quart")
den_plays = df[df.posteam == "DEN"]
den_passes = den_plays[den_plays.PlayType == "Pass"]
den_runs = den_plays[den_plays.PlayType == "Run"]
den_plays = pd.concat([den_passes, den_runs])
sns.violinplot(x="PlayType", y="Yards.Gained", data=den_plays, inner="quart")
import plotly
print (plotly.__version__) # version 1.9.x required
plotly.offline.init_notebook_mode() # run at the start of every notebook
from plotly.graph_objs import Scatter, Marker, Layout, XAxis, YAxis
plotly.offline.iplot({
'data':[
Scatter(x=den_plays.PlayType,
y=den_plays["Yards.Gained"],
text=den_plays.PlayType.values.astype(str),
marker=Marker(size=den_plays.ydstogo, sizemode='area', sizeref=1,),
mode='markers')
],
'layout': Layout(xaxis=XAxis(title='Play Type'), yaxis=YAxis(title='Yards Gained'))
}, show_link=False)
"""
Explanation: The Broncos' top choices to cover M. Rivera and M. Crabtree, therefore, are A. Talib and D. Trevathan
End of explanation
"""
|
VandyAstroML/Vandy_AstroML | profiles/Richard/Project_ScikitLearn_Datasets.ipynb | mit | %matplotlib inline
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_digits
from sklearn.neighbors import KNeighborsClassifier
from sklearn.cross_validation import cross_val_score
"""
Explanation: KNN algorithm example with the sklearn digits dataset
Purpose
We will use the digits dataset to train a k-Nearest Neighbor algorithm to read hand-written numbers.
End of explanation
"""
digits = load_digits()  # load the dataset here so the DESCR key is available
print(digits['DESCR'])
"""
Explanation: You can read the description of the dataset by using the 'DESCR' key:
End of explanation
"""
digits = load_digits()
X = digits.data
y = digits.target
knn = KNeighborsClassifier(n_neighbors=4)
scores = cross_val_score(knn, X, y, cv=10, scoring='accuracy')
print(scores)
print(scores.mean())
k_range = range(1, 31)
k_scores = []
for k in k_range:
knn = KNeighborsClassifier(n_neighbors=k)
scores = cross_val_score(knn, X, y, cv=10, scoring='accuracy')
k_scores.append(scores.mean())
print(k_scores)
plt.plot(k_range, k_scores)
plt.xlabel('Value of K for KNN')
plt.ylabel('Cross-Validated Accuracy')
# Displaying different keys/attributes
# of the dataset
print 'Keys:', digits.keys()
# Loading data
# This includes the pixel value for each of the samples
digits_data = digits['data']
print 'Data for 1st element:', digits_data[0]
# Targets
# This is what actual number for each sample, i.e. the 'truth'
digits_targetnames = digits['target_names']
print 'Target names:', digits_targetnames
digits_target = digits['target']
print 'Targets:', digits_target
"""
Explanation: We first begin by loading the digits dataset and setting the features matrix (X) and the response vector (y)
End of explanation
"""
# Choosing a colormap
color_map_used = plt.get_cmap('autumn')
# Visualizing some of the targets
fig, axes = plt.subplots(2,5, sharex=True, sharey=True, figsize=(20,12))
axes_f = axes.flatten()
for ii in range(len(axes_f)):
axes_f[ii].imshow(digits['images'][ii], cmap = color_map_used)
axes_f[ii].text(1, -1, 'Target: {0}'.format(digits_target[ii]), fontsize=30)
plt.show()
"""
Explanation: This means that you have 1797 samples, and each of them is characterized by 64 different features (pixel values).
We can also visualize some of the data, using the 'images' key:
End of explanation
"""
IDX2 = np.where(digits_target == 2)[0]
print 'There are {0} samples of the number 2 in the dataset'.format(IDX2.size)
fig, axes = plt.subplots(2,5, sharex=True, sharey=True, figsize=(20,12))
axes_f = axes.flatten()
for ii in range(len(axes_f)):
axes_f[ii].imshow(digits['images'][IDX2][ii], cmap = color_map_used)
axes_f[ii].text(1, -1, 'Target: {0}'.format(digits_target[IDX2][ii]), fontsize=30)
plt.show()
print 'And now the number 4\n'
IDX4 = np.where(digits_target == 4)[0]
fig, axes = plt.subplots(2,5, sharex=True, sharey=True, figsize=(20,12))
axes_f = axes.flatten()
for ii in range(len(axes_f)):
axes_f[ii].imshow(digits['images'][IDX4][ii], cmap = color_map_used)
axes_f[ii].text(1, -1, 'Target: {0}'.format(digits_target[IDX4][ii]), fontsize=30)
plt.show()
"""
Explanation: The algorithm will be able to use the pixel values to determine which digit each image represents, e.g. that the first image above is a '0'.
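For instance, a minimal sketch of fitting the classifier on most of the samples and predicting a few held-out images:

```python
from sklearn.neighbors import KNeighborsClassifier

knn_demo = KNeighborsClassifier(n_neighbors=4)
knn_demo.fit(digits.data[:-10], digits.target[:-10])  # train on all but the last 10
print(knn_demo.predict(digits.data[-10:]))            # predicted digits
print(digits.target[-10:])                            # actual digits
```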
Let's see some examples of the number 2:
End of explanation
"""
# Difference between two samples of the number 4
plt.imshow(digits['images'][IDX4][1] - digits['images'][IDX4][8], cmap=color_map_used)
plt.show()
"""
Explanation: You can see how different each input by subtracting one target from another.
In here, I'm subtracting two images that represent the number '4':
End of explanation
"""
|
rvm-segfault/edx | python_for_data_sci_dse200x/week3/03_Numpy_Notebook.ipynb | apache-2.0 | import numpy as np
an_array = np.array([3, 33, 333]) # Create a rank 1 array
print(type(an_array)) # The type of an ndarray is: "<class 'numpy.ndarray'>"
# test the shape of the array we just created, it should have just one dimension (Rank 1)
print(an_array.shape)
# because this is a 1-rank array, we need only one index to accesss each element
print(an_array[0], an_array[1], an_array[2])
an_array[0] =888 # ndarrays are mutable, here we change an element of the array
print(an_array)
"""
Explanation: <p style="font-family: Arial; font-size:3.75em;color:purple; font-style:bold"><br>
Introduction to numpy:
</p>
<br>
<p style="font-family: Arial; font-size:1.25em;color:#2462C0; font-style:bold"><br>
Package for scientific computing with Python
</p>
<br>
Numerical Python, or "Numpy" for short, is a foundational package on which many of the most common data science packages are built. Numpy provides us with high performance multi-dimensional arrays which we can use as vectors or matrices.
The key features of numpy are:
ndarrays: n-dimensional arrays of the same data type which are fast and space-efficient. There are a number of built-in methods for ndarrays which allow for rapid processing of data without using loops (e.g., compute the mean).
Broadcasting: a useful tool which defines implicit behavior between multi-dimensional arrays of different sizes.
Vectorization: enables numeric operations on ndarrays.
Input/Output: simplifies reading and writing of data from/to file.
<b>Additional Recommended Resources:</b><br>
<a href="https://docs.scipy.org/doc/numpy/reference/">Numpy Documentation</a><br>
<i>Python for Data Analysis</i> by Wes McKinney<br>
<i>Python Data science Handbook</i> by Jake VanderPlas
<p style="font-family: Arial; font-size:2.75em;color:purple; font-style:bold"><br>
Getting started with ndarray<br><br></p>
ndarrays are time and space-efficient multidimensional arrays at the core of numpy. Like the data structures in Week 2, let's get started by creating ndarrays using the numpy package.
<p style="font-family: Arial; font-size:1.75em;color:#2462C0; font-style:bold"><br>
How to create Rank 1 numpy arrays:
</p>
End of explanation
"""
another = np.array([[11,12,13],[21,22,23]]) # Create a rank 2 array
print(another) # print the array
print("The shape is 2 rows, 3 columns: ", another.shape) # rows x columns
print("Accessing elements [0,0], [0,1], and [1,0] of the ndarray: ", another[0, 0], ", ",another[0, 1],", ", another[1, 0])
"""
Explanation: <p style="font-family: Arial; font-size:1.75em;color:#2462C0; font-style:bold"><br>
How to create a Rank 2 numpy array:</p>
A rank 2 ndarray is one with two dimensions. Notice the format below of [ [row] , [row] ]. 2 dimensional arrays are great for representing matrices which are often useful in data science.
End of explanation
"""
import numpy as np
# create a 2x2 array of zeros
ex1 = np.zeros((2,2))
print(ex1)
# create a 2x2 array filled with 9.0
ex2 = np.full((2,2), 9.0)
print(ex2)
# create a 2x2 matrix with the diagonal 1s and the others 0
ex3 = np.eye(2,2)
print(ex3)
# create an array of ones
ex4 = np.ones((1,2))
print(ex4)
# notice that the above ndarray (ex4) is actually rank 2, it is a 1x2 array
print(ex4.shape)
# which means we need to use two indexes to access an element
print()
print(ex4[0,1])
# create an array of random floats between 0 and 1
ex5 = np.random.random((2,2))
print(ex5)
"""
Explanation: <p style="font-family: Arial; font-size:1.75em;color:#2462C0; font-style:bold"><br>
There are many way to create numpy arrays:
</p>
Here we create a number of different size arrays with different shapes and different pre-filled values. numpy has a number of built in methods which help us quickly and easily create multidimensional arrays.
End of explanation
"""
import numpy as np
# Rank 2 array of shape (3, 4)
an_array = np.array([[11,12,13,14], [21,22,23,24], [31,32,33,34]])
print(an_array)
"""
Explanation: <p style="font-family: Arial; font-size:2.75em;color:purple; font-style:bold"><br>
Array Indexing
<br><br></p>
<p style="font-family: Arial; font-size:1.75em;color:#2462C0; font-style:bold"><br>
Slice indexing:
</p>
Similar to the use of slice indexing with lists and strings, we can use slice indexing to pull out sub-regions of ndarrays.
End of explanation
"""
a_slice = an_array[:2, 1:3]
print(a_slice)
"""
Explanation: Use array slicing to get a subarray consisting of the first 2 rows x 2 columns.
End of explanation
"""
print("Before:", an_array[0, 1]) #inspect the element at 0, 1
a_slice[0, 0] = 1000 # a_slice[0, 0] is the same piece of data as an_array[0, 1]
print("After:", an_array[0, 1])
"""
Explanation: When you modify a slice, you actually modify the underlying array.
End of explanation
"""
# Create a Rank 2 array of shape (3, 4)
an_array = np.array([[11,12,13,14], [21,22,23,24], [31,32,33,34]])
print(an_array)
# Using both integer indexing & slicing generates an array of lower rank
row_rank1 = an_array[1, :] # Rank 1 view
print(row_rank1, row_rank1.shape) # notice only a single []
# Slicing alone: generates an array of the same rank as the an_array
row_rank2 = an_array[1:2, :] # Rank 2 view
print(row_rank2, row_rank2.shape) # Notice the [[ ]]
#We can do the same thing for columns of an array:
print()
col_rank1 = an_array[:, 1]
col_rank2 = an_array[:, 1:2]
print(col_rank1, col_rank1.shape) # Rank 1
print()
print(col_rank2, col_rank2.shape) # Rank 2
"""
Explanation: <p style="font-family: Arial; font-size:1.75em;color:#2462C0; font-style:bold"><br>
Use both integer indexing & slice indexing
</p>
We can use combinations of integer indexing and slice indexing to create different shaped matrices.
End of explanation
"""
# Create a new array
an_array = np.array([[11,12,13], [21,22,23], [31,32,33], [41,42,43]])
print('Original Array:')
print(an_array)
# Create an array of indices
col_indices = np.array([0, 1, 2, 0])
print('\nCol indices picked : ', col_indices)
row_indices = np.arange(4)
print('\nRows indices picked : ', row_indices)
# Examine the pairings of row_indices and col_indices. These are the elements we'll change next.
for row,col in zip(row_indices,col_indices):
print(row, ", ",col)
# Select one element from each row
print('Values in the array at those indices: ',an_array[row_indices, col_indices])
# Change one element from each row using the indices selected
an_array[row_indices, col_indices] += 100000
print('\nChanged Array:')
print(an_array)
"""
Explanation: <p style="font-family: Arial; font-size:1.75em;color:#2462C0; font-style:bold"><br>
Array Indexing for changing elements:
</p>
Sometimes it's useful to use an array of indexes to access or change elements.
End of explanation
"""
# create a 3x2 array
an_array = np.array([[11,12], [21, 22], [31, 32]])
print(an_array)
# create a filter which will be boolean values for whether each element meets this condition
filter = (an_array > 15)
filter
"""
Explanation: <p style="font-family: Arial; font-size:2.75em;color:purple; font-style:bold"><br>
Boolean Indexing
<br><br></p>
<p style="font-family: Arial; font-size:1.75em;color:#2462C0; font-style:bold"><br>
Array Indexing for changing elements:
</p>
End of explanation
"""
# we can now select just those elements which meet that criteria
print(an_array[filter])
# For short, we could have just used the approach below without the need for the separate filter array.
an_array[(an_array % 2 == 0)]
"""
Explanation: Notice that the filter is an ndarray of the same size as an_array, filled with True wherever the corresponding element of an_array is greater than 15 and False otherwise.
End of explanation
"""
an_array[an_array % 2 == 0] +=100
print(an_array)
"""
Explanation: What is particularly useful is that we can actually change elements in the array applying a similar logical filter. Let's add 100 to all the even values.
End of explanation
"""
ex1 = np.array([11, 12]) # Python assigns the data type
print(ex1.dtype)
ex2 = np.array([11.0, 12.0]) # Python assigns the data type
print(ex2.dtype)
ex3 = np.array([11, 21], dtype=np.int64) #You can also tell Python the data type
print(ex3.dtype)
# you can use this to force floats into integers (using floor function)
ex4 = np.array([11.1,12.7], dtype=np.int64)
print(ex4.dtype)
print()
print(ex4)
# you can use this to force integers into floats if you anticipate
# the values may change to floats later
ex5 = np.array([11, 21], dtype=np.float64)
print(ex5.dtype)
print()
print(ex5)
"""
Explanation: <p style="font-family: Arial; font-size:2.75em;color:purple; font-style:bold"><br>
Datatypes and Array Operations
<br><br></p>
<p style="font-family: Arial; font-size:1.75em;color:#2462C0; font-style:bold"><br>
Datatypes:
</p>
End of explanation
"""
x = np.array([[111,112],[121,122]], dtype=np.int)
y = np.array([[211.1,212.1],[221.1,222.1]], dtype=np.float64)
print(x)
print()
print(y)
# add
print(x + y) # The plus sign works
print()
print(np.add(x, y)) # so does the numpy function "add"
# subtract
print(x - y)
print()
print(np.subtract(x, y))
# multiply
print(x * y)
print()
print(np.multiply(x, y))
# divide
print(x / y)
print()
print(np.divide(x, y))
# square root
print(np.sqrt(x))
# exponent (e ** x)
print(np.exp(x))
"""
Explanation: <p style="font-family: Arial; font-size:1.75em;color:#2462C0; font-style:bold"><br>
Arithmetic Array Operations:
</p>
End of explanation
"""
# set up a random 2 x 5 matrix
arr = 10 * np.random.randn(2,5)
print(arr)
# compute the mean for all elements
print(arr.mean())
# compute the means by row
print(arr.mean(axis = 1))
# compute the means by column
print(arr.mean(axis = 0))
# sum all the elements
print(arr.sum())
# compute the medians
print(np.median(arr, axis = 1))
"""
Explanation: <p style="font-family: Arial; font-size:2.75em;color:purple; font-style:bold"><br>
Statistical Methods, Sorting, and <br> <br> Set Operations:
<br><br>
</p>
<p style="font-family: Arial; font-size:1.75em;color:#2462C0; font-style:bold"><br>
Basic Statistical Operations:
</p>
End of explanation
"""
# create a 10 element array of randoms
unsorted = np.random.randn(10)
print(unsorted)
# create copy and sort
sorted = np.array(unsorted)
sorted.sort()
print(sorted)
print()
print(unsorted)
# inplace sorting
unsorted.sort()
print(unsorted)
"""
Explanation: <p style="font-family: Arial; font-size:1.75em;color:#2462C0; font-style:bold"><br>
Sorting:
</p>
End of explanation
"""
array = np.array([1,2,1,4,2,1,4,2])
print(np.unique(array))
"""
Explanation: <p style="font-family: Arial; font-size:1.75em;color:#2462C0; font-style:bold"><br>
Finding Unique elements:
</p>
End of explanation
"""
s1 = np.array(['desk','chair','bulb'])
s2 = np.array(['lamp','bulb','chair'])
print(s1, s2)
print( np.intersect1d(s1, s2) )
print( np.union1d(s1, s2) )
print( np.setdiff1d(s1, s2) )# elements in s1 that are not in s2
print( np.in1d(s1, s2) )#which element of s1 is also in s2
"""
Explanation: <p style="font-family: Arial; font-size:1.75em;color:#2462C0; font-style:bold"><br>
Set Operations with np.array data type:
</p>
End of explanation
"""
import numpy as np
start = np.zeros((4,3))
print(start)
# create a rank 1 ndarray with 3 values
add_rows = np.array([1, 0, 2])
print(add_rows)
y = start + add_rows # add to each row of 'start' using broadcasting
print(y)
# create an ndarray which is 4 x 1 to broadcast across columns
add_cols = np.array([[0,1,2,3]])
add_cols = add_cols.T
print(add_cols)
# add to each column of 'start' using broadcasting
y = start + add_cols
print(y)
# this will just broadcast in both dimensions
add_scalar = np.array([1])
print(start+add_scalar)
"""
Explanation: <p style="font-family: Arial; font-size:2.75em;color:purple; font-style:bold"><br>
Broadcasting:
<br><br>
</p>
Introduction to broadcasting. <br>
For more details, please see: <br>
https://docs.scipy.org/doc/numpy-1.10.1/user/basics.broadcasting.html
End of explanation
"""
# create our 3x4 matrix
arrA = np.array([[1,2,3,4],[5,6,7,8],[9,10,11,12]])
print(arrA)
# create our 4x1 array
arrB = [0,1,0,2]
print(arrB)
# add the two together using broadcasting
print(arrA + arrB)
"""
Explanation: Example from the slides:
End of explanation
"""
from numpy import arange
from timeit import Timer
size = 1000000
timeits = 1000
# create the ndarray with values 0,1,2...,size-1
nd_array = arange(size)
print( type(nd_array) )
# timer expects the operation as a parameter,
# here we pass nd_array.sum()
timer_numpy = Timer("nd_array.sum()", "from __main__ import nd_array")
print("Time taken by numpy ndarray: %f seconds" %
(timer_numpy.timeit(timeits)/timeits))
# create the list with values 0,1,2...,size-1
a_list = list(range(size))
print (type(a_list) )
# timer expects the operation as a parameter, here we pass sum(a_list)
timer_list = Timer("sum(a_list)", "from __main__ import a_list")
print("Time taken by list: %f seconds" %
(timer_list.timeit(timeits)/timeits))
"""
Explanation: <p style="font-family: Arial; font-size:2.75em;color:purple; font-style:bold"><br>
Speedtest: ndarrays vs lists
<br><br>
</p>
First set up parameters for the speed test. We'll be testing the time to sum elements in an ndarray versus a list.
End of explanation
"""
x = np.array([ 23.23, 24.24] )
np.save('an_array', x)
np.load('an_array.npy')
"""
Explanation: <p style="font-family: Arial; font-size:2.75em;color:purple; font-style:bold"><br>
Read or Write to Disk:
<br><br>
</p>
<p style="font-family: Arial; font-size:1.3em;color:#2462C0; font-style:bold"><br>
Binary Format:</p>
End of explanation
"""
np.savetxt('array.txt', X=x, delimiter=',')
!cat array.txt
np.loadtxt('array.txt', delimiter=',')
"""
Explanation: <p style="font-family: Arial; font-size:1.3em;color:#2462C0; font-style:bold"><br>
Text Format:</p>
End of explanation
"""
# determine the dot product of two matrices
x2d = np.array([[1,1],[1,1]])
y2d = np.array([[2,2],[2,2]])
print(x2d.dot(y2d))
print()
print(np.dot(x2d, y2d))
# determine the inner product of two vectors
a1d = np.array([9 , 9 ])
b1d = np.array([10, 10])
print(a1d.dot(b1d))
print()
print(np.dot(a1d, b1d))
# dot product of an array and a vector
print(x2d.dot(a1d))
print()
print(np.dot(x2d, a1d))
"""
Explanation: <p style="font-family: Arial; font-size:2.75em;color:purple; font-style:bold"><br>
Additional Common ndarray Operations
<br><br></p>
<p style="font-family: Arial; font-size:1.75em;color:#2462C0; font-style:bold"><br>
Dot Product on Matrices and Inner Product on Vectors:
</p>
End of explanation
"""
# sum elements in the array
ex1 = np.array([[11,12],[21,22]])
print(np.sum(ex1)) # add all members
print(np.sum(ex1, axis=0)) # columnwise sum
print(np.sum(ex1, axis=1)) # rowwise sum
"""
Explanation: <p style="font-family: Arial; font-size:1.75em;color:#2462C0; font-style:bold"><br>
Sum:
</p>
End of explanation
"""
# random array
x = np.random.randn(8)
x
# another random array
y = np.random.randn(8)
y
# returns element wise maximum between two arrays
np.maximum(x, y)
"""
Explanation: <p style="font-family: Arial; font-size:1.75em;color:#2462C0; font-style:bold"><br>
Element-wise Functions: </p>
For example, let's compare the values of two arrays and take the element-wise maximum of each pair.
End of explanation
"""
# grab values from 0 through 19 in an array
arr = np.arange(20)
print(arr)
# reshape to be a 4 x 5 matrix
arr.reshape(4,5)
"""
Explanation: <p style="font-family: Arial; font-size:1.75em;color:#2462C0; font-style:bold"><br>
Reshaping array:
</p>
End of explanation
"""
# transpose
ex1 = np.array([[11,12],[21,22]])
ex1.T
"""
Explanation: <p style="font-family: Arial; font-size:1.75em;color:#2462C0; font-style:bold"><br>
Transpose:
</p>
End of explanation
"""
x_1 = np.array([1,2,3,4,5])
y_1 = np.array([11,22,33,44,55])
filter = np.array([True, False, True, False, True])
out = np.where(filter, x_1, y_1)
print(out)
mat = np.random.rand(5,5)
mat
np.where( mat > 0.5, 1000, -1)
"""
Explanation: <p style="font-family: Arial; font-size:1.75em;color:#2462C0; font-style:bold"><br>
Indexing using where():</p>
End of explanation
"""
arr_bools = np.array([ True, False, True, True, False ])
arr_bools.any()
arr_bools.all()
"""
Explanation: <p style="font-family: Arial; font-size:1.75em;color:#2462C0; font-style:bold"><br>
"any" or "all" conditionals:</p>
End of explanation
"""
Y = np.random.normal(size = (1,5))[0]
print(Y)
Z = np.random.randint(low=2,high=50,size=4)
print(Z)
np.random.permutation(Z) #return a new ordering of elements in Z
np.random.uniform(size=4) #uniform distribution
np.random.normal(size=4) #normal distribution
"""
Explanation: <p style="font-family: Arial; font-size:1.75em;color:#2462C0; font-style:bold"><br>
Random Number Generation:
</p>
End of explanation
"""
K = np.random.randint(low=2,high=50,size=(2,2))
print(K)
print()
M = np.random.randint(low=2,high=50,size=(2,2))
print(M)
np.vstack((K,M))
np.hstack((K,M))
np.concatenate([K, M], axis = 0)
np.concatenate([K, M.T], axis = 1)
"""
Explanation: <p style="font-family: Arial; font-size:1.75em;color:#2462C0; font-style:bold"><br>
Merging data sets:
</p>
End of explanation
"""
|
seg/2016-ml-contest | LA_Team/Facies_classification_LA_TEAM_06.ipynb | apache-2.0 | %%sh
pip install pandas
pip install scikit-learn
pip install tpot
from __future__ import print_function
import numpy as np
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import KFold , StratifiedKFold
from classification_utilities import display_cm, display_adj_cm
from sklearn.metrics import confusion_matrix, f1_score
from sklearn import preprocessing
from sklearn.model_selection import LeavePGroupsOut
from sklearn.multiclass import OneVsOneClassifier
from sklearn.ensemble import RandomForestClassifier
from scipy.signal import medfilt
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import BernoulliNB
from sklearn.pipeline import make_pipeline, make_union
from sklearn.preprocessing import FunctionTransformer
import xgboost as xgb
from xgboost.sklearn import XGBClassifier
"""
Explanation: Facies classification using Machine Learning
LA Team Submission 6
Lukas Mosser, Alfredo De la Fuente
In this approach for solving the facies classification problem (https://github.com/seg/2016-ml-contest) we will explore the following strategies:
- Features Exploration: based on Paolo Bestagini's work, we will consider imputation, normalization and augmentation routines for the initial features.
- Model tuning:
Libraries
We will need to install the following libraries and packages.
End of explanation
"""
#Load Data
data = pd.read_csv('../facies_vectors.csv')
# Parameters
feature_names = ['GR', 'ILD_log10', 'DeltaPHI', 'PHIND', 'PE', 'NM_M', 'RELPOS']
facies_names = ['SS', 'CSiS', 'FSiS', 'SiSh', 'MS', 'WS', 'D', 'PS', 'BS']
facies_colors = ['#F4D03F', '#F5B041','#DC7633','#6E2C00', '#1B4F72','#2E86C1', '#AED6F1', '#A569BD', '#196F3D']
# Store features and labels
X = data[feature_names].values
y = data['Facies'].values
# Store well labels and depths
well = data['Well Name'].values
depth = data['Depth'].values
# Fill 'PE' missing values with mean
imp = preprocessing.Imputer(missing_values='NaN', strategy='mean', axis=0)
imp.fit(X)
X = imp.transform(X)
"""
Explanation: Data Preprocessing
End of explanation
"""
# Feature windows concatenation function
def augment_features_window(X, N_neig):
# Parameters
N_row = X.shape[0]
N_feat = X.shape[1]
# Zero padding
X = np.vstack((np.zeros((N_neig, N_feat)), X, (np.zeros((N_neig, N_feat)))))
# Loop over windows
X_aug = np.zeros((N_row, N_feat*(2*N_neig+1)))
for r in np.arange(N_row)+N_neig:
this_row = []
for c in np.arange(-N_neig,N_neig+1):
this_row = np.hstack((this_row, X[r+c]))
X_aug[r-N_neig] = this_row
return X_aug
# Feature gradient computation function
def augment_features_gradient(X, depth):
# Compute features gradient
d_diff = np.diff(depth).reshape((-1, 1))
d_diff[d_diff==0] = 0.001
X_diff = np.diff(X, axis=0)
X_grad = X_diff / d_diff
# Compensate for last missing value
X_grad = np.concatenate((X_grad, np.zeros((1, X_grad.shape[1]))))
return X_grad
# Feature augmentation function
def augment_features(X, well, depth, N_neig=1):
# Augment features
X_aug = np.zeros((X.shape[0], X.shape[1]*(N_neig*2+2)))
for w in np.unique(well):
w_idx = np.where(well == w)[0]
X_aug_win = augment_features_window(X[w_idx, :], N_neig)
X_aug_grad = augment_features_gradient(X[w_idx, :], depth[w_idx])
X_aug[w_idx, :] = np.concatenate((X_aug_win, X_aug_grad), axis=1)
# Find padded rows
padded_rows = np.unique(np.where(X_aug[:, 0:7] == np.zeros((1, 7)))[0])
return X_aug, padded_rows
X_aug, padded_rows = augment_features(X, well, depth)
# Initialize model selection methods
lpgo = LeavePGroupsOut(2)
# Generate splits
split_list = []
for train, val in lpgo.split(X, y, groups=data['Well Name']):
hist_tr = np.histogram(y[train], bins=np.arange(len(facies_names)+1)+.5)
hist_val = np.histogram(y[val], bins=np.arange(len(facies_names)+1)+.5)
if np.all(hist_tr[0] != 0) & np.all(hist_val[0] != 0):
split_list.append({'train':train, 'val':val})
def preprocess():
# Preprocess data to use in model
X_train_aux = []
X_test_aux = []
y_train_aux = []
y_test_aux = []
# For each data split
split = split_list[5]
# Remove padded rows
split_train_no_pad = np.setdiff1d(split['train'], padded_rows)
# Select training and validation data from current split
X_tr = X_aug[split_train_no_pad, :]
X_v = X_aug[split['val'], :]
y_tr = y[split_train_no_pad]
y_v = y[split['val']]
# Select well labels for validation data
well_v = well[split['val']]
# Feature normalization
scaler = preprocessing.RobustScaler(quantile_range=(25.0, 75.0)).fit(X_tr)
X_tr = scaler.transform(X_tr)
X_v = scaler.transform(X_v)
X_train_aux.append( X_tr )
X_test_aux.append( X_v )
y_train_aux.append( y_tr )
y_test_aux.append ( y_v )
X_train = np.concatenate( X_train_aux )
X_test = np.concatenate ( X_test_aux )
y_train = np.concatenate ( y_train_aux )
y_test = np.concatenate ( y_test_aux )
return X_train , X_test , y_train , y_test
"""
Explanation: We proceed to run Paolo Bestagini's routine to include a small window of values to account for the spatial component in the log analysis, as well as the gradient information with respect to depth. This will be our prepared training dataset.
End of explanation
"""
from tpot import TPOTClassifier
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = preprocess()
tpot = TPOTClassifier(generations=5, population_size=100,
verbosity=2, max_eval_time_mins=30,
max_time_mins=6*60, scoring='f1_micro',
random_state = 17)
tpot.fit(X_train, y_train)
print(tpot.score(X_test, y_test))
#tpot.export('FinalPipeline_LM_long_2.py')
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import BernoulliNB
from sklearn.pipeline import make_pipeline, make_union
from sklearn.preprocessing import FunctionTransformer, StandardScaler
import xgboost as xgb
from xgboost.sklearn import XGBClassifier
# Train and test a classifier
def train_and_test(X_tr, y_tr, X_v, well_v):
# Feature normalization
scaler = preprocessing.RobustScaler(quantile_range=(25.0, 75.0)).fit(X_tr)
X_tr = scaler.transform(X_tr)
X_v = scaler.transform(X_v)
# Train classifier
#clf = make_pipeline(make_union(VotingClassifier([("est", ExtraTreesClassifier(criterion="gini", max_features=1.0, n_estimators=500))]), FunctionTransformer(lambda X: X)), XGBClassifier(learning_rate=0.73, max_depth=10, min_child_weight=10, n_estimators=500, subsample=0.27))
#clf = make_pipeline( KNeighborsClassifier(n_neighbors=5, weights="distance") )
#clf = make_pipeline(MaxAbsScaler(),make_union(VotingClassifier([("est", RandomForestClassifier(n_estimators=500))]), FunctionTransformer(lambda X: X)),ExtraTreesClassifier(criterion="entropy", max_features=0.0001, n_estimators=500))
# * clf = make_pipeline( make_union(VotingClassifier([("est", BernoulliNB(alpha=60.0, binarize=0.26, fit_prior=True))]), FunctionTransformer(lambda X: X)),RandomForestClassifier(n_estimators=500))
clf = make_pipeline(
make_union(VotingClassifier([("est", BernoulliNB(alpha=0.41000000000000003, binarize=0.43, fit_prior=True))]), FunctionTransformer(lambda X: X)),
StandardScaler(),
RandomForestClassifier(n_estimators=500)
)
clf.fit(X_tr, y_tr)
# Test classifier
y_v_hat = clf.predict(X_v)
# Clean isolated facies for each well
for w in np.unique(well_v):
y_v_hat[well_v==w] = medfilt(y_v_hat[well_v==w], kernel_size=5)
return y_v_hat
"""
Explanation: Data Analysis
In this section we will run a Cross Validation routine
End of explanation
"""
#Load testing data
test_data = pd.read_csv('../validation_data_nofacies.csv')
# Prepare training data
X_tr = X
y_tr = y
# Augment features
X_tr, padded_rows = augment_features(X_tr, well, depth)
# Remove padded rows
X_tr = np.delete(X_tr, padded_rows, axis=0)
y_tr = np.delete(y_tr, padded_rows, axis=0)
# Prepare test data
well_ts = test_data['Well Name'].values
depth_ts = test_data['Depth'].values
X_ts = test_data[feature_names].values
# Augment features
X_ts, padded_rows = augment_features(X_ts, well_ts, depth_ts)
# Predict test labels
y_ts_hat = train_and_test(X_tr, y_tr, X_ts, well_ts)
# Save predicted labels
test_data['Facies'] = y_ts_hat
test_data.to_csv('Prediction_XXI_LM_Final.csv')
"""
Explanation: Prediction
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.18/_downloads/3a40d74661a066ddd49c83c766d57670/plot_visualize_epochs.ipynb | bsd-3-clause | # sphinx_gallery_thumbnail_number = 7
import os.path as op
import mne
data_path = op.join(mne.datasets.sample.data_path(), 'MEG', 'sample')
raw = mne.io.read_raw_fif(
op.join(data_path, 'sample_audvis_raw.fif'), preload=True)
raw.load_data().filter(None, 9, fir_design='firwin')
raw.set_eeg_reference('average', projection=True) # set EEG average reference
event_id = {'auditory/left': 1, 'auditory/right': 2, 'visual/left': 3,
'visual/right': 4, 'smiley': 5, 'button': 32}
events = mne.read_events(op.join(data_path, 'sample_audvis_raw-eve.fif'))
epochs = mne.Epochs(raw, events, event_id=event_id, tmin=-0.2, tmax=.5)
"""
Explanation: Visualize Epochs data
End of explanation
"""
epochs.plot(block=True)
"""
Explanation: This tutorial focuses on visualization of epoched data. All of the functions
introduced here are basically high level matplotlib functions with built in
intelligence to work with epoched data. All the methods return a handle to
matplotlib figure instance.
Events used for constructing the epochs here are the triggers for the subject
being presented a smiley face at the center of the visual field. More details
on the paradigm can be found in the sample dataset documentation.
All plotting functions start with plot. Let's start with the most
obvious. :func:mne.Epochs.plot offers an interactive browser that allows
rejection by hand when called in combination with a keyword block=True.
This blocks the execution of the script until the browser window is closed.
End of explanation
"""
events = mne.pick_events(events, include=[5, 32])
mne.viz.plot_events(events)
epochs['smiley'].plot(events=events)
"""
Explanation: The numbers at the top refer to the event id of the epoch. The number at the
bottom is the running numbering for the epochs.
Since we did no artifact correction or rejection, there are epochs
contaminated with blinks and saccades. For instance, epoch number 1 seems to
be contaminated by a blink (scroll to the bottom to view the EOG channel).
This epoch can be marked for rejection by clicking on top of the browser
window. The epoch should turn red when you click it. This means that it will
be dropped as the browser window is closed.
It is possible to plot event markers on epoched data by passing events
keyword to the epochs plotter. The events are plotted as vertical lines and
they follow the same coloring scheme as :func:mne.viz.plot_events. The
events plotter gives you all the events with a rough idea of the timing.
Since the colors are the same, the event plotter can also function as a
legend for the epochs plotter events. It is also possible to pass your own
colors via event_colors keyword. Here we can plot the reaction times
between seeing the smiley face and the button press (event 32).
When events are passed, the epoch numbering at the bottom is switched off by
default to avoid overlaps. You can turn it back on via settings dialog by
pressing o key. You should check out help at the lower left corner of the
window for more information about the interactive features.
End of explanation
"""
epochs.plot_image(278, cmap='interactive', sigma=1., vmin=-250, vmax=250)
"""
Explanation: To plot individual channels as an image, where you see all the epochs at one
glance, you can use function :func:mne.Epochs.plot_image. It shows the
amplitude of the signal over all the epochs plus an average (evoked response)
of the activation. We explicitly set interactive colorbar on (it is also on
by default for plotting functions with a colorbar except the topo plots). In
interactive mode you can scale and change the colormap with mouse scroll and
up/down arrow keys. You can also drag the colorbar with left/right mouse
button. Hitting space bar resets the scale.
End of explanation
"""
epochs.plot_image(combine='gfp', group_by='type', sigma=2., cmap="YlGnBu_r")
"""
Explanation: We can also give an overview of all channels by calculating the global
field power (or other aggregation methods). However, combining
multiple channel types (e.g., MEG and EEG) in this way is not sensible.
Instead, we can use the group_by parameter. Setting group_by to
'type' combines channels by type.
group_by can also be used to group channels into arbitrary groups, e.g.
regions of interest, by providing a dictionary containing
group name -> channel indices mappings.
End of explanation
"""
epochs.plot_topo_image(vmin=-250, vmax=250, title='ERF images', sigma=2.,
fig_facecolor='w', font_color='k')
"""
Explanation: You also have functions for plotting channelwise information arranged into a
shape of the channel array. The image plotting uses automatic scaling by
default, but noisy channels and different channel types can cause the scaling
to be a bit off. Here we define the limits by hand.
End of explanation
"""
|
deepmind/deepmind-research | gated_linear_networks/colabs/dendritic_gated_network.ipynb | apache-2.0 | # Copyright 2021 DeepMind Technologies Limited. All rights reserved.
#
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import numpy as np
from sklearn import datasets
from sklearn import preprocessing
from sklearn import model_selection
from typing import List, Optional
"""
Explanation: Simple Dendritic Gated Networks in numpy
This colab implements a Dendritic Gated Network (DGN) solving a regression (using quadratic loss) or a binary classification problem (using Bernoulli log loss).
See our paper titled "A rapid and efficient learning rule for biological neural circuits" for details of the DGN model.
Some implementation details:
- We utilize sklearn.datasets.load_breast_cancer for binary classification and sklearn.datasets.load_diabetes for regression.
- This code is meant for educational purposes only. It is not optimized for high performance, either in terms of computational efficiency or quality of fit.
- The network is trained on 80% of the dataset and tested on the rest. For classification, we report log loss (negative log likelihood) and accuracy (percentage of correctly identified labels). For regression, we report MSE expressed in units of target variance.
End of explanation
"""
do_classification = True # if False, does regression
"""
Explanation: Choose classification or regression
End of explanation
"""
if do_classification:
features, targets = datasets.load_breast_cancer(return_X_y=True)
else:
features, targets = datasets.load_diabetes(return_X_y=True)
x_train, x_test, y_train, y_test = model_selection.train_test_split(
features, targets, test_size=0.2, random_state=0)
n_features = x_train.shape[-1]
# Input features are centered and scaled to unit variance:
feature_encoder = preprocessing.StandardScaler()
x_train = feature_encoder.fit_transform(x_train)
x_test = feature_encoder.transform(x_test)
if not do_classification:
# Continuous targets are centered and scaled to unit variance:
target_encoder = preprocessing.StandardScaler()
y_train = np.squeeze(target_encoder.fit_transform(y_train[:, np.newaxis]))
y_test = np.squeeze(target_encoder.transform(y_test[:, np.newaxis]))
"""
Explanation: Load dataset
End of explanation
"""
def step_square_loss(inputs: np.ndarray,
weights: List[np.ndarray],
hyperplanes: List[np.ndarray],
hyperplane_bias_magnitude: Optional[float] = 1.,
learning_rate: Optional[float] = 1e-5,
target: Optional[float] = None,
update: bool = False,
):
"""Implements a DGN inference/update using square loss."""
r_in = inputs
side_info = np.hstack([hyperplane_bias_magnitude, inputs])
for w, h in zip(weights, hyperplanes): # loop over layers
r_in = np.hstack([1., r_in]) # add biases
gate_values = np.heaviside(h.dot(side_info), 0).astype(bool)
effective_weights = gate_values.dot(w).sum(axis=1)
r_out = effective_weights.dot(r_in)
if update:
grad = (r_out[:, None] - target) * r_in[None]
w -= learning_rate * gate_values[:, :, None] * grad[:, None]
r_in = r_out
loss = (target - r_out)**2 / 2
return r_out, loss
def sigmoid(x): # numerically stable sigmoid
return np.exp(-np.logaddexp(0, -x))
def inverse_sigmoid(x):
return np.log(x/(1-x))
def step_bernoulli(inputs: np.ndarray,
weights: List[np.ndarray],
hyperplanes: List[np.ndarray],
hyperplane_bias_magnitude: Optional[float] = 1.,
learning_rate: Optional[float] = 1e-5,
epsilon: float = 0.01,
target: Optional[float] = None,
update: bool = False,
):
"""Implements a DGN inference/update using Bernoulli log loss."""
r_in = np.clip(sigmoid(inputs), epsilon, 1-epsilon)
side_info = np.hstack([hyperplane_bias_magnitude, inputs])
for w, h in zip(weights, hyperplanes): # loop over layers
r_in = np.hstack([sigmoid(1.), r_in]) # add biases
h_in = inverse_sigmoid(r_in)
gate_values = np.heaviside(h.dot(side_info), 0).astype(bool)
effective_weights = gate_values.dot(w).sum(axis=1)
h_out = effective_weights.dot(h_in)
r_out_unclipped = sigmoid(h_out)
r_out = np.clip(r_out_unclipped, epsilon, 1 - epsilon)
if update:
update_indicator = np.abs(target - r_out_unclipped) > epsilon
grad = (r_out[:, None] - target) * h_in[None] * update_indicator[:, None]
w -= learning_rate * gate_values[:, :, None] * grad[:, None]
r_in = r_out
loss = - (target * np.log(r_out) + (1 - target) * np.log(1 - r_out))
return r_out, loss
def forward_pass(step_fn, x, y, weights, hyperplanes, learning_rate, update):
losses, outputs = np.zeros(len(y)), np.zeros(len(y))
for i, (x_i, y_i) in enumerate(zip(x, y)):
outputs[i], losses[i] = step_fn(x_i, weights, hyperplanes, target=y_i,
learning_rate=learning_rate, update=update)
return np.mean(losses), outputs
"""
Explanation: DGN inference/update
End of explanation
"""
# number of neurons per layer, the last element must be 1
n_neurons = np.array([100, 10, 1])
n_branches = 20  # number of dendritic branches per neuron
"""
Explanation: Define architecture
End of explanation
"""
n_inputs = np.hstack([n_features + 1, n_neurons[:-1] + 1]) # 1 for the bias
dgn_weights = [np.zeros((n_neuron, n_branches, n_input))
for n_neuron, n_input in zip(n_neurons, n_inputs)]
# Fixing random seed for reproducibility:
np.random.seed(12345)
dgn_hyperplanes = [
np.random.normal(0, 1, size=(n_neuron, n_branches, n_features + 1))
for n_neuron in n_neurons]
# By default, the weight parameters are drawn from a normalised Gaussian:
dgn_hyperplanes = [
h_ / np.linalg.norm(h_[:, :, :-1], axis=(1, 2))[:, None, None]
for h_ in dgn_hyperplanes]
"""
Explanation: Initialise weights and gating parameters
End of explanation
"""
if do_classification:
eta = 1e-4
n_epochs = 3
step = step_bernoulli
else:
eta = 1e-5
n_epochs = 10
step = step_square_loss
if do_classification:
step = step_bernoulli
else:
step = step_square_loss
print('Training on {} problem for {} epochs with learning rate {}.'.format(
['regression', 'classification'][do_classification], n_epochs, eta))
print('This may take a minute. Please be patient...')
for epoch in range(0, n_epochs + 1):
train_loss, train_pred = forward_pass(
step, x_train, y_train, dgn_weights,
dgn_hyperplanes, eta, update=(epoch > 0))
test_loss, test_pred = forward_pass(
step, x_test, y_test, dgn_weights,
dgn_hyperplanes, eta, update=False)
to_print = 'epoch: {}, test loss: {:.3f} (train: {:.3f})'.format(
epoch, test_loss, train_loss)
if do_classification:
accuracy_train = np.mean(np.round(train_pred) == y_train)
accuracy = np.mean(np.round(test_pred) == y_test)
to_print += ', test accuracy: {:.3f} (train: {:.3f})'.format(
accuracy, accuracy_train)
print(to_print)
"""
Explanation: Train
End of explanation
"""
|
Alexoner/skynet | notebooks/knn.ipynb | mit | # Run some setup code for this notebook.
import random
import numpy as np
from skynet.utils.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
# This is a bit of magic to make matplotlib figures appear inline in the notebook
# rather than in a new window.
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# Some more magic so that the notebook will reload external python modules;
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
# Load the raw CIFAR-10 data.
cifar10_dir = '../skynet/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# As a sanity check, we print out the size of the training and test data.
print('Training data shape: ', X_train.shape)
print('Training labels shape: ', y_train.shape)
print('Test data shape: ', X_test.shape)
print('Test labels shape: ', y_test.shape)
# Visualize some examples from the dataset.
# We show a few examples of training images from each class.
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
num_classes = len(classes)
samples_per_class = 7
for y, cls in enumerate(classes):
idxs = np.flatnonzero(y_train == y)
idxs = np.random.choice(idxs, samples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt_idx = i * num_classes + y + 1
plt.subplot(samples_per_class, num_classes, plt_idx)
plt.imshow(X_train[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls)
plt.show()
# Subsample the data for more efficient code execution in this exercise
num_training = 5000
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
num_test = 500
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
# Reshape the image data into rows
X_train = np.reshape(X_train, (X_train.shape[0], -1))
X_test = np.reshape(X_test, (X_test.shape[0], -1))
print(X_train.shape, X_test.shape)
from skynet.linear import KNearestNeighbor
# Create a kNN classifier instance.
# Remember that training a kNN classifier is a noop:
# the Classifier simply remembers the data and does no further processing
classifier = KNearestNeighbor()
classifier.train(X_train, y_train)
"""
Explanation: k-Nearest Neighbor (kNN) exercise
Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the assignments page on the course website.
The kNN classifier consists of two stages:
During training, the classifier takes the training data and simply remembers it
During testing, kNN classifies every test image by comparing it to all training images and transferring the labels of the k most similar training examples
The value of k is cross-validated
In this exercise you will implement these steps and understand the basic Image Classification pipeline, cross-validation, and gain proficiency in writing efficient, vectorized code.
End of explanation
"""
# Open cs231n/classifiers/k_nearest_neighbor.py and implement
# compute_distances_two_loops.
# Test your implementation:
dists = classifier.compute_distances_two_loops(X_test)
print(dists.shape)
# We can visualize the distance matrix: each row is a single test example and
# its distances to training examples
plt.imshow(dists, interpolation='none')
plt.show()
"""
Explanation: We would now like to classify the test data with the kNN classifier. Recall that we can break down this process into two steps:
First we must compute the distances between all test examples and all train examples.
Given these distances, for each test example we find the k nearest examples and have them vote for the label
Let's begin by computing the distance matrix between all training and test examples. For example, if there are Ntr training examples and Nte test examples, this stage should result in an Nte x Ntr matrix where each element (i,j) is the distance between the i-th test and j-th train example.
First, open cs231n/classifiers/k_nearest_neighbor.py and implement the function compute_distances_two_loops that uses a (very inefficient) double loop over all pairs of (test, train) examples and computes the distance matrix one element at a time.
End of explanation
"""
# Now implement the function predict_labels and run the code below:
# We use k = 1 (which is Nearest Neighbor).
y_test_pred = classifier.predict_labels(dists, k=1)
# Compute and print the fraction of correctly predicted examples
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print('Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy))
"""
Explanation: Inline Question #1: Notice the structured patterns in the distance matrix, where some rows or columns are visibly brighter. (Note that with the default color scheme black indicates low distances while white indicates high distances.)
What in the data is the cause behind the distinctly bright rows?
What causes the columns?
Your Answer: fill this in.
Bright rows indicate that the corresponding test image is dissimilar to most of the images in the training set.
Bright columns indicate that the corresponding training image is dissimilar to most of the test images, which makes it less useful for classification.
This may be due to noise or outlier images, a kind of generalization problem that kNN is prone to.
End of explanation
"""
y_test_pred = classifier.predict_labels(dists, k=5)
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print('Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy))
"""
Explanation: You should expect to see approximately 27% accuracy. Now let's try out a larger k, say k = 5:
End of explanation
"""
# Now lets speed up distance matrix computation by using partial vectorization
# with one loop. Implement the function compute_distances_one_loop and run the
# code below:
dists_one = classifier.compute_distances_one_loop(X_test)
# To ensure that our vectorized implementation is correct, we make sure that it
# agrees with the naive implementation. There are many ways to decide whether
# two matrices are similar; one of the simplest is the Frobenius norm. In case
# you haven't seen it before, the Frobenius norm of two matrices is the square
# root of the squared sum of differences of all elements; in other words, reshape
# the matrices into vectors and compute the Euclidean distance between them.
difference = np.linalg.norm(dists - dists_one, ord='fro')
print('Difference was: %f' % (difference, ))
if difference < 0.001:
print('Good! The distance matrices are the same')
else:
print('Uh-oh! The distance matrices are different')
# Now implement the fully vectorized version inside compute_distances_no_loops
# and run the code
dists_two = classifier.compute_distances_no_loops(X_test)
# check that the distance matrix agrees with the one we computed before:
difference = np.linalg.norm(dists - dists_two, ord='fro')
print('Difference was: %f' % (difference, ))
if difference < 0.001:
print('Good! The distance matrices are the same')
else:
print('Uh-oh! The distance matrices are different')
# Let's compare how fast the implementations are
def time_function(f, *args):
"""
Call a function f with args and return the time (in seconds) that it took to execute.
"""
import time
tic = time.time()
f(*args)
toc = time.time()
return toc - tic
two_loop_time = time_function(classifier.compute_distances_two_loops, X_test)
print('Two loop version took %f seconds' % two_loop_time)
one_loop_time = time_function(classifier.compute_distances_one_loop, X_test)
print('One loop version took %f seconds' % one_loop_time)
no_loop_time = time_function(classifier.compute_distances_no_loops, X_test)
print('No loop version took %f seconds' % no_loop_time)
# you should see significantly faster performance with the fully vectorized implementation
"""
Explanation: You should expect to see a slightly better performance than with k = 1.
End of explanation
"""
num_folds = 5
k_choices = [1, 3, 5, 8, 10, 12, 15, 20, 50, 100]
X_train_folds = []
y_train_folds = []
################################################################################
# TODO: #
# Split up the training data into folds. After splitting, X_train_folds and #
# y_train_folds should each be lists of length num_folds, where #
# y_train_folds[i] is the label vector for the points in X_train_folds[i]. #
# Hint: Look up the numpy array_split function. #
################################################################################
X_train_folds = np.array_split(X_train, num_folds)
y_train_folds = np.array_split(y_train, num_folds)
pass
################################################################################
# END OF YOUR CODE #
################################################################################
# A dictionary holding the accuracies for different values of k that we find
# when running cross-validation. After running cross-validation,
# k_to_accuracies[k] should be a list of length num_folds giving the different
# accuracy values that we found when using that value of k.
k_to_accuracies = {}
################################################################################
# TODO: #
# Perform k-fold cross validation to find the best value of k. For each #
# possible value of k, run the k-nearest-neighbor algorithm num_folds times, #
# where in each case you use all but one of the folds as training data and the #
# last fold as a validation set. Store the accuracies for all fold and all #
# values of k in the k_to_accuracies dictionary. #
################################################################################
for k in k_choices:
classifier = KNearestNeighbor()
k_to_accuracies[k] = []
for i in range(num_folds):
X_train_i = np.vstack(X_train_folds[:i] + X_train_folds[i+1:])
y_train_i = np.hstack(y_train_folds[:i] + y_train_folds[i+1:])
X_val_i = X_train_folds[i]
y_val_i = y_train_folds[i]
classifier.train(X_train_i,y_train_i)
y_val_pred = classifier.predict(X_val_i, k)
num_correct = np.sum(y_val_pred == y_val_i)
accuracy = float(num_correct) / len(y_val_i)
k_to_accuracies[k].append(accuracy)
pass
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out the computed accuracies
for k in sorted(k_to_accuracies):
for accuracy in k_to_accuracies[k]:
print('k = %d, accuracy = %f' % (k, accuracy))
# plot the raw observations
for k in k_choices:
accuracies = k_to_accuracies[k]
plt.scatter([k] * len(accuracies), accuracies)
# plot the trend line with error bars that correspond to standard deviation
accuracies_mean = np.array([np.mean(v) for k,v in sorted(k_to_accuracies.items())])
accuracies_std = np.array([np.std(v) for k,v in sorted(k_to_accuracies.items())])
plt.errorbar(k_choices, accuracies_mean, yerr=accuracies_std)
plt.title('Cross-validation on k')
plt.xlabel('k')
plt.ylabel('Cross-validation accuracy')
plt.show()
# Based on the cross-validation results above, choose the best value for k,
# retrain the classifier using all the training data, and test it on the test
# data. You should be able to get above 28% accuracy on the test data.
best_k = 10
classifier = KNearestNeighbor()
classifier.train(X_train, y_train)
y_test_pred = classifier.predict(X_test, k=best_k)
# Compute and display the accuracy
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print('Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy))
"""
Explanation: Cross-validation
We have implemented the k-Nearest Neighbor classifier but we set the value k = 5 arbitrarily. We will now determine the best value of this hyperparameter with cross-validation.
End of explanation
"""
|
Hugovdberg/timml | notebooks/timml_notebook1_sol.ipynb | mit | %matplotlib inline
from pylab import *
from timml import *
figsize=(8, 8)
ml = ModelMaq(kaq=[10, 20, 5],
z=[0, -20, -40, -80, -90, -140],
c=[4000, 10000])
w = Well(ml, xw=0, yw=0, Qw=10000, rw=0.2, layers=1)
Constant(ml, xr=10000, yr=0, hr=20, layer=0)
Uflow(ml, slope=0.002, angle=0)
ml.solve()
"""
Explanation: TimML Notebook 1
A well in uniform flow
Consider a well in the middle aquifer of a three aquifer system. Aquifer properties are given in Table 1. The well is located at $(x,y)=(0,0)$, the discharge is $Q=10,000$ m$^3$/d and the radius is 0.2 m. There is a uniform flow from West to East with a gradient of 0.002. The head is fixed to 20 m at a distance of 10,000 m downstream of the well. Here is the cookbook recipe to build this model:
Import pylab to use numpy and plotting: from pylab import *
Set figures to be in the notebook with %matplotlib notebook
Import everything from TimML: from timml import *
Create the model and give it a name, for example ml with the command ml = ModelMaq(kaq, z, c) (substitute the correct lists for kaq, z, and c).
Enter the well with the command w = Well(ml, xw, yw, Qw, rw, layers), where the well is called w.
Enter uniform flow with the command Uflow(ml, slope, angle).
Enter the reference head with Constant(ml, xr, yr, head, layer).
Solve the model ml.solve()
Table 1: Aquifer data for exercise 1
|Layer |$k$ (m/d)|$z_b$ (m)|$z_t$|$c$ (days)|
|-------------|--------:|--------:|----:|---------:|
|Aquifer 0 | 10 | -20 | 0 | - |
|Leaky Layer 1| - | -40 | -20 | 4000 |
|Aquifer 1 | 20 | -80 | -40 | - |
|Leaky Layer 2| - | -90 | -80 | 10000 |
|Aquifer 2 | 5 | -140 | -90 | - ||
End of explanation
"""
print('The leakage factors of the aquifers are:')
print(ml.aq.lab)
"""
Explanation: Questions:
Exercise 1a
What are the leakage factors of the aquifer system?
End of explanation
"""
print('The head at the well is:')
print(w.headinside())
"""
Explanation: Exercise 1b
What is the head at the well?
End of explanation
"""
ml.contour(win=[-3000, 3000, -3000, 3000], ngr=50, layers=[0, 1, 2], levels=10,
legend=True, figsize=figsize)
"""
Explanation: Exercise 1c
Create a contour plot of the head in the three aquifers. Use a window with lower left hand corner $(x,y)=(−3000,−3000)$ and upper right hand corner $(x,y)=(3000,3000)$. Notice that the heads in the three aquifers are almost equal at three times the largest leakage factor.
End of explanation
"""
ml.contour(win=[-3000, 3000, -3000, 3000], ngr=50, layers=[1], levels=np.arange(30, 45, 1),
labels=True, legend=['layer 1'], figsize=figsize)
"""
Explanation: Exercise 1d
Create a contour plot of the head in aquifer 1 with labels along the contours. Labels are added when the labels keyword argument is set to True. The number of decimal places can be set with the decimals keyword argument, which is zero by default.
End of explanation
"""
win=[-3000, 3000, -3000, 3000]
ml.plot(win=win, orientation='both', figsize=figsize)
ml.tracelines(-2000 * ones(3), -1000 * ones(3), [-120, -60, -10], hstepmax=50,
win=win, orientation='both')
ml.tracelines(0 * ones(3), 1000 * ones(3), [-120, -50, -10], hstepmax=50,
win=win, orientation='both')
"""
Explanation: Exercise 1e
Create a contour plot with a vertical cross-section below it. Start three pathlines from $(x,y)=(-2000,-1000)$ at levels $z=-120$, $z=-60$, and $z=-10$. Try a few other starting locations.
End of explanation
"""
ml = ModelMaq(kaq=[10, 20, 5],
z=[0, -20, -40, -80, -90, -140],
c=[4000, 10000])
w = Well(ml, xw=0, yw=0, Qw=10000, rw=0.2, layers=1)
Constant(ml, xr=10000, yr=0, hr=20, layer=0)
Uflow(ml, slope=0.002, angle=0)
wabandoned = Well(ml, xw=100, yw=100, Qw=0, rw=0.2, layers=[0, 1])
ml.solve()
ml.contour(win=[-200, 200, -200, 200], ngr=50, layers=[0, 2],
levels=20, color=['C0', 'C1', 'C2'], legend=True, figsize=figsize)
print('The head at the abandoned well is:')
print(wabandoned.headinside())
print('The discharge at the abandoned well is:')
print(wabandoned.discharge())
"""
Explanation: Exercise 1f
Add an abandoned well that is screened in both aquifer 0 and aquifer 1, located at $(x, y) = (100, 100)$ and create contour plot of all aquifers near the well (from (-200,-200) till (200,200)). What are the discharge and the head at the abandoned well? Note that you have to solve the model again!
End of explanation
"""
|
gbtimmon/ase16GBT | code/6/pwang13.ipynb | unlicense | %matplotlib inline
# All the imports
from __future__ import print_function, division
from math import *
import random
import sys
import matplotlib.pyplot as plt
# TODO 1: Enter your unity ID here
__author__ = "pwang13"
class O:
"""
Basic Class which
- Helps dynamic updates
- Pretty Prints
"""
def __init__(self, **kwargs):
self.has().update(**kwargs)
def has(self):
return self.__dict__
def update(self, **kwargs):
self.has().update(kwargs)
return self
def __repr__(self):
show = [':%s %s' % (k, self.has()[k])
for k in sorted(self.has().keys())
if k[0] is not "_"]
txt = ' '.join(show)
if len(txt) > 60:
show = map(lambda x: '\t' + x + '\n', show)
return '{' + ' '.join(show) + '}'
print("Unity ID: ", __author__)
"""
Explanation: Optimizing Real World Problems
In this workshop we will code up a model called POM3 and optimize it using the GA we developed in the first workshop.
POM3 is a software estimation model like XOMO for Software Engineering. It is based on Turner
and Boehm’s model of agile development. It compares traditional plan-based approaches
to agile-based approaches in requirements prioritization. It describes how a team decides which
requirements to implement next. POM3 reveals requirements incrementally in random order, with
which developers plan their work assignments. These assignments are further adjusted based on
current cost and priority of requirement. POM3 is a realistic model which takes more runtime than
standard mathematical models(2-100ms, not 0.006-0.3ms)
End of explanation
"""
# Few Utility functions
def say(*lst):
"""
    Print without going to a new line
"""
print(*lst, end="")
sys.stdout.flush()
def random_value(low, high, decimals=2):
"""
Generate a random number between low and high.
    decimals indicates the number of decimal places
"""
return round(random.uniform(low, high),decimals)
def gt(a, b): return a > b
def lt(a, b): return a < b
def shuffle(lst):
"""
Shuffle a list
"""
random.shuffle(lst)
return lst
class Decision(O):
"""
Class indicating Decision of a problem
"""
def __init__(self, name, low, high):
"""
@param name: Name of the decision
@param low: minimum value
@param high: maximum value
"""
O.__init__(self, name=name, low=low, high=high)
class Objective(O):
"""
Class indicating Objective of a problem
"""
def __init__(self, name, do_minimize=True, low=0, high=1):
"""
@param name: Name of the objective
@param do_minimize: Flag indicating if objective has to be minimized or maximized
"""
O.__init__(self, name=name, do_minimize=do_minimize, low=low, high=high)
def normalize(self, val):
return (val - self.low)/(self.high - self.low)
class Point(O):
"""
Represents a member of the population
"""
def __init__(self, decisions):
O.__init__(self)
self.decisions = decisions
self.objectives = None
def __hash__(self):
return hash(tuple(self.decisions))
def __eq__(self, other):
return self.decisions == other.decisions
def clone(self):
new = Point(self.decisions[:])
new.objectives = self.objectives[:]
return new
class Problem(O):
"""
Class representing the cone problem.
"""
def __init__(self, decisions, objectives):
"""
Initialize Problem.
:param decisions - Metadata for Decisions
:param objectives - Metadata for Objectives
"""
O.__init__(self)
self.decisions = decisions
self.objectives = objectives
@staticmethod
def evaluate(point):
assert False
return point.objectives
@staticmethod
def is_valid(point):
return True
def generate_one(self, retries = 20):
for _ in xrange(retries):
point = Point([random_value(d.low, d.high) for d in self.decisions])
if self.is_valid(point):
return point
raise RuntimeError("Exceeded max runtimes of %d" % 20)
"""
Explanation: The Generic Problem Class
Remember the Problem Class we coded up for GA workshop. Here we abstract it further such that it can be inherited by all the future classes. Go through these utility functions and classes before you proceed further.
End of explanation
"""
class POM3(Problem):
from pom3.pom3 import pom3 as pom3_helper
helper = pom3_helper()
def __init__(self):
"""
Initialize the POM3 classes
"""
names = ["Culture", "Criticality", "Criticality Modifier", "Initial Known",
"Inter-Dependency", "Dynamism", "Size", "Plan", "Team Size"]
lows = [0.1, 0.82, 2, 0.40, 1, 1, 0, 0, 1]
highs = [0.9, 1.20, 10, 0.70, 100, 50, 4, 5, 44]
# TODO 2: Use names, lows and highs defined above to code up decision
# and objective metadata for POM3.
decisions = [Decision(n, l, h) for n , l, h in zip(names, lows, highs)]
objectives = [Objective("Cost", True, 0, 1000), Objective("Score", False, 0, 1),
Objective("Completion", False, 0, 1), Objective("Idle", True, 0, 1)]
# objectives = [Objective("Cost", True, 0, 1000), Objective("Score", False, 0, 1),
# Objective("Completion", False, 0, 1), Objective("idle". True, 0, 1)]
Problem.__init__(self, decisions, objectives)
@staticmethod
def evaluate(point):
if not point.objectives:
point.objectives = POM3.helper.simulate(point.decisions)
return point.objectives
pom3 = POM3()
one = pom3.generate_one()
print(POM3.evaluate(one))
"""
Explanation: Great. Now that the class and its basic methods are defined, let's extend it for
the POM3 model.
POM3 has multiple versions but for this workshop we will code up the POM3A model. It has 9 decisions defined as follows
Culture in [0.1, 0.9]
Criticality in [0.82, 1.20]
Criticality Modifier in [2, 10]
Initially Known in [0.4, 0.7]
Inter-Dependency in [1, 100]
Dynamism in [1, 50]
Size in [0, 4]
Plan in [0, 5]
Team Size in [1, 44]
<img src="pom3.png"/>
The model has 4 objectives
* Cost in [0,10000] - Minimize
* Score in [0,1] - Maximize
* Completion in [0,1] - Maximize
* Idle in [0,1] - Minimize
End of explanation
"""
def populate(problem, size):
"""
Create a Point list of length size
"""
population = []
for _ in range(size):
population.append(problem.generate_one())
return population
def crossover(mom, dad):
"""
Create a new point which contains decisions from
the first half of mom and second half of dad
"""
n = len(mom.decisions)
return Point(mom.decisions[:n//2] + dad.decisions[n//2:])
def mutate(problem, point, mutation_rate=0.01):
"""
Iterate through all the decisions in the point
and if the probability is less than mutation rate
change the decision(randomly set it between its max and min).
"""
for i, decision in enumerate(problem.decisions):
if random.random() < mutation_rate:
point.decisions[i] = random_value(decision.low, decision.high)
return point
def bdom(problem, one, two):
"""
    Return True if one dominates two based
    on binary domination
"""
objs_one = problem.evaluate(one)
objs_two = problem.evaluate(two)
dominates = False
for i, obj in enumerate(problem.objectives):
better = lt if obj.do_minimize else gt
if better(objs_one[i], objs_two[i]):
dominates = True
elif objs_one[i] != objs_two[i]:
return False
return dominates
def fitness(problem, population, point, dom_func):
"""
Evaluate fitness of a point based on the definition in the previous block.
For example point dominates 5 members of population,
then fitness of point is 5.
"""
return len([1 for another in population if dom_func(problem, point, another)])
def elitism(problem, population, retain_size, dom_func):
"""
Sort the population with respect to the fitness
of the points and return the top 'retain_size' points of the population
"""
fitnesses = []
for point in population:
fitnesses.append((fitness(problem, population, point, dom_func), point))
population = [tup[1] for tup in sorted(fitnesses, reverse=True)]
return population[:retain_size]
"""
Explanation: Utility functions for genetic algorithms.
End of explanation
"""
def ga(pop_size = 100, gens = 250, dom_func=bdom):
problem = POM3()
population = populate(problem, pop_size)
[problem.evaluate(point) for point in population]
initial_population = [point.clone() for point in population]
gen = 0
while gen < gens:
say(".")
children = []
for _ in range(pop_size):
mom = random.choice(population)
dad = random.choice(population)
while (mom == dad):
dad = random.choice(population)
child = mutate(problem, crossover(mom, dad))
if problem.is_valid(child) and child not in population+children:
children.append(child)
population += children
population = elitism(problem, population, pop_size, dom_func)
gen += 1
print("")
return initial_population, population
"""
Explanation: Putting it all together and making the GA
End of explanation
"""
def plot_pareto(initial, final):
initial_objs = [point.objectives for point in initial]
final_objs = [point.objectives for point in final]
initial_x = [i[1] for i in initial_objs]
initial_y = [i[2] for i in initial_objs]
final_x = [i[1] for i in final_objs]
final_y = [i[2] for i in final_objs]
plt.scatter(initial_x, initial_y, color='b', marker='+', label='initial')
plt.scatter(final_x, final_y, color='r', marker='o', label='final')
plt.title("Scatter Plot between initial and final population of GA")
plt.ylabel("Score")
plt.xlabel("Completion")
plt.legend(loc=9, bbox_to_anchor=(0.5, -0.175), ncol=2)
plt.show()
initial, final = ga(gens=50)
plot_pareto(initial, final)
"""
Explanation: Visualize
Let's plot the initial population with respect to the final frontier.
End of explanation
"""
|
computational-class/cjc2016 | code/08.07-analyzing_titanic_dataset.ipynb | mit | %matplotlib inline
import matplotlib.pyplot as plt
"""
Explanation: Introduction to the Basics of Statistics
王成军
wangchengjun@nju.edu.cn
计算传播网 http://computational-communication.com
Part 1: Cleaning the Titanic dataset with Pandas
Practice using Pandas
Part 2: Analyzing Tianya forum thread data
Learn to use Statsmodels
End of explanation
"""
import pandas as pd
"""
Explanation:
End of explanation
"""
import pandas as pd
train = pd.read_csv('../data/tatanic_train.csv',\
sep = ",", header=0)
test = pd.read_csv('../data/tatanic_test.csv',\
sep = ",", header=0)
"""
Explanation: Statsmodels
http://statsmodels.sourceforge.net/
Statsmodels is a Python module that allows users to explore data, estimate statistical models, and perform statistical tests.
An extensive list of descriptive statistics, statistical tests, plotting functions, and result statistics are available for different types of data and each estimator.
Researchers across fields may find that statsmodels fully meets their needs for statistical computing and data analysis in Python.
Cleaning the Titanic data with pandas
Read the data from the local machine
End of explanation
"""
train.head()
train.describe()
train.shape#, len(train)
#train.columns
# Passengers that survived vs passengers that passed away
train["Survived"][:3]
# Passengers that survived vs passengers that passed away
train["Survived"].value_counts()
# As proportions
train["Survived"].value_counts(normalize = True)
train['Sex'].value_counts()
train[train['Sex']=='female'][:3]#[train['Pclass'] == 3]
# Males that survived vs males that passed away
train[["Survived", 'Fare']][train["Sex"] == 'male'][:3]
# Males that survived vs males that passed away
train["Survived"][train["Sex"] == 'male'].value_counts()
# Females that survived vs Females that passed away
train["Survived"][train["Sex"] == 'female'].value_counts()
# Normalized male survival
train["Survived"][train["Sex"] == 'male'].value_counts(normalize = True)
# Normalized female survival
train["Survived"][train["Sex"] == 'female'].value_counts(normalize = True)
# Create the column Child, and indicate whether child or not a child. Print the new column.
train["Child"] = float('NaN')
train.Child[train.Age < 5] = 1
train.Child[train.Age >= 5] = 0
print(train.Child[:3])
# Normalized Survival Rates for under 18
train.Survived[train.Child == 1].value_counts(normalize = True)
# Normalized Survival Rates for over 18
train.Survived[train.Child == 0].value_counts(normalize = True)
age = pd.cut(train['Age'], [0, 18, 80])
train.pivot_table('Survived', ['Sex', age], 'Pclass')
fare = pd.qcut(train['Fare'], 2)
train.pivot_table('Survived', ['Sex', age], [fare, 'Pclass'])
# Create a copy of test: test_one
test_one = test
# Initialize a Survived column to 0
test_one['Survived'] = 0
# Set Survived to 1 if Sex equals "female" and print the `Survived` column from `test_one`
test_one.Survived[test_one.Sex =='female'] = 1
print(test_one.Survived[:3])
#Convert the male and female groups to integer form
train["Sex"][train["Sex"] == "male"] = 0
train["Sex"][train["Sex"] == "female"] = 1
#Impute the Embarked variable
train["Embarked"] = train["Embarked"].fillna('S')
#Convert the Embarked classes to integer form
train["Embarked"][train["Embarked"] == "S"] = 0
train["Embarked"][train["Embarked"] == "C"] = 1
train["Embarked"][train["Embarked"] == "Q"] = 2
"""
Explanation: You can easily explore a DataFrame
- .describe() summarizes the columns/features of the DataFrame, including the count of observations, mean, max and so on.
- Another useful trick is to look at the dimensions of the DataFrame. This is done by requesting the .shape attribute of your DataFrame object. (ex. your_data.shape)
End of explanation
"""
df = pd.read_csv('../data/tianya_bbs_threads_list.txt',\
sep = "\t", names = ['title','link', \
'author','author_page',\
'click','reply','time'])
df[:2]
# df=df.rename(columns = {0:'title', 1:'link', \
# 2:'author',3:'author_page',\
# 4:'click', 5:'reply', 6:'time'})
# df[:5]
da = pd.read_csv('../data/tianya_bbs_threads_author_info.txt',
sep = "\t", names = ['author_page','followed_num',\
'fans_num','post_num', \
'comment_num'])
da[:2]
# da=da.rename(columns = {0:'author_page', 1:'followed_num',\
# 2:'fans_num',3:'post_num', \
# 4:'comment_num'})
# # da[:5]
data = pd.concat([df,da], axis=1)
len(data)
data[:3]
"""
Explanation: Analyzing the Tianya forum thread data
End of explanation
"""
type(data.time[0])
# extract date from datetime
# date = map(lambda x: x[:10], data.time)
date = [i[:10] for i in data.time]
#date = [i[:10] for i in data.time]
data['date'] = pd.to_datetime(date)
data.date[:3]
# convert str to datetime format
data.time = pd.to_datetime(data.time)
data['month'] = data.time.dt.month
data['year'] = data.time.dt.year
data['day'] = data.time.dt.day
type(data.time[0])
data.describe()
#data.head()
"""
Explanation: Time
End of explanation
"""
import statsmodels.api as sm
"""
Explanation: Statsmodels
http://statsmodels.sourceforge.net/
Statsmodels is a Python module that allows users to explore data, estimate statistical models, and perform statistical tests.
An extensive list of descriptive statistics, statistical tests, plotting functions, and result statistics are available for different types of data and each estimator.
Researchers across fields may find that statsmodels fully meets their needs for statistical computing and data analysis in Python.
Features include:
Linear regression models
Generalized linear models
Discrete choice models
Robust linear models
Many models and functions for time series analysis
Nonparametric estimators
A collection of datasets for examples
A wide range of statistical tests
Input-output tools for producing tables in a number of formats and for reading Stata files into NumPy and Pandas.
Plotting functions
Extensive unit tests to ensure correctness of results
Many more models and extensions in development
End of explanation
"""
data.describe()
import numpy as np
np.mean(data.click), np.std(data.click), np.sum(data.click)
# Descriptive statistics without weights
d1 = sm.stats.DescrStatsW(data.click, \
weights=[1 for i in data.click])
d1.mean, d1.var, d1.std, d1.sum
# Descriptive statistics with weights
d1 = sm.stats.DescrStatsW(data.click, weights=data.reply)
d1.mean, d1.var, d1.std, d1.sum
np.median(data.click) # np.percentile
plt.hist(data.click)
plt.show()
plt.hist(data.reply, color = 'green')
plt.show()
plt.hist(np.log(data.click+1), color='green')
plt.hist(np.log(data.reply+1), color='red')
plt.show()
# Plot the height and weight to see
plt.boxplot([np.log(data.click+1)])
plt.show()
# Plot the height and weight to see
plt.boxplot([data.click, data.reply])
plt.show()
def transformData(dat):
results = []
for i in dat:
if i != 'na':
results.append( int(i))
else:
results.append(0)
return results
data.fans_num = transformData(data.fans_num)
data.followed_num = transformData(data.followed_num )
data.post_num = transformData(data.post_num )
data.comment_num = transformData(data.comment_num )
data.describe()
import numpy as np
# Plot the height and weight to see
plt.boxplot([np.log(data.click+1), np.log(data.reply+1),
np.log(data.fans_num+1),\
np.log(data.followed_num + 1)],
labels = ['$Click$', '$Reply$', '$Fans$',\
'$Followed$'])
plt.show()
"""
Explanation: Describe
End of explanation
"""
fig = plt.figure(figsize=(12,4))
data.boxplot(return_type='dict')
plt.yscale('log')
plt.show()
from pandas.tools import plotting
# fig = plt.figure(figsize=(10, 10))
plotting.scatter_matrix(data[['click', 'reply',\
'post_num','comment_num']])
plt.show()
"""
Explanation: Pandas自身已经包含了boxplot的功能
End of explanation
"""
import seaborn # conda install seaborn
seaborn.pairplot(data, vars=['click', 'reply', \
'post_num', 'comment_num'],
kind='reg')
seaborn.pairplot(data, vars=['click', 'reply', 'post_num'],
kind='reg', hue='year')
seaborn.lmplot(y='reply', x='click', data=data, #logx = True,
size = 5)
plt.show()
"""
Explanation: For more plotting operations with pandas.plotting, see:
http://pandas.pydata.org/pandas-docs/version/0.15.0/visualization.html
End of explanation
"""
data.year.value_counts()
d = data.year.value_counts()
dd = pd.DataFrame(d)
dd = dd.sort_index(axis=0, ascending=True)
dd
dd.index
dd_date_str = list(map(lambda x: str(x) +'-01-01', dd.index))
dd_date_str
dd_date = pd.to_datetime(list(dd_date_str))
dd_date
plt.plot(dd_date, dd.year, 'r-o')
plt.show()
ds = dd.cumsum()
ds
d = data.year.value_counts()
dd = pd.DataFrame(d)
dd = dd.sort_index(axis=0, ascending=True)
ds = dd.cumsum()
def getDate(dat):
dat_date_str = list(map(lambda x: str(x) +'-01-01', dat.index))
dat_date = pd.to_datetime(dat_date_str)
return dat_date
ds.date = getDate(ds)
dd.date = getDate(dd)
plt.plot(ds.date, ds.year, 'g-s', label = '$Cumulative\: Number\:of\: Threads$')
plt.plot(dd.date, dd.year, 'r-o', label = '$Yearly\:Number\:of\:Threads$')
plt.legend(loc=2,numpoints=1,fontsize=13)
plt.show()
"""
Explanation: value_counts
End of explanation
"""
dg = data.groupby('year').sum()
dg
dgs = dg.cumsum()
dgs
def getDate(dat):
dat_date_str = list(map(lambda x: str(x) +'-01-01', dat.index))
dat_date = pd.to_datetime(dat_date_str)
return dat_date
dg.date = getDate(dg)
fig = plt.figure(figsize=(12,5))
plt.plot(dg.date, dg.click, 'r-o', label = '$Yearly\:Number\:of\:Clicks$')
plt.plot(dg.date, dg.reply, 'g-s', label = '$Yearly\:Number\:of\:Replies$')
plt.plot(dg.date, dg.fans_num, 'b->', label = '$Yearly\:Number\:of\:Fans$')
plt.yscale('log')
plt.legend(loc=4,numpoints=1,fontsize=13)
plt.show()
data.groupby('year')['click'].sum()
data.groupby('year')['click'].mean()
"""
Explanation: groupby
End of explanation
"""
repost = []
for i in df.title:
if u'转载' in i:
repost.append(1)
else:
repost.append(0)
df['repost'] = repost
df.groupby('repost').median()
df['click'][df['repost']==0][:5]
df['click'][df['repost']==1][:5]
from scipy import stats
stats.ttest_ind(np.log(df.click+1), df.repost)
sm.stats.ttest_ind(np.log(df.click+1), df.repost)
# test statistic, pvalue and degrees of freedom
"""
Explanation: Commonly used statistical analysis methods
t-test
chi-square test
correlation
regression
T-test
http://statsmodels.sourceforge.net/devel/stats.html
RQ: Do reposted articles receive significantly fewer clicks than original articles?
End of explanation
"""
from scipy.stats import chisquare
chisquare([16, 18, 16, 14, 12, 12], \
f_exp=[16, 16, 16, 16, 16, 8])
"""
Explanation: A chi-squared test
https://en.wikipedia.org/wiki/Chi-squared_test
also referred to as χ² test (or chi-square test), is any statistical hypothesis test in which the sampling distribution of the test statistic is a chi-square distribution when the null hypothesis is true.
A chi-squared test can then be used to reject the null hypothesis that the data are independent.
Test statistics that follow a chi-squared distribution arise from an assumption of independent normally distributed data, which is valid in many cases due to the central limit theorem.
Chi-squared tests are often constructed from a sum of squared errors, or through the sample variance.
Suppose there is a city of 1 million residents with four neighborhoods: A, B, C, and D.
A random sample of 650 residents of the city is taken and their occupation is recorded as "blue collar", "white collar", or "no collar".
The null hypothesis is that each person's neighborhood of residence is independent of the person's occupational classification. The data are tabulated as:
| | A | B |C | D | Total|
| -------------|:-------------:|:-------------:|:-------------:|-----:|-----:|
| White collar| 90 | 60 | 104 |95 | 349|
| Blue collar| 30 | 50 | 51 | 20| 151|
| No collar| 30 | 40 | 45 | 35|150|
| Total | 150 | 150| 200| 150| 650|
Let us take the sample living in neighborhood A, 150/650, to estimate what proportion of the whole 1 million people live in neighborhood A.
Similarly we take 349/650 to estimate what proportion of the 1 million people are white-collar workers.
By the assumption of independence under the hypothesis we should "expect" the number of white-collar workers in neighborhood A to be
$
\frac{150}{650} \times \frac{349}{650} \times 650 = 80.54
$
Then in that "cell" of the table, we have
$\frac{(\text{observed}-\text{expected})^2}{\text{expected}} = \frac{(90-80.54)^2}{80.54}$.
The sum of these quantities over all of the cells is the test statistic.
Under the null hypothesis, it has approximately a chi-square distribution whose number of degrees of freedom are
$ (\text{number of rows}-1)(\text{number of columns}-1) = (3-1)(4-1) = 6. $
If the test statistic is improbably large according to that chi-square distribution, then one rejects the null hypothesis of independence.
scipy.stats.chisquare(f_obs, f_exp=None, ddof=0, axis=0)[source]
Calculates a one-way chi square test.
The chi square test tests the null hypothesis that the categorical data has the given frequencies.
Parameters:
- f_obs : array_like Observed frequencies in each category.
- f_exp : array_like, optional Expected frequencies in each category. By default the categories are assumed to be equally likely.
- ddof : int, optional
End of explanation
"""
from scipy.stats import chi2
# p_value = chi2.sf(chi_statistic, df)
print(chi2.sf(3.5, 5))
print(1 - chi2.cdf(3.5,5))
"""
Explanation: The p-value for a chi-square statistic can be computed with the survival function chi2.sf(statistic, df), which equals 1 - chi2.cdf(statistic, df).
End of explanation
"""
# np.corrcoef(data.click, data.reply)
np.corrcoef(np.log(data.click+1), \
np.log(data.reply+1))
data.corr()
plt.plot(df.click, df.reply, 'r-o')
plt.show()
plt.plot(df.click, df.reply, 'gs')
plt.xlabel('$Clicks$', fontsize = 20)
plt.ylabel('$Replies$', fontsize = 20)
plt.xscale('log')
plt.yscale('log')
plt.title('$Allometric\,Law$', fontsize = 20)
plt.show()
"""
Explanation: Correlation
End of explanation
"""
import numpy as np
import statsmodels.api as sm
import statsmodels.formula.api as smf
# Load data
dat = sm.datasets.get_rdataset("Guerry", "HistData").data
# Fit regression model (using the natural log of one of the regressors)
results = smf.ols('Lottery ~ Literacy + np.log(Pop1831)', \
data=dat).fit()
"""
Explanation: Regression
End of explanation
"""
# Inspect the results
print results.summary()
reg = smf.ols('reply ~ click + followed_num', \
data=data).fit()
reg.summary()
reg1 = smf.ols('np.log(reply+1) ~ np.log(click+1) \
+np.log(followed_num+1)+month', data=data).fit()
print reg1.summary()
fig = plt.figure(figsize=(12,8))
fig = sm.graphics.plot_partregress_grid(reg1, fig = fig)
plt.show()
import statsmodels.api as sm
from statsmodels.formula.api import ols
moore = sm.datasets.get_rdataset("Moore", "car",
cache=True) # load data
data = moore.data
data = data.rename(columns={"partner.status" :
"partner_status"}) # make name pythonic
data[:5]
"""
Explanation: Some students using Windows may not be able to run the code above:
Open a terminal in Spyder
Enter: pip install -U patsy
https://groups.google.com/forum/#!topic/pystatsmodels/KcSzNqDxv-Q
End of explanation
"""
|
henchc/Data-on-the-Mind-2017-scraping-apis | 02-Scraping/solutions/01-BS_solutions.ipynb | mit | import requests # to make GET request
from bs4 import BeautifulSoup # to parse the HTML response
import time # to pause between calls
import csv # to write data to csv
import pandas # to see CSV
"""
Explanation: Webscraping with Beautiful Soup
In this lesson we'll learn about various techniques to scrape data from websites. This lesson will include:
Discussion of complying with Terms of Use
Using Python's BeautifulSoup library
Collecting data from one page
Following collected links
Exporting data to CSV
0. Terms of Use
We'll be scraping information on the state senators of Illinois, as well as the list of bills from the Illinois General Assembly. Your first step before scraping should always be to read the Terms of Use or Terms of Agreement for a website. Many websites will explicitly prohibit scraping in any form. Moreover, if you're affiliated with an institution, you may be breaching existing contracts by engaging in scraping. UC Berkeley's Library recommends following this workflow:
While our source's Terms of Use do not explicitly prohibit scraping (nor do their robots.txt), it is advisable to still contact the web administrator of the website. We will not be placing too much stress on their servers today, so please keep this in mind while following along and executing the code. You should always attempt to contact the web administrator of the site you plan to scrape. Oftentimes there is an easier way to get the data that you want.
Let's go ahead and import the Python libraries we'll need:
End of explanation
"""
# make a GET request
response = requests.get('http://www.ilga.gov/senate/default.asp')
# read the content of the server’s response as a string
page_source = response.text
print(page_source[:1000])
"""
Explanation: 1. Using BeautifulSoup
1.1 Make a GET request and parse the HTML response
We use the requests library just as we did with APIs, but this time we won't get JSON or XML back, but we'll get an HTML response.
End of explanation
"""
# parse the response into an HTML tree soup object
soup = BeautifulSoup(page_source, 'html5lib')
# take a look
print(soup.prettify()[:1000])
"""
Explanation: 1.2 soup it
Now we use the BeautifulSoup function to make an object of the response, which allows us to parse the HTML tree. This returns an object (called a soup object) with all of the HTML in the original document.
End of explanation
"""
soup.find_all("a")
"""
Explanation: 1.3 Find Elements
BeautifulSoup has a number of functions to find things on a page. Like other scraping tools, BeautifulSoup lets you find elements by their:
HTML tags
HTML Attributes
CSS Selectors
Let's search first for HTML tags.
The function find_all searches the soup tree to find all the elements with a particular HTML tag, and returns all of those elements.
What does the example below do?
End of explanation
"""
soup("a")
"""
Explanation: NB: Because find_all() is the most popular method in the BeautifulSoup search library, you can use a shortcut for it. If you treat the BeautifulSoup object as though it were a function, then it’s the same as calling find_all() on that object.
End of explanation
"""
# get only the 'a' tags in 'sidemenu' class
soup("a", class_="sidemenu")
"""
Explanation: That's a lot! Many elements on a page will have the same HTML tag. For instance, if you search for everything with the a tag, you're likely to get a lot of stuff, much of which you don't want. What if we wanted to search for HTML tags ONLY with certain attributes, like particular CSS classes?
We can do this by adding an additional argument to the find_all. In the example below, we are finding all the a tags, and then filtering those with class = "sidemenu".
End of explanation
"""
# get elements with "a.sidemenu" CSS Selector.
soup.select("a.sidemenu")
"""
Explanation: Oftentimes a more efficient way to search and find things on a website is by CSS selector. For this we have to use a different method, select(). Just pass a string into the .select() to get all elements with that string as a valid CSS selector.
In the example above, we can use "a.sidemenu" as a CSS selector, which returns all a tags with class sidemenu.
End of explanation
"""
# your code here
soup.select("a.mainmenu")
"""
Explanation: CSS is one way to organize how we style a website. It allows us to categorize and label certain HTML elements, and to use these categories and labels to apply specific styling. CSS selectors are what we use to identify these elements and then decide what style to apply. We won't have time today to go into detail about HTML and CSS, but it's worth talking about the three most important CSS selectors:
element selector: simply including the element type, such as a above, will select all elements on the page of that element type. Try using your development tools (Chrome, Firefox, or Safari) to change all elements of the type a to a background color of red.
a {
background-color: red
}
class selector: if you put a period (.) before the name of a class, all elements belonging to that class will be selected. Try using your development tools to change all elements of the class detail to a background color of red.
.detail {
background-color: red
}
ID selector: if you put a hashtag (#) before the name of an id, all elements with that id will be selected. Try using the development tools to change all elements with the id Senate to a background color of red.
```
Senate {
background-color: red
}
```
The above three examples will take all elements with the given property, but oftentimes you only want certain elements within the hierarchy. We can do that by simply placing elements side-by-side separated by a space.
Challenge 1
Using your developer tools, change the background-color of all a elements in only the "Current Senate Members" table.
tr tr tr a {
background-color: red
}
Challenge 2
Find all the <a> elements in class mainmenu
End of explanation
"""
# this is a list
soup.select("a.sidemenu")
# we first want to get an individual tag object
first_link = soup.select("a.sidemenu")[0]
# check out its class
print(type(first_link))
"""
Explanation: 1.4 Get Attributes and Text of Elements
Once we identify elements, we want to access information in that element. Oftentimes this means two things:
Text
Attributes
Getting the text inside an element is easy. All we have to do is use the text member of a tag object:
End of explanation
"""
print(first_link.text)
"""
Explanation: It's a tag! Which means it has a text member:
End of explanation
"""
print(first_link.text.strip())
"""
Explanation: You'll see there is some extra spacing here, we can use the strip method to remove that:
End of explanation
"""
print(first_link['href'])
"""
Explanation: Sometimes we want the value of certain attributes. This is particularly relevant for a tags, or links, where the href attribute tells us where the link goes.
You can access a tag’s attributes by treating the tag like a dictionary:
End of explanation
"""
# your code here
rel_paths = [link['href'] for link in soup.select("a.mainmenu")]
print(rel_paths)
"""
Explanation: Nice, but that doesn't look like a full URL! Don't worry, we'll get to this soon.
Challenge 3
Find all the href attributes (url) from the mainmenu by writing a list comprehension and assign to it rel_paths.
End of explanation
"""
# make a GET request
response = requests.get('http://www.ilga.gov/senate/default.asp?GA=98')
# read the content of the server’s response
page_source = response.text
# soup it
soup = BeautifulSoup(page_source, "html5lib")
"""
Explanation: 2. Collecting information
Believe it or not, that's all you need to scrape a website. Let's apply these skills to scrape the 98th general assembly.
Our goal is to scrape information on each senator, including their:
* name
* district
* party
2.1 First, make the GET request and soup it
End of explanation
"""
# get all tr elements
rows = soup.find_all("tr")
print(len(rows))
"""
Explanation: 2.2 Find the right elements and text
Now let's try to get a list of rows in that table. Remember that rows are identified by the tr tag.
End of explanation
"""
# returns every ‘tr tr tr’ css selector in the page
rows = soup.select('tr tr tr')
print(rows[2].prettify())
"""
Explanation: But remember, find_all gets all the elements with the tr tag. We can use smart CSS selectors to get only the rows we want.
End of explanation
"""
# select only those 'td' tags with class 'detail'
row = rows[2]
detail_cells = row.select('td.detail')
detail_cells
"""
Explanation: We can use the select method on anything. Let's say we want to find everything with the CSS selector td.detail in an item of the list we created above.
End of explanation
"""
# Keep only the text in each of those cells
row_data = [cell.text for cell in detail_cells]
print(row_data)
"""
Explanation: Most of the time, we're interested in the actual text of a website, not its tags. Remember, to get the text of an HTML element, use the text member.
End of explanation
"""
# check it out
print(row_data[0]) # name
print(row_data[3]) # district
print(row_data[4]) # party
"""
Explanation: Now we can combine the BeautifulSoup tools with our basic python skills to scrape an entire web page.
End of explanation
"""
# make a GET request
response = requests.get('http://www.ilga.gov/senate/default.asp?GA=98')
# read the content of the server’s response
page_source = response.text
# soup it
soup = BeautifulSoup(page_source, "html5lib")
# create empty list to store our data
members = []
# returns every ‘tr tr tr’ css selector in the page
rows = soup.select('tr tr tr')
# loop through all rows
for row in rows:
# select only those 'td' tags with class 'detail'
detail_cells = row.select('td.detail')
# get rid of junk rows
    if len(detail_cells) != 5:
continue
# keep only the text in each of those cells
row_data = [cell.text for cell in detail_cells]
# collect information
name = row_data[0]
district = int(row_data[3])
party = row_data[4]
# store in a tuple
tup = (name, district, party)
# append to list
members.append(tup)
print(len(members))
print()
print(members)
"""
Explanation: 2.3 Loop it all together
Challenge 4
Let's use a for loop to get 'em all! We'll start at the beginning with the request:
End of explanation
"""
# your code here
# make a GET request
response = requests.get('http://www.ilga.gov/senate/default.asp?GA=98')
# read the content of the server’s response
page_source = response.text
# soup it
soup = BeautifulSoup(page_source, "html5lib")
# Create empty list to store our data
members = []
# returns every ‘tr tr tr’ css selector in the page
rows = soup.select('tr tr tr')
# loop through all rows
for row in rows:
# select only those 'td' tags with class 'detail'
detail_cells = row.select('td.detail')
# get rid of junk rows
    if len(detail_cells) != 5:
continue
# keep only the text in each of those cells
row_data = [cell.text for cell in detail_cells]
# collect information
name, district, party = row_data[0], int(row_data[3]), row_data[4]
# add href
href = row.select('a')[1]['href']
# add full path
full_path = "http://www.ilga.gov/senate/" + href + "&Primary=True"
# store in a tuple
tup = (name, district, party, full_path)
# append to list
members.append(tup)
members[:5]
"""
Explanation: Challenge 5: Get HREF element pointing to members' bills
The code above retrieves information on:
the senator's name
their district number
and their party
We now want to retrieve the URL for each senator's list of bills. The format for the list of bills for a given senator is:
http://www.ilga.gov/senate/SenatorBills.asp + ? + GA=98 + &MemberID=memberID + &Primary=True
to get something like:
http://www.ilga.gov/senate/SenatorBills.asp?MemberID=1911&GA=98&Primary=True
You should be able to see that, unfortunately, memberID is not currently something pulled out in our scraping code.
Your initial task is to modify the code above so that we also retrieve the full URL which points to the corresponding page of primary-sponsored bills, for each member, and return it along with their name, district, and party.
Tips:
To do this, you will want to get the appropriate anchor element (<a>) in each legislator's row of the table. You can again use the .select() method on the row object in the loop to do this — similar to the command that finds all of the td.detail cells in the row. Remember that we only want the link to the legislator's bills, not the committees or the legislator's profile page.
The anchor elements' HTML will look like <a href="/senate/Senator.asp/...">Bills</a>. The string in the href attribute contains the relative link we are after. You can access an attribute of a BeautifulSoup Tag object the same way you access a Python dictionary: anchor['attributeName']. (See the <a href="http://www.crummy.com/software/BeautifulSoup/bs4/doc/#tag">documentation</a> for more details). There are a lot of different ways to use BeautifulSoup to get things done; whatever you need to do to pull that href out is fine.
Since we will only get a relative link, you'll have to do some concatenating to get the full URLs.
Use the code you wrote in Challenge 4 and simply add the full path to the tuple
End of explanation
"""
# your code here
def get_bills(url):
# make the GET request
response = requests.get(url)
page_source = response.text
soup = BeautifulSoup(page_source, "html5lib")
# get the table rows
rows = soup.select('tr tr tr')
# make empty list to collect the info
bills = []
for row in rows:
# get columns
detail_cells = row.select('td.billlist')
        if len(detail_cells) != 5:
continue
# get text in each column
row_data = [cell.text for cell in row]
# append data in columns 2-5
bills.append(tuple(row_data[2:6]))
return(bills)
# uncomment to test your code:
test_url = members[0][3]
print(test_url)
get_bills(test_url)[0:5]
"""
Explanation: Cool! Now you can probably guess how to loop it all together by iterating through the links we just extracted.
3. Following links to scrape bills
3.1 Writing a scraper function
Now we want to scrape the webpages corresponding to bills sponsored by each senator.
Challenge 6
Write a function called get_bills(url) to parse a given bill's URL. This will involve:
requesting the URL using the <a href="http://docs.python-requests.org/en/latest/">requests</a> library
using the features of the BeautifulSoup library to find all of the <td> elements with the class billlist
return a list of tuples, each with:
description (2nd column)
chamber (S or H) (3rd column)
the last action (4th column)
the last action date (5th column)
I've started the function for you. Fill in the rest.
End of explanation
"""
bills_info = []
for member in members[:3]: # only go through 3 members
print(member[0])
member_bills = get_bills(member[3])
for b in member_bills:
bill = list(member) + list(b)
bills_info.append(bill)
time.sleep(5)
bills_info
"""
Explanation: 3.2 Get all the bills
Explanation: 3.2 Get all the bills
Finally, we create a dictionary bills_dict which maps a district number (the key) onto a list_of_bills (the value) emanating from that district. You can do this by looping over all of the senate members in members_dict and calling get_bills() for each of their associated bill URLs.
NOTE: Please call the function time.sleep(5) for each iteration of the loop, so that we don't destroy the state's web site.
End of explanation
"""
# manually decide on header names
header = ['Senator', 'District', 'Party', 'Bills Link', 'Description', 'Chamber', 'Last Action', 'Last Action Date']
with open('all-bills.csv', 'w') as output_file:
csv_writer = csv.writer(output_file)
csv_writer.writerow(header)
csv_writer.writerows(bills_info)
pandas.read_csv('all-bills.csv')
"""
Explanation: 4. Export to CSV
We can write this to a CSV too:
End of explanation
"""
|
f-guitart/data_mining | exercises/exercise1.ipynb | gpl-3.0 | df1 = pd.read_csv("../data/iqsize.csv")
# we can apply head method, it will return the n first rows
# n = 5 as a default value
df1.head(10)
"""
Explanation: Load iqsize.csv using pd.read_csv
End of explanation
"""
print("Columns: {}".format(df1.columns))
print("Rows: {}".format(df1.index))
"""
Explanation: Print both column and row indexes
End of explanation
"""
df1.shape
"""
Explanation: One common property of the dataframe to check is the size (in terms of number of rows and columns)
End of explanation
"""
for column in df1.columns:
print("column name: {} - dtype: {}".format(column, df1[column].dtype))
"""
Explanation: Print each column's dtype and think about whether it has the proper variable type
Series are iterable, so we can iterate through df.columns and then check the dtype of each series
End of explanation
"""
df1.dtypes
"""
Explanation: This is a correct way of checking variable names, note:
* If we want to analyse the results, we have to create new structures to store the values
* We are iterating using a for loop and wirting custom code
We can access directly DataFrame dtypes this way
End of explanation
"""
type(df1.dtypes)
"""
Explanation: The result is a series, so we can still do transformations on the result
End of explanation
"""
df1[df1.dtypes[df1.dtypes == np.int64].index].head()
"""
Explanation: We can use df1.dtypes to select only integer columns
End of explanation
"""
result1 = df1.dtypes == np.int64
display(result1)
type(result1)
"""
Explanation: What happened here?
From inner to outer operations:
0. note that df1.dtypes is a Series object
1. we check which dtypes are int64 just comparing all values to np.int64. The equality operation is performed to all elementos of df1.dtypes. This operation returns a Boolean Series object
End of explanation
"""
df1.dtypes[result1]
"""
Explanation: we slice df1.dtypes Series using result1 series. Note that the result is a Series containing the values of the dtypes of the original dataframe
End of explanation
"""
result2 = df1.dtypes[result1].index
display(result2)
"""
Explanation: If we want to select the columns instead of the values, we need the indexes
End of explanation
"""
df1.columns[result1]
"""
Explanation: We can get the same result using result1 to slice df1.columns
End of explanation
"""
df1[result2].head()
"""
Explanation: Once we have the column names we just need to slice the original dataframe using result2
End of explanation
"""
df1.dtypes
"""
Explanation: Think if they have the proper variable type
End of explanation
"""
df1.iloc[:,1].head()
"""
Explanation: Use DataFrame.loc and DataFrame.iloc to (in each case print the result and check what data type you obtain as reponse):
Get the second column
Using iloc:
End of explanation
"""
df1.columns
df1.columns[1]
df1.loc[:,'piq'].head()
df1.loc[:,df1.columns[1]].head()
print(type(df1.loc[:,df1.columns[1]].head()))
"""
Explanation: We can also use loc
End of explanation
"""
df1.iloc[3,:]
"""
Explanation: B. Get the third row
Using iloc
End of explanation
"""
df1.loc[3,:]
print(type(df1.loc[3,:]))
"""
Explanation: Using loc
End of explanation
"""
df1.columns
df1.columns[:-1]
df1.loc[:,df1.columns[:-1]].head()
print(type(df1.loc[:,df1.columns[:-1]].head()))
"""
Explanation: C. Get all but last column
End of explanation
"""
df1.iloc[4:11,:]
print(type(df1.iloc[4:11,:]))
"""
Explanation: D. Get rows from 4 to 10
End of explanation
"""
df1.iloc[2:4,3:5]
print(type(df1.iloc[2:4,3:5]))
"""
Explanation: E. Get the values in rows 2 and 3 of columns 3 and 4
Using iloc
End of explanation
"""
df1.columns
df1.columns[3:5]
df1.loc[2:3,["height","weight"]]
df1.loc[2:3,df1.columns[3:5]]
"""
Explanation: Using loc
End of explanation
"""
df1.loc[df1.loc[:,"weight"] > 136,"weight"].head()
"""
Explanation: F. Get all iq values greater than 100
End of explanation
"""
(df1.loc[df1.loc[:,"weight"] > 136,"weight"] / 100).head()
"""
Explanation: G. Divide previous results by 100
End of explanation
"""
df1.describe()
"""
Explanation: Extra: methods for quantitative variables
DataFrame.describe() returns a set of statistical measures of the quantitative variables of the dataset.
End of explanation
"""
df1["brain"].max()
df1["brain"].mean()
df1["brain"].std()
"""
Explanation: We can as well apply the methods for de computation of each one of the statistics (e.g. max, mean, std)
End of explanation
"""
s = df1["sex"]
s.value_counts()
"""
Explanation: Extra: methods for qualitative variables
End of explanation
"""
|
irsisyphus/machine-learning | 3 Kernel, Bayes and Models.ipynb | apache-2.0 | %load_ext watermark
%watermark -a '' -u -d -v -p numpy,pandas,matplotlib,scipy,sklearn
%matplotlib inline
# Added version check for recent scikit-learn 0.18 checks
from distutils.version import LooseVersion as Version
from sklearn import __version__ as sklearn_version
"""
Explanation: Assignment 3 - basic classifiers
Math practice and coding application for main classifiers introduced in Chapter 3 of the Python machine learning book.
Weighting
Note that this assignment is more difficult than the previous ones, and thus has a higher weighting 3 and longer duration (3 weeks). Each one of the previous two assignments has a weighting 1.
Specifically, the first 3 assignments contribute to your continuous assessment as follows:
Assignment weights: $w_1 = 1, w_2 = 1, w_3 = 3$
Assignment grades: $g_1, g_2, g_3$
Weighted average: $\frac{1}{\sum_i w_i} \times \sum_i \left(w_i \times g_i \right)$
Future assignments will be added analogously.
RBF kernel (20 points)
Show that a Gaussian RBF kernel can be expressed as a dot product:
$$
K(\mathbf{x}, \mathbf{y})
= e^\frac{-|\mathbf{x} - \mathbf{y}|^2}{2}
= \phi(\mathbf{x})^T \phi(\mathbf{y})
$$
by spelling out the mapping function $\phi$.
For simplicity
* you can assume both $\mathbf{x}$ and $\mathbf{y}$ are 2D vectors
$
x =
\begin{pmatrix}
x_1 \\
x_2
\end{pmatrix}
, \;
y =
\begin{pmatrix}
y_1 \\
y_2
\end{pmatrix}
$
* we use a scalar unit variance here
even though the proof can be extended for vectors $\mathbf{x}$ $\mathbf{y}$ and general covariance matrices.
Hint: use Taylor series expansion of the exponential function
Answer
We denote $e^x$ as exp($x$). Since $
\mathbf x =
\begin{pmatrix}
x_1 \\
x_2
\end{pmatrix}
, \;
\mathbf y =
\begin{pmatrix}
y_1 \\
y_2
\end{pmatrix}
$, we have
$$
\begin{align}
K(\mathbf{x}, \mathbf{y}) = \text{exp}(\frac{-||\mathbf{x} - \mathbf{y}||^2}{2}) & = \text{exp}(\frac{-||(x_1-y_1, x_2-y_2)||^2}{2} )\\
& = \text{exp}(\frac{-(x_1-y_1)^2-(x_2-y_2)^2}{2})\\
& = \text{exp}(\frac{-{x_1}^2-{x_2}^2-{y_1}^2-{y_2}^2+2 x_1 y_1+2 x_2 y_2}{2}) \\
& = \text{exp}(\frac{-||\mathbf{x}||^2}{2}) \text{ exp}(\frac{-||\mathbf{y}||^2}{2}) \text{ exp}(x_1 y_1 + x_2 y_2)
\end{align}
$$
<br>By Taylor series of $f(x)$ on $a$, $e^x$ at $a=0$ can be expressed as $\sum_{n=0}^{\infty} \frac{x^n}{n!} = 1 + x + \frac{x^2}{2!} +$ ... for $x \in \mathbb{R}^1$. Therefore,<br><br>
$$K(\mathbf{x}, \mathbf{y}) = \text{exp}(\frac{-||\mathbf{x}||^2}{2}) \text{ exp}(\frac{-||\mathbf{y}||^2}{2}) \sum_{n=0}^{\infty} \frac{(x_1 y_1 + x_2 y_2)^n}{n!}$$
<br>By binomial expansion, we have
$$ (x_1 y_1 + x_2 y_2)^n = \sum_{i = 0}^{n} \binom{n}{i} (x_1 y_1)^{n-i} (x_2 y_2)^i = \sum_{i = 0}^{n} \sqrt{\binom{n}{i}} (x_1^{n-i} x_2^i) \sqrt{\binom{n}{i}} (y_1^{n-i} y_2^i)$$
<br>We let $\xi_{n}(\mathbf{x}) = \xi_{n}(x_1, x_2) = \left[\sqrt{\binom{n}{i}} (x_1^{n-i} x_2^i) \right] = \left[\sqrt{\binom{n}{0}} (x_1^{n} x_2^0), \sqrt{\binom{n}{1}} (x_1^{n-1} x_2^1), ..., \sqrt{\binom{n}{n}} (x_1^{0} x_2^n)\right] \in \mathbb{R}^{\text{n}}$. <br>
<br>Hence we have <br>
$$ \left\{
\begin{aligned}
\phi(\mathbf{x}) & = \text{exp}(\frac{-||\mathbf{x}||^2}{2}) \left[1, \frac{\xi_{1}(\mathbf{x})}{\sqrt{1!}}, \frac{\xi_{2}(\mathbf{x})}{\sqrt{2!}} , \frac{\xi_{3}(\mathbf{x})}{\sqrt{3!}} , ... \right]^T \\
\phi(\mathbf{y}) & = \text{exp}(\frac{-||\mathbf{y}||^2}{2}) \left[1, \frac{\xi_{1}(\mathbf{y})}{\sqrt{1!}}, \frac{\xi_{2}(\mathbf{y})}{\sqrt{2!}} , \frac{\xi_{3}(\mathbf{y})}{\sqrt{3!}} , ... \right]^T
\end{aligned}
\right.
$$
<br>The mapping function is therefore $\phi(\mathbf{x}) = \text{exp}(\frac{-||\mathbf{x}||^2}{2}) \left[1, \frac{\xi_{1}(\mathbf{x})}{\sqrt{1!}}, \frac{\xi_{2}(\mathbf{x})}{\sqrt{2!}} , \frac{\xi_{3}(\mathbf{x})}{\sqrt{3!}} , ... \right]^T$, where $\xi_{n}(\mathbf{x}) = \left[\sqrt{\binom{n}{0}} (x_1^{n} x_2^0), \sqrt{\binom{n}{1}} (x_1^{n-1} x_2^1), ..., \sqrt{\binom{n}{n}} (x_1^{0} x_2^n)\right]$.
Kernel SVM complexity (10 points)
How would the complexity (in terms of number of parameters) of a trained kernel SVM change with the amount of training data, and why?
Note that the answer may depend on the specific kernel used as well as the amount of training data.
Consider specifically the following types of kernels $K(\mathbf{x}, \mathbf{y})$.
* linear:
$$
K\left(\mathbf{x}, \mathbf{y}\right) = \mathbf{x}^T \mathbf{y}
$$
* polynomial with degree $q$:
$$
K\left(\mathbf{x}, \mathbf{y}\right) =
(\mathbf{x}^T\mathbf{y} + 1)^q
$$
* RBF with distance function $D$:
$$
K\left(\mathbf{x}, \mathbf{y} \right) = e^{-\frac{D\left(\mathbf{x}, \mathbf{y} \right)}{2s^2}}
$$
Answer
For all examples, we assume $\mathbf{x}, \mathbf{y} \in \mathbb{R}^\text{d}$.
Linear:
For the linear kernel, the mapping function is $\phi(\mathbf{x}) = \mathbf{x}$, which maps $\mathbb{R}^\text{d}$ to $\mathbb{R}^\text{d}$, so the size of the data is unchanged.<br>
There are no explicit parameters, so the time cost increases linearly with the dimension of the data; if the amount of data increases $n$ times, the time cost simply increases by $O(n)$. Neither a change in dimension nor in the amount of data affects any parameters.<br>
Polynomial with degree $q$:
For simplicity we write $1 = x_{d+1} y_{d+1}$. Then
$$K\left(\mathbf{x}, \mathbf{y}\right) =(\mathbf{x}^T\mathbf{y} + 1)^q = (\sum_{i=1}^{d+1} x_i y_i)^q = \sum_{k_1 + k_2 + ... + k_{d+1} = q} \binom{q}{k_1, k_2, ..., k_{d+1}} \prod_{t=1}^{d+1} (x_t y_t)^{k_t} = \sum_{\sum_{i=1}^{d+1} k_i = q} \frac{q!}{\prod_{i=1}^{d+1} k_i!} \prod_{t=1}^{d+1} (x_t y_t)^{k_t}$$
by Multinomial theorem. Therefore the mapping function is
$$\phi(\mathbf{x}) = \left[\sqrt{\frac{q!}{\prod_{i=1}^{d+1} k_i!}} \prod_{t=1}^{d+1} (x_t)^{k_t}\right]_{\sum_{i=1}^{d+1} k_i = q}^T,$$
which maps $\mathbb{R}^\text{d}$ to $\mathbb{R}^\binom{q+(d+1)-1}{(d+1)-1} = \mathbb{R}^\binom{q+d}{d} = \mathbb{R}^\frac{(q+d)!}{q! d!}$, computed using the stars and bars method. <br>
If $q=1$, only one useless dimension is added, where $x_{d+1} = 1$. In this case the actual dimension remains. <br>
If $q \geq 2$, then the dimension increases from $d$ to $\binom{q+d}{d}$, where the actual dimension is $\binom{q+d}{d} - 1$ since we always have a $x_{d+1}^q = 1$ term.<br><br>
Now we consider the parameters.
* For each entry in $K\left(\mathbf{x}, \mathbf{y}\right)$, we have a parameter $\frac{q!}{\prod_{i=1}^{d+1} k_i!} = \binom{q}{k_1, k_2, ..., k_{d+1}}$, which takes $O(q \prod_{t=1}^{d+1} k_t)$ to compute in brute force. Considering the dimension analysis we discuss above, the greater the dimension or the greater $q$ is, the more parameters and greater time complexity we will have in the kernal function.<br>
* However, since $q$ and $k_i$ are identical for any set of input data, increasing amount of data will not change number of parameter to be calculated (because they only need to be calculated once), although multiplying them to each term of $x$ and $y$ takes constant time.<br>
* If we do $\mathbf{x}^T \mathbf{y} + 1$ first and then do the power function, then the parameter analysis is the same as the linear function, except that we need an extra power $q$ after the $\mathbf{x}^T \mathbf{y} + 1$.<br>
RBF with distance function $D$:
Assume $D(\mathbf{x}, \mathbf{y}) = \omega(\mathbf{x}) \omega(\mathbf{y})$. For $K(\mathbf{x}, \mathbf{y} ) = e^{-\frac{D\left(\mathbf{x}, \mathbf{y} \right)}{2s^2}} = e^{-\frac{\omega(\mathbf{x}) \omega(\mathbf{y})}{2s^2}} = e^{-\frac{1}{2s^2} \omega(\mathbf{x}) \omega(\mathbf{y})}$, we have the mapping function as
$$
\phi(\mathbf{x}) = e^{-\frac{1}{4s^2}} \left[1, \frac{\omega(\mathbf{x})}{\sqrt {1!}}, \frac{\omega(\mathbf{x})^2}{\sqrt {2!}}, \frac{\omega(\mathbf{x})^3}{\sqrt {3!}}, ... \right]^T,
$$
which maps $\mathbb{R}^\text{d}$ to $\mathbb{R}^\infty$. That is, RBF essentially projects the original vector to an infinite dimensional space.<br><br>
Now we consider the parameters.
* We first clarify that although RBF maps $\mathbb{R}^\text{d}$ to $\mathbb{R}^\infty$, the dimension actually used is determined by the explicit function $K(\mathbf{x}, \mathbf{y})$, because we don't have to separate the mapping function. Instead, we can just compute the kernal $K$ directly.<br>
* As calculating exp($\mathbf{x}$) is simply mapping to an exponential function, the main cost in terms of dimension of data is at the distance function $D(\mathbf{x}, \mathbf{y})$, which varies with different distance functions we choose. For example, if we choose simple metrics such as Taxicab distance and Euclidean distance, the cost is relatively small. However, if we choose complex metrics for some reason, then the time cost could be huge.<br>
* Also, if all input data share the same set of parameters, then the parameters only need to be computed once, and applyed to each set of input data with constant time. However, if parameters change with different sets of input data in a specific kernal, then the number of parameters as well as time complexity also increase linearly with the increase of amount of data.
Gaussian density Bayes (30 points)
$$
p\left(\Theta | \mathbf{X}\right)
=
\frac{p\left(\mathbf{X} | \Theta\right) p\left(\Theta\right)}{p\left(\mathbf{X}\right)}
$$
Assume both the likelihood and prior have Gaussian distributions:
$$
\begin{align}
p(\mathbf{X} | \Theta)
&=
\frac{1}{(2\pi)^{N/2}\sigma^N} \exp\left(-\frac{\sum_{t=1}^N (\mathbf{x}^{(t)} - \Theta)^2}{2\sigma^2}\right)
\\
p(\Theta)
&=
\frac{1}{\sqrt{2\pi}\sigma_0} \exp\left( -\frac{(\Theta - \mu_0)^2}{2\sigma_0^2} \right)
\end{align}
$$
Derive $\Theta$ from the dataset $\mathbf{X}$ via the following methods:
ML (maximum likelihood) estimation
$$
\Theta_{ML} = argmax_{\Theta} p(\mathbf{X} | \Theta)
$$
MAP estimation
$$
\begin{align}
\Theta_{MAP}
&=
argmax_{\Theta} p(\Theta | \mathbf{X})
\\
&=
argmax_{\Theta} p(\mathbf{X} | \Theta) p(\Theta)
\end{align}
$$
Bayes estimation
$$
\begin{align}
\Theta_{Bayes}
&=
E(\Theta | \mathbf{X})
\\
&=
\int \Theta p(\Theta | \mathbf{X}) d\Theta
\end{align}
$$
Answer
1. ML (maximum likelihood) estimation
To maximize $p(\mathbf{X} | \Theta)$, we set $\nabla_\Theta p(\mathbf{X} | \Theta) = \nabla_\Theta \left(\frac{1}{(2\pi)^{N/2}\sigma^N} \exp\left(-\frac{\sum_{t=1}^N (\mathbf{x}^{(t)} - \Theta)^2}{2\sigma^2}\right)\right) = 0$. <br>
By Chain rule we get<br>
$$
\begin{align}
\nabla_\Theta p(\mathbf{X} | \Theta) & = \frac{1}{(2\pi)^{N/2}\sigma^N} \frac{\partial \exp\left(-\frac{\sum_{t=1}^N (\mathbf{x}^{(t)} - \Theta)^2}{2\sigma^2}\right)}{\partial \left(-\frac{\sum_{t=1}^N (\mathbf{x}^{(t)} - \Theta)^2}{2\sigma^2}\right)} \frac{\partial \left(-\frac{\sum_{t=1}^N (\mathbf{x}^{(t)} - \Theta)^2}{2\sigma^2}\right)}{\partial \Theta}\
0 & = \frac{1}{(2\pi)^{N/2}\sigma^N} \exp\left(-\frac{\sum_{t=1}^N (\mathbf{x}^{(t)} - \Theta)^2}{2\sigma^2}\right) \left( - \frac{\sum_{t=1}^N -2(\mathbf{x}^{(t)} - \Theta)}{2\sigma^2} \right) \
\end{align}
$$
Note that $p(\mathbf{X} | \Theta) = \frac{1}{(2\pi)^{N/2}\sigma^N} \exp\left(-\frac{\sum_{t=1}^N (\mathbf{x}^{(t)} - \Theta)^2}{2\sigma^2}\right)$ is non-zero, because $e^y$ is always positive for $y \in \mathbb{R}$, and the constant $\frac{1}{(2\pi)^{N/2}\sigma^N}$ is positive. <br>
Then we have<br>
$$
\begin{align}
0 & = \frac{\sum_{t=1}^N 2(\mathbf{x}^{(t)} - \Theta)}{2\sigma^2} = \frac{\sum_{t=1}^N \mathbf{x}^{(t)} - \Theta}{\sigma^2} \
0 & = \frac{(\sum_{t=1}^N \mathbf{x}^{(t)}) - N\Theta}{\sigma^2} \
N\Theta & = \sum_{t=1}^N \mathbf{x}^{(t)} \
\Theta & = \frac{\sum_{t=1}^N \mathbf{x}^{(t)}}{N}
\end{align}
$$
Hence $$\Theta_{ML} = argmax_{\Theta} p(\mathbf{X} | \Theta) = \frac{\sum_{t=1}^N \mathbf{x}^{(t)}}{N}.$$
2. MAP estimation
To maximize $p(\mathbf{X} | \Theta) p(\Theta)$, we set $\nabla_\Theta (p(\mathbf{X} | \Theta) p(\Theta)) = p(\Theta)\nabla_\Theta p(\mathbf{X} | \Theta) + p(\mathbf{X} | \Theta)\nabla_\Theta p(\Theta) = 0$. <br><br>
We get $p(\Theta)\nabla_\Theta p(\mathbf{X} | \Theta) = - p(\mathbf{X} | \Theta)\nabla_\Theta p(\Theta) $ and therefore <br><br>
$$
\begin{align}
\frac{1}{\sqrt{2\pi}\sigma_0} \exp\left( -\frac{(\Theta - \mu_0)^2}{2\sigma_0^2} \right) \nabla_\Theta \left(\frac{1}{(2\pi)^{N/2}\sigma^N} \exp\left(-\frac{\sum_{t=1}^N (\mathbf{x}^{(t)} - \Theta)^2}{2\sigma^2}\right)\right) & \
= - \frac{1}{(2\pi)^{N/2}\sigma^N} \exp\left(-\frac{\sum_{t=1}^N (\mathbf{x}^{(t)} - \Theta)^2}{2\sigma^2}\right) \nabla_\Theta & \left(\frac{1}{\sqrt{2\pi}\sigma_0} \exp\left( -\frac{(\Theta - \mu_0)^2}{2\sigma_0^2} \right)\right)
\end{align}
$$
<br><br>By Removing the constants we get<br><br>
$$
\begin{align}
\exp\left( -\frac{(\Theta - \mu_0)^2}{2\sigma_0^2} \right) \nabla_\Theta \left(\exp\left(-\frac{\sum_{t=1}^N (\mathbf{x}^{(t)} - \Theta)^2}{2\sigma^2}\right)\right) & = - \exp\left(-\frac{\sum_{t=1}^N (\mathbf{x}^{(t)} - \Theta)^2}{2\sigma^2}\right) \nabla_\Theta \left(\exp\left( -\frac{(\Theta - \mu_0)^2}{2\sigma_0^2} \right)\right) \
\end{align}
$$
<br>By Chain rule we get<br><br>
$$
\begin{align}
\exp\left( -\frac{(\Theta - \mu_0)^2}{2\sigma_0^2} \right) \frac{\partial \exp\left(-\frac{\sum_{t=1}^N (\mathbf{x}^{(t)} - \Theta)^2}{2\sigma^2}\right)}{\partial \left(-\frac{\sum_{t=1}^N (\mathbf{x}^{(t)} - \Theta)^2}{2\sigma^2}\right)} \frac{\partial \left(-\frac{\sum_{t=1}^N (\mathbf{x}^{(t)} - \Theta)^2}{2\sigma^2}\right)}{\partial \Theta} & = - \exp\left(-\frac{\sum_{t=1}^N (\mathbf{x}^{(t)} - \Theta)^2}{2\sigma^2}\right) \frac{\partial \exp\left( -\frac{(\Theta - \mu_0)^2}{2\sigma_0^2} \right)}{\partial \left(-\frac{(\Theta - \mu_0)^2}{2\sigma_0^2}\right)} \frac{\partial \left(-\frac{(\Theta - \mu_0)^2}{2\sigma_0^2}\right)}{\partial \Theta}\
\exp\left( -\frac{(\Theta - \mu_0)^2}{2\sigma_0^2} \right) \exp\left(-\frac{\sum_{t=1}^N (\mathbf{x}^{(t)} - \Theta)^2}{2\sigma^2}\right) \left( - \frac{\sum_{t=1}^N -2(\mathbf{x}^{(t)} - \Theta)}{2\sigma^2} \right) & = -\exp\left(-\frac{\sum_{t=1}^N (\mathbf{x}^{(t)} - \Theta)^2}{2\sigma^2}\right) \exp\left( -\frac{(\Theta - \mu_0)^2}{2\sigma_0^2} \right) \left(-\frac{2(\Theta - \mu_0)}{2\sigma_0^2}\right)
\end{align}
$$
<br>Since $e^y$ is always positive for $y \in \mathbb{R}$, the first two terms of both sides are non-zero. Dividing them on both sides we have<br>
$$
\begin{align}
- \frac{\sum_{t=1}^N -2(\mathbf{x}^{(t)} - \Theta)}{2\sigma^2} & = \frac{2(\Theta - \mu_0)}{2\sigma_0^2}\
\frac{\sum_{t=1}^N (\mathbf{x}^{(t)} - \Theta)}{\sigma^2} & = \frac{\Theta -\mu_0}{\sigma_0^2}\
\frac{(\sum_{t=1}^N \mathbf{x}^{(t)}) - N\Theta}{\sigma^2} & = \frac{\Theta -\mu_0}{\sigma_0^2}\
(\sum_{t=1}^N \mathbf{x}^{(t)})\sigma_0^2 - N\Theta\sigma_0^2 & = \sigma^2 \Theta - \sigma^2 \mu_0 \
(\sum_{t=1}^N \mathbf{x}^{(t)})\sigma_0^2 + \sigma^2 \mu_0 & = (N\sigma_0^2 + \sigma^2)\Theta \
\end{align}
$$
<br>Hence <br>
$$
\begin{align}
\Theta_{MAP} & = argmax_{\Theta} p(\Theta | \mathbf{X}) \
& = argmax_{\Theta} p(\mathbf{X} | \Theta) p(\Theta) \
& = \frac{(\sum_{t=1}^N \mathbf{x}^{(t)})\sigma_0^2 + \sigma^2 \mu_0}{N\sigma_0^2 + \sigma^2} \
\end{align}
$$<br>
Furthurmore, since $\Theta_{ML} = \frac{\sum_{t=1}^N \mathbf{x}^{(t)}}{N}$, we have<br>
$$\Theta_{MAP} = \frac{(\sum_{t=1}^N \mathbf{x}^{(t)})\sigma_0^2 + \sigma^2 \mu_0}{N\sigma_0^2 + \sigma^2}
= \frac{N\Theta_{ML}\sigma_0^2 + \sigma^2 \mu_0}{N\sigma_0^2 + \sigma^2} = \frac{N/\sigma^2}{N/\sigma^2 + 1/\sigma_0^2} \Theta_{ML} +\frac{1/\sigma_0^2}{N/\sigma^2 + 1/\sigma_0^2} \mu_0$$
3. Bayes estimation
For $\Theta_{Bayes} = E(\Theta | \mathbf{X}) = \int \Theta p(\Theta | \mathbf{X}) d\Theta \$, since $p(\Theta | \mathbf{X}) = \frac{p(\mathbf{X}| \Theta) p(\Theta)}{p(\mathbf{X})}$ and $p(\mathbf{X})$ is a constant for given $\mathbf{X}$, our interest is in $p(\mathbf{X}| \Theta) p(\Theta)$. We denote $\phi(x, \mu, \sigma^2)$ as the normal distribution with input $x$, mean $\mu$ and standard deviation $\sigma$. Then $$p(\mathbf{X}| \Theta) p(\Theta) = \phi(\Theta, \mu_0, \sigma_0^2) \prod_{i=1}^N \phi(\Theta, \mathbf{x}^{(i)}, \sigma^2).$$
Notice that $$\phi(x, \mu_1, \sigma_1^2) \phi(x, \mu_2, \sigma_2^2) = \phi(\mu_1, \mu_2, \sigma_1^2 + \sigma_2^2) \phi(x, \mu_i, \sigma_i^2)$$ where $$\mu_i = \frac{1 / \sigma_1^2}{1/ \sigma_1^2 + 1/ \sigma_2^2}\mu_1 + \frac{1 / \sigma_2^2}{1/ \sigma_1^2 + 1/ \sigma_2^2}\mu_2 \text{ and } \sigma_i^2 = \frac{\sigma_1^2 \sigma_2^2}{\sigma_1^2 + \sigma_2^2}.$$<br>
We will prove the formula later on. Using this formuma, we have $$\phi(\Theta, \mathbf{x}^{(a)}, \sigma^2) \phi(\Theta, \mathbf{x}^{(b)}, \sigma^2) = \phi(\mathbf{x}^{(a)}, \mathbf{x}^{(b)}, 2\sigma^2) \phi(\Theta, \frac{\mathbf{x}^{(a)}+\mathbf{x}^{(b)}}{2}, \frac{\sigma^2}{2}) = C_0 \phi(\Theta, \frac{\mathbf{x}^{(a)}+\mathbf{x}^{(b)}}{2}, \frac{\sigma^2}{2}),$$ where $C_0$ is some constant since all variables of $\phi(\mathbf{x}^{(a)}, \mathbf{x}^{(b)}, 2\sigma^2)$ are set. Following similar steps we get
$$\prod_{i=1}^N \phi(\Theta, \mathbf{x}^{(i)}, \sigma^2) = C_1 \phi(\Theta, \frac{\sum_{t=1}^N \mathbf{x}^{(t)}}{N}, \frac{\sigma^2}{N}),$$ where $C_1$ is some constant.<br>
Hence, $$p(\mathbf{X}| \Theta) p(\Theta) = C_1 \phi(\Theta, \frac{\sum_{t=1}^N \mathbf{x}^{(t)}}{N}, \frac{\sigma^2}{N}) \phi(\Theta, \mu_0, \sigma_0^2) = C_2 \phi(\Theta, \mu_\text{new}, \sigma_\text{new}^2),$$ where $C_2$ is some constant and by the formula, $\mu_\text{new} = \frac{N/\sigma^2}{N/\sigma^2 + 1/\sigma_0^2} \Theta_{ML} +\frac{1/\sigma_0^2}{N/\sigma^2 + 1/\sigma_0^2} \mu_0 $, where $\Theta_{ML} = \frac{\sum_{t=1}^N \mathbf{x}^{(t)}}{N}.$ <br>
Notice that for a given normal distribution, multiplying the probability density function by a constant will not change its mean value. Therefore the expectation of $p(\Theta | \mathbf{X})$ is exactly the expectation of the non-constant normal distribution part. Hence,
$$
E(\Theta | \mathbf{X}) = \mu_\text{new} = \frac{N/\sigma^2}{N/\sigma^2 + 1/\sigma_0^2} \Theta_{ML} +\frac{1/\sigma_0^2}{N/\sigma^2 + 1/\sigma_0^2} \mu_0 = \Theta_{MAP},
$$
where $\Theta_{ML} = \frac{\sum_{t=1}^N \mathbf{x}^{(t)}}{N}.$ <br>
Finally we want to prove the formula $\phi(x, \mu_1, \sigma_1^2) \phi(x, \mu_2, \sigma_2^2) = \phi(\mu_1, \mu_2, \sigma_1^2 + \sigma_2^2) \phi(x, \mu_i, \sigma_i^2)$:<br><br>
$$
\begin{align}
\phi(x, \mu_1, \sigma_1^2) \phi(x, \mu_2, \sigma_2^2) & = \frac{1}{\sqrt{2\pi}\sigma_1} \exp\left( -\frac{(x - \mu_1)^2}{2\sigma_1^2} \right) \frac{1}{\sqrt{2\pi}\sigma_2} \exp\left( -\frac{(x - \mu_2)^2}{2\sigma_2^2} \right) \
& = \frac{1}{2\pi \sigma_1 \sigma_2} \exp\left( -\frac{(x - \mu_1)^2}{2\sigma_1^2} - \frac{(x - \mu_2)^2}{2\sigma_2^2}\right) \
& = \frac{1}{2\pi \sigma_1 \sigma_2} \exp\left( -\frac{(\sigma_1^2+\sigma_2^2) x^2 -2(\mu_1 \sigma_2^2 +\mu_2 \sigma_1^2)x +(\mu_1^2 \sigma_2^2+\mu_2^2 \sigma_1^2)}{2\sigma_1^2 \sigma_2^2} \right)\
& = \frac{1}{2\pi \sigma_1 \sigma_2} \exp\left( -\frac{x^2 -2\frac{\mu_1 \sigma_2^2 +\mu_2 \sigma_1^2}{\sigma_1^2+\sigma_2^2}x + \frac{\mu_1^2 \sigma_2^2+\mu_2^2 \sigma_1^2}{\sigma_1^2+\sigma_2^2}}{2\frac{\sigma_1^2 \sigma_2^2}{\sigma_1^2+\sigma_2^2}} \right) \
& = \frac{1}{2\pi \sigma_1 \sigma_2} \exp\left( -\frac{x^2 -2\frac{\mu_1 \sigma_2^2 +\mu_2 \sigma_1^2}{\sigma_1^2+\sigma_2^2}x + \frac{\mu_1^2 \sigma_2^4+\mu_2^2 \sigma_1^4 + 2\mu_1 \sigma_2^2 \mu_2 \sigma_1^2}{(\sigma_1^2+\sigma_2^2)^2}}{2\frac{\sigma_1^2 \sigma_2^2}{\sigma_1^2+\sigma_2^2}} \right) \
& \times \exp\left(\frac{ - (\sigma_1^2+\sigma_2^2)(\mu_1^2 \sigma_2^2+\mu_2^2 \sigma_1^2) + (\mu_1^2 \sigma_2^4+\mu_2^2 \sigma_1^4 + 2\mu_1 \sigma_2^2 \mu_2 \sigma_1^2) }{2 \sigma_1^2 \sigma_2^2 (\sigma_1^2+\sigma_2^2)}\right) \
& = \frac{1}{2\pi \sqrt{\frac{\sigma_1^2 \sigma_2^2}{\sigma_1^2+\sigma_2^2}} \sqrt{\sigma_1^2+\sigma_2^2}} \exp\left( -\frac{(x -\frac{\mu_1 \sigma_2^2 +\mu_2 \sigma_1^2}{\sigma_1^2+\sigma_2^2})^2}{2\frac{\sigma_1^2 \sigma_2^2}{\sigma_1^2+\sigma_2^2}} \right) \exp\left( -\frac{(\mu_1 - \mu_2)^2}{2 \sigma_1^2 + \sigma_2^2} \right)\
& = \frac{1}{\sqrt{2\pi} \sqrt{\frac{\sigma_1^2 \sigma_2^2}{\sigma_1^2+\sigma_2^2}}} \exp\left( -\frac{(x -\frac{\mu_1 \sigma_2^2 +\mu_2 \sigma_1^2}{\sigma_1^2+\sigma_2^2})^2}{2\frac{\sigma_1^2 \sigma_2^2}{\sigma_1^2+\sigma_2^2}} \right) \frac{1}{\sqrt{2\pi} \sqrt{\sigma_1^2+\sigma_2^2}} \exp\left( -\frac{(\mu_1 - \mu_2)^2}{2 \sigma_1^2 + \sigma_2^2} \right)\
& = \phi(x, \mu_i, \sigma_i^2) \phi(\mu_1, \mu_2, \sigma_1^2 + \sigma_2^2)\
\end{align}
$$
where $$\mu_i = \frac{1 / \sigma_1^2}{1/ \sigma_1^2 + 1/ \sigma_2^2}\mu_1 + \frac{1 / \sigma_2^2}{1/ \sigma_1^2 + 1/ \sigma_2^2}\mu_2 \text{ and } \sigma_i^2 = \frac{\sigma_1^2 \sigma_2^2}{\sigma_1^2 + \sigma_2^2}.$$<br>
Hence we complete the proof and we validate that
$$\Theta_{Bayes} = \mu_\text{new} = \frac{N/\sigma^2}{N/\sigma^2 + 1/\sigma_0^2} \Theta_{ML} +\frac{1/\sigma_0^2}{N/\sigma^2 + 1/\sigma_0^2} \mu_0.$$
Hand-written digit classification (40 points)
In the textbook sample code we applied different scikit-learn classifers for the Iris data set.
In this exercise, we will apply the same set of classifiers over a different data set: hand-written digits.
Please write down the code for different classifiers, choose their hyper-parameters, and compare their performance via the accuracy score as in the Iris dataset.
Which classifier(s) perform(s) the best and worst, and why?
The classifiers include:
* perceptron
* logistic regression
* SVM
* decision tree
* random forest
* KNN
* naive Bayes
The dataset is available as part of scikit learn, as follows.
End of explanation
"""
from sklearn.datasets import load_digits
digits = load_digits()
X = digits.data # training data
y = digits.target # training label
print(X.shape)
print(y.shape)
"""
Explanation: Load data
End of explanation
"""
import matplotlib.pyplot as plt
import pylab as pl
num_rows = 4
num_cols = 5
fig, ax = plt.subplots(nrows=num_rows, ncols=num_cols, sharex=True, sharey=True)
ax = ax.flatten()
for index in range(num_rows*num_cols):
img = digits.images[index]
label = digits.target[index]
ax[index].imshow(img, cmap='Greys', interpolation='nearest')
ax[index].set_title('digit ' + str(label))
ax[0].set_xticks([])
ax[0].set_yticks([])
plt.tight_layout()
plt.show()
"""
Explanation: Visualize data
End of explanation
"""
from sklearn.preprocessing import StandardScaler
if Version(sklearn_version) < '0.18':
from sklearn.cross_validation import train_test_split
else:
from sklearn.model_selection import train_test_split
print ('scikit-learn version: ' + str(Version(sklearn_version)))
# 1. Standardize features by removing the mean and scaling to unit variance
X_std = StandardScaler().fit_transform(X) # fit_transform(X) will fit to data, then transform it.
print ('1. Complete removing the mean and scaling to unit variance.')
# 2. splitting data into 70% training and 30% test data:
split_ratio = 0.3
X_train, X_test, y_train, y_test = train_test_split(X_std, y, test_size=split_ratio, random_state=0)
print('2. Complete splitting with ' + str(y_train.shape[0]) + \
'(' + str(int((1-split_ratio)*100)) +'%) training data and ' + \
str(y_test.shape[0]) + '(' + str(int(split_ratio*100)) +'%) test data.')
"""
Explanation: Data Preprocessing
Hint: How do you divide the training and test data sets? Apply other techniques we have learned if needed.
You could take a look at the Iris data set case in the textbook.
End of explanation
"""
from sklearn.linear_model import Perceptron
from sklearn.metrics import accuracy_score
# Training
ppn = Perceptron(n_iter=800, eta0=0.1, random_state=0)
ppn.fit(X_train, y_train)
# Testing
y_pred = ppn.predict(X_test)
# Results
print('Misclassified samples: %d out of %d' % ((y_test != y_pred).sum(), y_test.shape[0]))
print('Accuracy: %.3f' % accuracy_score(y_test, y_pred))
"""
Explanation: Classifier #1 Perceptron
End of explanation
"""
from sklearn.linear_model import LogisticRegression
# Training
lr = LogisticRegression(C=1.0, random_state=0) # we observe that changing C from 0.0001 to 1000 has ignorable effect
lr.fit(X_train, y_train)
# Testing
y_pred = lr.predict(X_test)
# Results
print('Misclassified samples: %d out of %d' % ((y_test != y_pred).sum(), y_test.shape[0]))
print('Accuracy: %.3f' % accuracy_score(y_test, y_pred))
"""
Explanation: Classifier #2 Logistic Regression
End of explanation
"""
from sklearn.svm import SVC
# 1. Using linear kernel
# Training
svm = SVC(kernel='linear', C=1.0, random_state=0)
svm.fit(X_train, y_train)
# Testing
y_pred = svm.predict(X_test)
# Results
print('1. Using linear kernel:')
print(' Misclassified samples: %d out of %d' % ((y_test != y_pred).sum(), y_test.shape[0]))
print(' Accuracy: %.3f' % accuracy_score(y_test, y_pred))
# 2. Using rbf kernel
# Training
svm = SVC(kernel='rbf', C=1.0, random_state=0)
svm.fit(X_train, y_train)
# Testing
y_pred = svm.predict(X_test)
# Results
print('2. Using rbf kernel:')
print(' Misclassified samples: %d out of %d' % ((y_test != y_pred).sum(), y_test.shape[0]))
print(' Accuracy: %.3f' % accuracy_score(y_test, y_pred))
"""
Explanation: Classifier #3 SVM
End of explanation
"""
from sklearn.tree import DecisionTreeClassifier
# 1. Using entropy criterion
# Training
tree = DecisionTreeClassifier(criterion='entropy', random_state=0)
tree.fit(X_train, y_train)
# Testing
y_pred = tree.predict(X_test)
# Results
print('1. Using entropy criterion:')
print(' Misclassified samples: %d out of %d' % ((y_test != y_pred).sum(), y_test.shape[0]))
print(' Accuracy: %.3f' % accuracy_score(y_test, y_pred))
# 2. Using Gini criterion
# Training
tree = DecisionTreeClassifier(criterion='gini', random_state=0)
tree.fit(X_train, y_train)
# Testing
y_pred = tree.predict(X_test)
# Results
print('2. Using Gini criterion:')
print(' Misclassified samples: %d out of %d' % ((y_test != y_pred).sum(), y_test.shape[0]))
print(' Accuracy: %.3f' % accuracy_score(y_test, y_pred))
"""
Explanation: Classifier #4 Decision Tree
End of explanation
"""
from sklearn.ensemble import RandomForestClassifier
# 1. Using entropy criterion
# Training
forest = RandomForestClassifier(criterion='entropy', n_estimators=10, random_state=1, n_jobs=2)
forest.fit(X_train, y_train)
# Testing
y_pred = forest.predict(X_test)
# Results
print('1. Using entropy criterion:')
print(' Misclassified samples: %d out of %d' % ((y_test != y_pred).sum(), y_test.shape[0]))
print(' Accuracy: %.3f' % accuracy_score(y_test, y_pred))
# 2. Using Gini criterion
# Training
forest = RandomForestClassifier(criterion='gini', n_estimators=10, random_state=1, n_jobs=2)
forest.fit(X_train, y_train)
# Testing
y_pred = forest.predict(X_test)
# Results
print('2. Using Gini criterion:')
print(' Misclassified samples: %d out of %d' % ((y_test != y_pred).sum(), y_test.shape[0]))
print(' Accuracy: %.3f' % accuracy_score(y_test, y_pred))
"""
Explanation: Classifer #5 Random Forest
End of explanation
"""
from sklearn.neighbors import KNeighborsClassifier
# Training
knn = KNeighborsClassifier(n_neighbors=5, p=2, metric='minkowski')
knn.fit(X_train, y_train)
# Testing
y_pred = knn.predict(X_test)
# Results
print('Misclassified samples: %d out of %d' % ((y_test != y_pred).sum(), y_test.shape[0]))
print('Accuracy: %.3f' % accuracy_score(y_test, y_pred))
"""
Explanation: Classifier #6 KNN
End of explanation
"""
from sklearn.naive_bayes import GaussianNB
# Training
gnb = GaussianNB()
gnb.fit(X_train, y_train)
# Testing
y_pred = gnb.predict(X_test)
# Results
print('Misclassified samples: %d out of %d' % ((y_test != y_pred).sum(), y_test.shape[0]))
print('Accuracy: %.3f' % accuracy_score(y_test, y_pred))
"""
Explanation: Classifier #7 Naive Bayes
End of explanation
"""
|
great-expectations/great_expectations | tests/test_fixtures/upgrade_helper/great_expectations_v20_project_with_v30_configuration_and_v20_checkpoints/notebooks/pandas/validation_playground.ipynb | apache-2.0 | import json
import great_expectations as ge
import great_expectations.jupyter_ux
from great_expectations.datasource.types import BatchKwargs
import datetime
"""
Explanation: Validation Playground
Watch a short tutorial video or read the written tutorial
This notebook assumes that you created at least one expectation suite in your project.
Here you will learn how to validate data loaded into a Pandas DataFrame against an expectation suite.
We'd love it if you reach out for help on the Great Expectations Slack Channel
End of explanation
"""
context = ge.data_context.DataContext()
"""
Explanation: 1. Get a DataContext
This represents your project that you just created using great_expectations init.
End of explanation
"""
context.list_expectation_suite_names()
expectation_suite_name = # TODO: set to a name from the list above
"""
Explanation: 2. Choose an Expectation Suite
List expectation suites that you created in your project
End of explanation
"""
# list datasources of the type PandasDatasource in your project
[datasource['name'] for datasource in context.list_datasources() if datasource['class_name'] == 'PandasDatasource']
datasource_name = # TODO: set to a datasource name from above
# If you would like to validate a file on a filesystem:
batch_kwargs = {'path': "YOUR_FILE_PATH", 'datasource': datasource_name}
# If you already loaded the data into a Pandas Data Frame:
batch_kwargs = {'dataset': "YOUR_DATAFRAME", 'datasource': datasource_name}
batch = context.get_batch(batch_kwargs, expectation_suite_name)
batch.head()
"""
Explanation: 3. Load a batch of data you want to validate
To learn more about get_batch, see this tutorial
End of explanation
"""
# This is an example of invoking a validation operator that is configured by default in the great_expectations.yml file
"""
Create a run_id. The run_id must be of type RunIdentifier, with optional run_name and run_time instantiation
arguments (or a dictionary with these keys). The run_name can be any string (this could come from your pipeline
runner, e.g. Airflow run id). The run_time can be either a dateutil parsable string or a datetime object.
Note - any provided datetime will be assumed to be a UTC time. If no instantiation arguments are given, run_name will
be None and run_time will default to the current UTC datetime.
"""
run_id = {
"run_name": "some_string_that_uniquely_identifies_this_run", # insert your own run_name here
"run_time": datetime.datetime.now(datetime.timezone.utc)
}
results = context.run_validation_operator(
"action_list_operator",
assets_to_validate=[batch],
run_id=run_id)
"""
Explanation: 4. Validate the batch with Validation Operators
Validation Operators provide a convenient way to bundle the validation of
multiple expectation suites and the actions that should be taken after validation.
When deploying Great Expectations in a real data pipeline, you will typically discover these needs:
validating a group of batches that are logically related
validating a batch against several expectation suites such as using a tiered pattern like warning and failure
doing something with the validation results (e.g., saving them for a later review, sending notifications in case of failures, etc.).
Read more about Validation Operators in the tutorial
End of explanation
"""
context.open_data_docs()
"""
Explanation: 5. View the Validation Results in Data Docs
Let's now build and look at your Data Docs. These will now include a data quality report built from the ValidationResults you just created, which helps you communicate about your data with both machines and humans.
Read more about Data Docs in the tutorial
End of explanation
"""
|
whitead/numerical_stats | unit_10/hw_2016/homework_9_key.ipynb | gpl-3.0 | from math import erf, sqrt
import numpy as np
import scipy.stats
"""
Explanation: Homework 9 Key
CHE 116: Numerical Methods and Statistics
Prof. Andrew White
Version 1.0 (3/23/2016)
End of explanation
"""
mu_sample=1070
mu_popul=1064.
st_dev=5
z=(-mu_popul+mu_sample)/st_dev
print('Z:', z)
p=(1 - np.abs((scipy.stats.norm.cdf(z)-scipy.stats.norm.cdf(-z))))
print('P-Value:', p)
"""
Explanation: General Instructions
For full credit, you must have the following items for each problem:
[1 point] Describe what and why the method you're using is applicable. For example, 'I chose the signed rank test because these are two matched datasets describing one measurement'
[1 point] Write out the null hypothesis. For example, 'The null hypothesis is that the two measurements sets came from the same population (synonymous with probability distribution)'
[1 point] Report the p-value and your alpha value
[1 point] if you accept/reject the null hypothesis and answer the question
1. $zM$ Tests (8 Points)
You have a sample of an unknown metal with a melting point of $1,070^\circ{}$ C. You know that gold has a melting point of $1,064^\circ{}$ C and your measurements have a standard deviation of $5^\circ{}$ C. Is the unknown metal likely to be gold?
Recall from confidence intervals that the standard deviation in distance from the true mean is $\sigma / \sqrt{N}$ when you know the true standard deviation, $\sigma$. You take three additional samples and get $1,071^\circ{}$ C, $1,067^\circ{}$ C, and $1,075^\circ{}$ C. Does your evidence for gold change? Use the original measurement as well.
1.1 Answer
$zM$ test is chosen because we have one sample compared with a parent group whose mean and standard deviation is known.
The null hypothesis: The sample is gold
End of explanation
"""
mu = 1064.
sigma = 5.
data = [1070, 1071, 1067, 1075]
Z = (mu - np.mean(data)) / (sigma / sqrt(len(data)))
print('Z:', Z)
p = 1 - (scipy.stats.norm.cdf(abs(Z)) - scipy.stats.norm.cdf(-abs(Z)))
print('P-Value:', p)
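# Equivalent shortcut (an extra check): the two-sided p-value is twice the upper-tail
# survival function of the standard normal.
print('P-Value (via sf):', 2 * scipy.stats.norm.sf(abs(Z)))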
"""
Explanation: The $p$-value is 0.23
We do not reject the null hypothesis, so the sample could be gold
1.2 Answer
$zM$ test is chosen because we have a sample compared with a parent group whose mean and standard deviation is known.
The null hypothesis: The sample is gold
The formula for a $Z$-statistic with a sample size greater than 1 is:
$$ Z = \frac{\mu - \bar{x}}{\sigma / \sqrt{N}}$$
End of explanation
"""
mu = 89.3
data = [112.7, 78, 59.9, 127]
T = (mu - np.mean(data)) / np.sqrt(np.var(data, ddof=1) / len(data))
T = np.abs(T)
print('T:', T)
p = 1 - (scipy.stats.t.cdf(T, len(data)) - scipy.stats.t.cdf(-T, len(data)))
print('p-value:', p)
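# Cross-check (an addition) with scipy's built-in one-sample t-test. Note it uses
# n-1 degrees of freedom, so its p-value can differ slightly from the hand-rolled
# value above.
t_stat, p_builtin = scipy.stats.ttest_1samp(data, mu)
print('ttest_1samp p-value:', p_builtin)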
"""
Explanation: The $p$-value is 0.006
We do reject the null hypothesis, so the sample is not gold. Different than last time
2. $t$-Tests (4 Points)
The median snowfall in Rochester is 89.3. The last four snowfalls have been 112.7, 78, 59.9 and 127. Are these snowfalls abnormal?
Repeat problem 1.2 without knowing the standard deviation
2.1 Answer
$t$-test is chosen because we have a sample compared with a parent group whose mean is known but not standard deviation.
The null hypothesis: The snowfall is about the same as usual
End of explanation
"""
mu = 1064
data = [1070, 1071, 1067, 1075]
T = (mu - np.mean(data)) / np.sqrt(np.var(data, ddof=1) / len(data))
T = np.abs(T)
print('T:', T)
p = 1 - (scipy.stats.t.cdf(T, len(data)) - scipy.stats.t.cdf(-T, len(data)))
print('p-value:', p)
"""
Explanation: The $p$-value is 0.76
We do not reject the null hypothesis, so the snowfall is as usual
2.2 Answer
$t$-test because we're comparing a single sample with a parent group whose standard deviation is unknown
The null hypothesis: the samples are gold
End of explanation
"""
data_1 = [3.05, 3.01, 3.20, 3.16, 3.11, 3.09]
data_2 = [3.18, 3.23, 3.19, 3.28, 3.08, 3.18]
"""
Explanation: The $p$-value is 0.015
We still reject the null hypothesis
3. Wilcoxon's Sum of Rank Test (4 Points)
You are comparing the GPAs of students who take a new Freshman preparedness course and those who do not. Their GPAs are given below. Does the course help the students?
End of explanation
"""
_,p = scipy.stats.ranksums(data_1, data_2)
print(p)
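# A closely related alternative (extra check): the Mann-Whitney U test, which should
# give a p-value in the same ballpark as the rank-sum test above.
_, p_mw = scipy.stats.mannwhitneyu(data_1, data_2, alternative='two-sided')
print(p_mw)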
"""
Explanation: 3.1 Answer
The Wilcoxon sum of ranks test is chosen because we are comparing two unpaired sample groups
The null hypothesis: The two sample groups came from the same parent distribution.
End of explanation
"""
data_empty_tummy = [17.1, 29.5, 23.8, 37.3, 19.6, 24.2, 30.0, 20.9]
data_garbage_tummy = [14.2, 30.3, 21.5, 36.3, 19.6, 24.5, 26.7, 20.6]
"""
Explanation: The $p$-value is 0.08
We do not reject the null hypothesis, so the course has no effect
4. Wilcoxon's Signed Rank Test (4 Points)
You calculate how long it takes someone to run two miles before and after they've eaten a garbage plate. Does eating a garbage plate influence your ability to run?
End of explanation
"""
_,p = scipy.stats.wilcoxon(data_empty_tummy, data_garbage_tummy)
print('p-value:', p)
"""
Explanation: 4.1 Answer
Wilcoxon signed rank is chosen because we have two paired sample groups that we're comparing
The null hypothesis: The two sample groups are from the same parent distribution (no change)
End of explanation
"""
temperature = [15, 18, 21, 24, 27, 30, 33]
chem_yield = [66, 69, 69, 70, 64, 73, 75]
"""
Explanation: The $p$-value is 0.13
We do not reject the null hypothesis, so there appears to be no difference.
5. Spearman's Correlation Test (4 Points)
We've performed a chemical reaction at different temperatures and would like to see if there is a relationship with temperature and yield. Is there one?
End of explanation
"""
scipy.stats.spearmanr(temperature, chem_yield)
"""
Explanation: 5.1 Answer
Spearman's Correlation coefficient because we've measured two different things for one sample group
Null hypothesis: there is no correlation
End of explanation
"""
p_winning = 1 / 10**7
expected = p_winning * 10**6
p = 2 * (1 - scipy.stats.poisson.cdf(3, mu=expected))
print(p)
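# Equivalent using the survival function (extra check): P(X >= 4) = sf(3), so this
# matches the hand-rolled line above.
print(2 * scipy.stats.poisson.sf(3, mu=expected))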
"""
Explanation: the $p$-value is 0.13
There is not quite enough evidence to reject the null hypothesis, so we conclude there is no correlation
6. Poisson Test (4 Points)
Some speculate that the lottery is an elaborate trap for time-travelers. We set-up a lottery where the odds of winning are one in 10 million. If one million people play and we get 3 winners, should we be suspicious of the number of winners?
6.1 Answer
Poisson's test is chosen, because we're comparing a count to a known parent distribution
Null hypothesis: The count is from the known parent distribution
End of explanation
"""
p = 0.5
N = 25
n = 8
print(2 *scipy.stats.binom.cdf(n, N, p))
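# Cross-check (an addition) with SciPy's exact binomial test; newer SciPy versions
# provide binomtest, older ones binom_test - both give the exact two-sided p-value.
try:
    print(scipy.stats.binomtest(n, N, p).pvalue)
except AttributeError:
    print(scipy.stats.binom_test(n, N, p))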
"""
Explanation: The $p$-value is 0.000008
The null hypothesis is rejected, we should arrest the winners for time travel
7. Binomial Test (4 Points)
You're wondering if you have a fair coin or not. You've flipped it 25 times and gotten heads 8 times. Is there evidence that the coin is unfair?
7.1 Answer
A binomial test is appropriate because we're comparing a sample from a known distribution where the number of trials is fixed and the probability of an outcome is constant
The null hypothesis is that the outcome of the experiment came from the known binomial distribution
End of explanation
"""
|
jorisvandenbossche/DS-python-data-analysis | _solved/case2_observations_processing.ipynb | bsd-3-clause | import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
plt.style.use('seaborn-whitegrid')
"""
Explanation: <p><font size="6"><b> CASE - Observation data - data cleaning and enrichment</b></font></p>
© 2021, Joris Van den Bossche and Stijn Van Hoey (jorisvandenbossche@gmail.com, stijnvanhoey@gmail.com). Licensed under CC BY 4.0 Creative Commons
End of explanation
"""
survey_data = pd.read_csv("data/surveys.csv")
survey_data.head()
"""
Explanation: Scenario:<br>
Observation data of species (when and where a given species is observed) is typical in biodiversity studies. Large international initiatives support the collection of this data by volunteers, e.g. iNaturalist. Thanks to initiatives like GBIF, a lot of this data is also openly available.
You decide to share data of a field campaign, but the data set still requires some cleaning and standardization. For example, the coordinates can be named x/y, decimalLatitude/decimalLongitude, lat/long... Luckily, you know of an international open data standard to describe occurrence/observation data, i.e. Darwin Core (DwC). Instead of inventing your own data model, you decide to comply with this international standard. This will enhance communication and will also make your data compliant with GBIF.
In short, DwC describes a flat table (cfr. CSV) with an agreed naming convention for the header names and conventions on how certain data types need to be represented (as a reference, an in-depth description is given here). For this tutorial, we will focus on a few of the existing terms to learn some elements about data cleaning:
* eventDate: ISO 8601 format of dates
* scientificName: the accepted scientific name of the species
* decimalLatitude/decimalLongitude: coordinates of the occurrence in WGS84 format
* sex: either male or female to characterize the sex of the occurrence
* occurrenceID: an identifier within the data set to identify the individual records
* datasetName: a static string defining the source of the data
Furthermore, additional information concerning the taxonomy will be added using an external API service
Dataset to work on:
For this data set, the data is split up in the following main data files:
* surveys.csv the data with the surveys in the individual plots
* species.csv the overview list of the species short-names
* plot_location.xlsx the overview of coordinates of the individual locations
The data originates from a study of a Chihuahuan desert ecosystem near Portal, Arizona.
1. Survey-data
Reading in the data of the individual surveys:
End of explanation
"""
len(survey_data)
"""
Explanation: <div class="alert alert-success">
**EXERCISE**
- How many individual records (occurrences) does the survey data set contain?
</div>
End of explanation
"""
datasetname = "Ecological Archives E090-118-D1."
"""
Explanation: Adding the data source information as static column
For convenience when this data set is combined with other datasets, we first add a column of static values, defining the datasetName of this particular data:
End of explanation
"""
survey_data["datasetName"] = datasetname
"""
Explanation: Adding this static value as a new column datasetName:
<div class="alert alert-success">
**EXERCISE**
Add a new column, `datasetName`, to the survey data set with `datasetname` as value for all of the records (static value for the entire data set)
<details><summary>Hints</summary>
- When a column does not exist, a new `df["a_new_column"]` can be created by assigning a value to it.
- No `for`-loop is required, as Pandas will automatically broadcast a single string value to each of the rows in the `DataFrame`.
</details>
</div>
End of explanation
"""
survey_data["sex_char"].unique().tolist()
"""
Explanation: Cleaning the sex_char column into a DwC called sex column
<div class="alert alert-success">
**EXERCISE**
- Get a list of the unique values for the column `sex_char`.
<details><summary>Hints</summary>
- To find the unique values, look for a function called `unique` (remember `SHIFT`+`TAB` combination to explore the available methods/attributes?)
</details>
</div>
End of explanation
"""
survey_data = survey_data.rename(columns={'sex_char': 'verbatimSex'})
"""
Explanation: So, apparently, more information is provided in this column, whereas according to the metadata information, the sex information should be either M (male) or F (female). We will create a column, named sex and convert the symbols to the corresponding sex, taking into account the following mapping of the values (see metadata for more details):
* M -> male
* F -> female
* R -> male
* P -> female
* Z -> nan
At the same time, we will save the original information of the sex_char in a separate column, called verbatimSex, as a reference in case we need the original data later.
In summary, we have to:
* rename the sex_char column to verbatimSex
* create a new column with the name sex
* map the original values of the sex_char to the values male and female according to the mapping above
First, let's convert the name of the column header sex_char to verbatimSex with the rename function:
End of explanation
"""
sex_dict = {"M": "male",
"F": "female",
"R": "male",
"P": "female",
"Z": np.nan}
survey_data['sex'] = survey_data['verbatimSex'].replace(sex_dict)
"""
Explanation: <div class="alert alert-success">
**EXERCISE**
- Express the mapping of the values (e.g. `M` -> `male`) into a Python dictionary object with the variable name `sex_dict`. `Z` values correspond to _Not a Number_, which can be defined as `np.nan`.
- Use the `sex_dict` dictionary to replace the values in the `verbatimSex` column to the new values and save the mapped values in a new column 'sex' of the DataFrame.
<details><summary>Hints</summary>
- A dictionary is a Python standard library data structure, see https://docs.python.org/3/tutorial/datastructures.html#dictionaries - no Pandas magic involved when you need a key/value mapping.
- When you need to replace values, look for the Pandas method `replace`.
</details>
</div>
End of explanation
"""
survey_data["sex"].unique()
"""
Explanation: Checking the current frequency of values of the resulting sex column (this should result in the values male, female and nan):
End of explanation
"""
survey_data["sex"].value_counts(dropna=False).plot(kind="barh", color="#00007f")
"""
Explanation: To check what the frequency of occurrences is for male/female of the categories, a bar chart is a possible representation:
<div class="alert alert-success">
**EXERCISE**
- Make a horizontal bar chart comparing the number of male, female and unknown (`NaN`) records in the data set.
<details><summary>Hints</summary>
- Pandas provides a shortcut method `value_counts` which works on Pandas `Series` to count unique values. Explore the documentation of the `value_counts` method to include the `NaN` values as well.
- Check in the help of the Pandas plot function for the `kind` parameter.
</details>
</div>
End of explanation
"""
survey_data["species"].unique()
survey_data.head(10)
"""
Explanation: <div class="alert alert-warning">
<b>NOTE</b>: The usage of `groupby` combined with the `size` of each group would be an option as well. However, the latter does not support to count the `NaN` values as well. The `value_counts` method does support this with the `dropna=False` argument.
</div>
Solving double entry field by decoupling
When checking the species unique information:
End of explanation
"""
example = survey_data.loc[7:10, "species"]
example
"""
Explanation: There apparently exists a double entry: 'DM and SH', which basically defines two records and should be decoupled into two individual records (i.e. rows). Hence, we should be able to create an additional row based on this split. To do so, Pandas provides a dedicated function since version 0.25, called explode. Starting from a small subset example:
End of explanation
"""
example.str.split("and")
"""
Explanation: Using the split method on strings, we can split the string using a given character, in this case the word and:
End of explanation
"""
example_split = example.str.split("and").explode()
example_split
"""
Explanation: The explode method will create a row for each element in the list:
End of explanation
"""
example_split.iloc[1], example_split.iloc[2]
"""
Explanation: Hence, the DM and SH are now enlisted in separate rows. Other rows remain unchanged. The only remaining issue is the spaces around the characters:
End of explanation
"""
example_split.str.strip()
"""
Explanation: Which we can solve again using the string method strip, removing the spaces before and after the characters:
End of explanation
"""
def solve_double_field_entry(df, keyword="and", column="verbatimEventDate"):
"""Split on keyword in column for an enumeration and create extra record
Parameters
----------
df: pd.DataFrame
DataFrame with a double field entry in one or more values
keyword: str
word/character to split the double records on
column: str
column name to use for the decoupling of the records
"""
df = df.copy() # copy the input DataFrame to avoid editing the original
df[column] = df[column].str.split(keyword)
df = df.explode(column)
df[column] = df[column].str.strip() # remove white space around the words
return df
"""
Explanation: To make this reusable, let's create a dedicated function to combine these steps, called solve_double_field_entry:
End of explanation
"""
survey_data_decoupled = solve_double_field_entry(survey_data,
"and",
column="species") # get help of the function by SHIFT + TAB
survey_data_decoupled["species"].unique()
survey_data_decoupled.head(11)
"""
Explanation: The function takes a DataFrame as input, splits the record into separate rows and returns an updated DataFrame. We can use this function to get an update of the DataFrame, with an additional row (observation) added by decoupling the specific field. Let's apply this new function.
<div class="alert alert-success">
**EXERCISE**
- Use the function `solve_double_field_entry` to update the `survey_data` by decoupling the double entries. Save the result as a variable `survey_data_decoupled`.
<details><summary>Hints</summary>
- As we added a 'docstring' to the function, we can check our own documentation to know how to use the function and which inputs we should provide. You can use `SHIFT` + `TAB` to explore the documentation just like any other function.
</details>
</div>
End of explanation
"""
np.arange(1, len(survey_data_decoupled) + 1, 1)
"""
Explanation: Create new occurrence identifier
The record_id is no longer a unique identifier for each observation after the decoupling of this data set. We will make a new data set specific identifier, by adding a column called occurrenceID that takes a new counter as identifier. As a simple and straightforward approach, we will use a new counter for the whole dataset, starting with 1:
End of explanation
"""
survey_data_decoupled["occurrenceID"] = np.arange(1, len(survey_data_decoupled) + 1, 1)
"""
Explanation: To create a new column with header occurrenceID with the values 1 -> 35550 as field values:
End of explanation
"""
survey_data_decoupled = survey_data_decoupled.drop(columns="record_id")
"""
Explanation: To overcome the confusion on having both a record_id and occurrenceID field, we will remove the record_id term:
End of explanation
"""
survey_data_decoupled.head(10)
"""
Explanation: Hence, columns can be drop-ped out of a DataFrame
End of explanation
"""
# pd.to_datetime(survey_data_decoupled[["year", "month", "day"]]) # uncomment the line and test this statement
"""
Explanation: Converting the date values
In the survey data set we received, the month, day, and year columns contain the information about the date, i.e. eventDate in Darwin Core terms. We want this data in ISO format, YYYY-MM-DD. A convenient Pandas function is to_datetime, which provides multiple options to interpret dates. One of the options is the automatic interpretation of some 'typical' columns, like year, month and day, when passing a DataFrame.
End of explanation
"""
sum(pd.to_datetime(survey_data_decoupled[["year", "month", "day"]], errors='coerce').isna())
"""
Explanation: This is not working; not all dates can be interpreted... We should get some more information on the reason for the errors. By using the option coerce, the problem makers will be labeled as the missing value NaT. We can count the number of dates that cannot be interpreted:
End of explanation
"""
mask = pd.to_datetime(survey_data_decoupled[["year", "month", "day"]], errors='coerce').isna()
trouble_makers = survey_data_decoupled[mask]
"""
Explanation: <div class="alert alert-success">
**EXERCISE**
- Make a selection of `survey_data_decoupled` containing those records that can not correctly be interpreted as date values and save the resulting `DataFrame` as a new variable `trouble_makers`
<details><summary>Hints</summary>
- The result of the `.isna()` method is a `Series` of boolean values, which can be used to make a selection (so called boolean indexing or filtering)
</details>
</div>
End of explanation
"""
trouble_makers.head()
trouble_makers["day"].unique()
trouble_makers["month"].unique()
trouble_makers["year"].unique()
"""
Explanation: Checking some characteristics of the trouble_makers:
End of explanation
"""
mask = pd.to_datetime(survey_data_decoupled[["year", "month", "day"]], errors='coerce').isna()
survey_data_decoupled.loc[mask, "day"] = 30
"""
Explanation: The issue is the presence of day 31 during the months April and September of the year 2000. At this point, we would have to recheck the original data to find out how the issue should be solved. Apparently - for this specific case - there was a data-entry problem in 2000, and the days recorded as 31 during this period should actually be 30. It would be best to correct this in the source data set, but for this exercise, we will correct it here.
<div class="alert alert-success">
**EXERCISE**
- Assign in the `DataFrame` `survey_data_decoupled` all of the troublemakers `day` values the value 30 instead of 31.
<details><summary>Hints</summary>
- No `for`-loop is required, but use the same boolean mask to assign the new value to the correct rows.
- Check `pandas_03b_indexing.ipynb` for the usage of `loc` and `iloc` to assign new values.
- With `loc`, specify both the selecting for the rows and for the columns (`df.loc[row_indexer, column_indexer] = ..`).
</details>
</div>
End of explanation
"""
survey_data_decoupled["eventDate"] = \
pd.to_datetime(survey_data_decoupled[["year", "month", "day"]])
"""
Explanation: Now, we do the parsing again to create a proper eventDate field, containing the dates:
End of explanation
"""
(survey_data_decoupled["year"]
.value_counts(sort=False)
.sort_index()
.plot(kind='barh', color="#00007f", figsize=(10, 10)));
(survey_data_decoupled
.groupby("year")
.size()
.plot(kind='barh', color="#00007f", figsize=(10, 10)))
survey_data_decoupled.head()
"""
Explanation: <div class="alert alert-success">
**EXERCISE**
- Check the number of observations for each year. Create a horizontal bar chart with the number of rows/observations for each year.
<details><summary>Hints</summary>
- To get the total number of observations, both the usage of `value_counts` as using `groupby` + `size` will work. `value_counts` is a convenient function when all you need to do is counting rows.
- When using `value_counts`, the years in the index will no longer be in ascending order. You can chain methods and include a `sort_index()` method to sort these again.
</details>
</div>
End of explanation
"""
survey_data_decoupled["eventDate"].dtype
"""
Explanation: Currently, the dates are stored in a python specific date format:
End of explanation
"""
survey_data_decoupled.eventDate.dt #add a dot (.) and press TAB to explore the date options it provides
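# For example (an extra illustration), extracting the month of each observation
# while eventDate is still stored as a datetime:
survey_data_decoupled["eventDate"].dt.month.head()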
"""
Explanation: This is great, because it allows for many functionalities using the .dt accessor:
End of explanation
"""
(survey_data_decoupled
.groupby(survey_data_decoupled["eventDate"].dt.year)
.size()
.plot(kind='barh', color="#00007f", figsize=(10, 10)))
"""
Explanation: <div class="alert alert-success">
**EXERCISE**
- Create a horizontal bar chart with the number of records for each year (cfr. supra), but without using the column `year`, using the `eventDate` column directly.
<details><summary>Hints</summary>
- Check the `groupby` + `size` solution of the previous exercise and use this to start with. Replace the `year` inside the `groupby` method...
</details>
</div>
End of explanation
"""
nrecords_by_dayofweek = survey_data_decoupled["eventDate"].dt.dayofweek.value_counts().sort_index()
fig, ax = plt.subplots(figsize=(6, 6))
nrecords_by_dayofweek.plot(kind="barh", color="#00007f", ax=ax);
# If you want to represent the ticklabels as proper names, uncomment the following line:
# ax.set_yticklabels(["Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday", "Sunday"]);
# Python standard library has a lot of useful functionalities! So why not use them?
#import calendar
#ax.set_yticklabels(calendar.day_name);
"""
Explanation: We actually do not need the day, month, year columns anymore, but feel free to use what suits you best.
<div class="alert alert-success">
**EXERCISE**
- Create a bar chart with the number of records for each day of the week (`dayofweek`)
<details><summary>Hints</summary>
- Pandas has an accessor for `dayofweek` as well.
- You can specify the days of the week yourself to improve the plot, or use the Python standard library `calendar.day_name` (import the calendar module first) to get the names.
</details>
</div>
End of explanation
"""
survey_data_decoupled["eventDate"] = survey_data_decoupled["eventDate"].dt.strftime('%Y-%m-%d')
survey_data_decoupled["eventDate"].head()
"""
Explanation: When saving the information to a file (e.g. CSV-file), this data type will be automatically converted to a string representation. However, we could also decide to explicitly provide the string format the dates are stored (losing the date type functionalities), in order to have full control on the way these dates are formatted:
End of explanation
"""
survey_data_decoupled = survey_data_decoupled.drop(columns=["day", "month", "year"])
"""
Explanation: For the remainder, let's remove the day/year/month columns.
End of explanation
"""
species_data = pd.read_csv("data/species.csv", sep=";")
species_data.head()
"""
Explanation: 2. Add species names to dataset
The column species only provides a short identifier in the survey overview. The name information is stored in a separate file species.csv. We want our data set to include this information, read in the data and add it to our survey data set:
<div class="alert alert-success">
**EXERCISE**
- Read in the 'species.csv' file and save the resulting `DataFrame` as variable `species_data`.
<details><summary>Hints</summary>
- Check the delimiter (`sep`) parameter of the `read_csv` function.
</details>
</div>
End of explanation
"""
species_data.loc[species_data["species_id"] == "NE", "species_id"] = "NA"
# note: for such a 1:1 replacement, we could also use the "replace()" method
# species_data["species_id"] = species_data["species_id"].replace({"NE": "NA"})
"""
Explanation: Fix a wrong acronym naming
When reviewing the metadata, you see that in the data-file the acronym NE is used to describe Neotoma albigula, whereas in the metadata description, the acronym NA is used.
<div class="alert alert-success">
**EXERCISE**
- Convert the value of 'NE' to 'NA' by using Boolean indexing/Filtering for the `species_id` column.
<details><summary>Hints</summary>
- To assign a new value, use the `loc` operator.
- With `loc`, specify both the selecting for the rows and for the columns (`df.loc[row_indexer, column_indexer] = ..`).
</details>
</div>
End of explanation
"""
survey_data_species = pd.merge(survey_data_decoupled, species_data, how="left", # LEFT OR INNER?
left_on="species", right_on="species_id")
len(survey_data_species) # check length after join operation
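# Extra check (an addition): with a left join, survey records whose species code found
# no match (or that had no species recorded) end up with NaN in the joined species_id column.
print(survey_data_species["species_id"].isna().sum(), "records without a matched species")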
"""
Explanation: Merging surveys and species
Now that we have prepared the two data sets, we can combine the data, again using the pd.merge operation.
We want to add the data of the species to the survey data, in order to see the full species names in the combined data table.
<div class="alert alert-success">
**EXERCISE**
Combine the DataFrames `survey_data_decoupled` and `species_data` by adding the corresponding species information (name, class, kingdom,..) to the individual observations. Assign the output to a new variable `survey_data_species`.
<details><summary>Hints</summary>
- This is an example of a database JOIN operation. Pandas provides the `pd.merge` function to join two data sets using a common identifier.
- Take into account that our key-column is different for `species_data` and `survey_data_decoupled`, respectively `species` and `species_id`. The `pd.merge()` function has `left_on` and `right_on` keywords to specify the name of the column in the left and right `DataFrame` to merge on.
</details>
End of explanation
"""
survey_data_species.head()
"""
Explanation: The join is ok, but we are left with some redundant columns and wrong naming:
End of explanation
"""
survey_data_species = survey_data_species.drop(["species_x", "species_id"], axis=1)
"""
Explanation: We do not need the columns species_x and species_id column anymore, as we will use the scientific names from now on:
End of explanation
"""
survey_data_species = survey_data_species.rename(columns={"species_y": "species"})
survey_data_species.head()
len(survey_data_species)
"""
Explanation: The column species_y could just be named species:
End of explanation
"""
plot_data = pd.read_excel("data/plot_location.xlsx", skiprows=3, index_col=0)
plot_data.head()
"""
Explanation: 3. Add coordinates from the plot locations
Loading the coordinate data
The individual plots are only identified by a plot identification number. In order to provide sufficient information to external users, additional information about the coordinates should be added. The coordinates of the individual plots are saved in another file: plot_location.xlsx. We will use this information to further enrich our data set and add the Darwin Core Terms decimalLongitude and decimalLatitude.
<div class="alert alert-success">
**EXERCISE**
- Read the excel file 'plot_location.xlsx' and store the data as the variable `plot_data`, with 3 columns: plot, xutm, yutm.
<details><summary>Hints</summary>
- Pandas read methods all have a similar name, `read_...`.
</details>
</div>
End of explanation
"""
from pyproj import Transformer
transformer = Transformer.from_crs("EPSG:32612", "epsg:4326")
"""
Explanation: Transforming to other coordinate reference system
These coordinates are in meters, more specifically in the UTM 12 N coordinate system. However, the agreed coordinate representation for Darwin Core is the World Geodetic System 1984 (WGS84).
As this is not a GIS course, we will shortcut the discussion about different projection systems, but provide an example on how such a conversion from UTM12N to WGS84 can be performed with the projection toolkit pyproj and by relying on the existing EPSG codes (a registry originally setup by the association of oil & gas producers).
First, we define out two projection systems, using their corresponding EPSG codes:
End of explanation
"""
transformer.transform(681222.131658, 3.535262e+06)
"""
Explanation: The reprojection can be done by the function transform of the projection toolkit, providing the coordinate systems and a set of x, y coordinates. For example, for a single coordinate, this can be applied as follows:
End of explanation
"""
def transform_utm_to_wgs(row):
"""Converts the x and y coordinates
Parameters
----------
row : pd.Series
Single DataFrame row
Returns
-------
pd.Series with longitude and latitude
"""
transformer = Transformer.from_crs("EPSG:32612", "epsg:4326")
return pd.Series(transformer.transform(row['xutm'], row['yutm']))
# test the new function on a single row of the DataFrame
transform_utm_to_wgs(plot_data.loc[0])
plot_data.apply(transform_utm_to_wgs, axis=1)
plot_data[["decimalLongitude" ,"decimalLatitude"]] = plot_data.apply(transform_utm_to_wgs, axis=1)
plot_data.head()
"""
Explanation: Such a transformation is not supported by Pandas itself (it is available in https://geopandas.org/). In such a situation, we want to apply a custom function to each row of the DataFrame. Instead of writing a for loop to do this for each of the coordinates in the list, we can .apply() this function with Pandas.
<div class="alert alert-success">
**EXERCISE**
Apply the pyproj function `transform` to plot_data, using the columns `xutm` and `yutm` and save the resulting output in 2 new columns, called `decimalLongitude` and `decimalLatitude`:
- Create a function `transform_utm_to_wgs` that takes a row of a `DataFrame` and returns a `Series` of two elements with the longitude and latitude.
- Test this function on the first row of `plot_data`
- Now `apply` this function on all rows (use the `axis` parameter correct)
- Assign the result of the previous step to `decimalLongitude` and `decimalLatitude` columns
<details><summary>Hints</summary>
- Convert the output of the transformer to a Series before returning (`pd.Series(....)`)
- A convenient way to select a single row is using the `.loc[0]` operator.
- `apply` can be used for both rows (`axis` 1) as columns (`axis` 0).
- To assign two columns at once, you can use a similar syntax as for selecting multiple columns with a list of column names (`df[['col1', 'col2']]`).
</details>
</div>
End of explanation
"""
plot_data_selection = plot_data[["plot", "decimalLongitude", "decimalLatitude"]]
"""
Explanation: The above function transform_utm_to_wgs you have created is a very specific function that knows the structure of the DataFrame you will apply it to (it assumes the 'xutm' and 'yutm' column names). We could also make a more generic function that just takes a X and Y coordinate and returns the Series of converted coordinates (transform_utm_to_wgs2(X, Y)).
An alternative to apply such a custom function to the plot_data DataFrame is the usage of the lambda construct, which lets you specify a function on one line as an argument:
transformer = Transformer.from_crs("EPSG:32612", "epsg:4326")
plot_data.apply(lambda row : transformer.transform(row['xutm'], row['yutm']), axis=1)
<div class="alert alert-warning">
__WARNING__
Do not abuse the usage of the `apply` method, but always look for an existing Pandas function first as these are - in general - faster!
</div>
Join the coordinate information to the survey data set
We can extend our survey data set with this coordinate information. Making the combination of two data sets based on a common identifier is completely similar to the usage of JOIN operations in databases. In Pandas, this functionality is provided by pd.merge.
In practice, we have to add the columns decimalLongitude/decimalLatitude to the current data set survey_data_species, by using the plot identification number as key to join.
<div class="alert alert-success">
**EXERCISE**
- Extract only the columns to join to our survey dataset: the `plot` identifiers, `decimalLatitude` and `decimalLongitude` into a new variable named `plot_data_selection`
<details><summary>Hints</summary>
- To select multiple columns, use a `list` of column names, e.g. `df[["my_col1", "my_col2"]]`
</details>
</div>
End of explanation
"""
survey_data_plots = pd.merge(survey_data_species, plot_data_selection,
how="left", on="plot")
survey_data_plots.head()
"""
Explanation: <div class="alert alert-success">
**EXERCISE**
Combine the DataFrame `plot_data_selection` and the DataFrame `survey_data_species` by adding the corresponding coordinate information to the individual observations using the `pd.merge()` function. Assign the output to a new variable `survey_data_plots`.
<details><summary>Hints</summary>
- This is an example of a database JOIN operation. Pandas provides the `pd.merge` function to join two data sets using a common identifier.
- The key-column is the `plot`.
</details>
End of explanation
"""
survey_data_plots = survey_data_plots.rename(columns={'plot': 'verbatimLocality'})
"""
Explanation: The plot locations need to be stored with the variable name verbatimLocality indicating the identifier as integer value of the plot:
End of explanation
"""
survey_data_plots.to_csv("interim_survey_data_species.csv", index=False)
"""
Explanation: Let's now save our clean data to a csv file, so we can further analyze the data in a following notebook:
End of explanation
"""
import requests
"""
Explanation: (OPTIONAL SECTION) 4. Using an API service to match the scientific names
As the current species names are rather short and could eventually lead to confusion when shared with other users, retrieving additional information about the different species in our dataset would be useful to integrate our work with other research. An option is to match our names with an external service to request additional information about the different species.
One of these services is the GBIF API. The service can most easily be illustrated with a small example:<br><br>
In a new browser tab, go to the URL http://www.gbif.org/species/2475532, which corresponds to the page of Alcedo atthis (ijsvogel in Dutch, i.e. the common kingfisher). One could, for each of the species in our list, search the GBIF website for the corresponding species page and extract more information manually. However, this would take a lot of time...
Therefore, GBIF (like many other organizations!) provides a service (or API) to extract the same information in a machine-readable way, in order to automate these searches. As an example, let's search for the information on Alcedo atthis using the GBIF API: go to the URL http://api.gbif.org/v1/species/match?name=Alcedo atthis and check the output. What we did is a machine-based search on the GBIF website for information about Alcedo atthis.
The same can be done using Python. The main library we need for this kind of automated search is the requests package, which can be used to do requests to any kind of API out there.
End of explanation
"""
species_name = 'Alcedo atthis'
base_string = 'http://api.gbif.org/v1/species/match?'
request_parameters = {'verbose': False, 'strict': True, 'name': species_name}
message = requests.get(base_string, params=request_parameters).json()
message
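# Pulling a couple of fields out of the returned dictionary (an extra illustration;
# keys as returned by the GBIF species/match endpoint for a successful match):
print(message.get('scientificName'), '-', message.get('status'))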
"""
Explanation: Example matching with Alcedo atthis
For the example of Alcedo atthis:
End of explanation
"""
genus_name = "Callipepla"
species_name = "squamata"
name_to_match = '{} {}'.format(genus_name, species_name)
base_string = 'http://api.gbif.org/v1/species/match?'
request_parameters = {'strict': True, 'name': name_to_match} # use strict matching(!)
message = requests.get(base_string, params=request_parameters).json()
message
"""
Explanation: From this we get a dictionary containing more information about the taxonomy of Alcedo atthis.
In the available species data set, the name to match is provided as a combination of two columns, so we have to combine those two in order to execute the name matching:
End of explanation
"""
def name_match(genus_name, species_name, strict=True):
"""
Perform a GBIF name matching using the species and genus names
Parameters
----------
genus_name: str
name of the genus of the species
species_name: str
name of the species to request more information
strict: boolean
define if the mathing need to be performed with the strict
option (True) or not (False)
Returns
-------
message: dict
dictionary with the information returned by the GBIF matching service
"""
name = '{} {}'.format(genus_name, species_name)
base_string = 'http://api.gbif.org/v1/species/match?'
request_parameters = {'strict': strict, 'name': name} # use strict matching(!)
message = requests.get(base_string, params=request_parameters).json()
return message
"""
Explanation: To apply this to our species data set, we will have to do this request for each individual genus/species combination. As this is recurring functionality, we will write a small function to do this:
Writing a custom matching function
<div class="alert alert-success">
**EXERCISE**
- Write a function, called `name_match` that takes the `genus`, the `species` and the option to perform a strict matching or not as inputs, performs a matching with the GBIF name matching API and return the received message as a dictionary.
</div>
End of explanation
"""
genus_name = "Callipepla"
species_name = "squamata"
name_match(genus_name, species_name, strict=True)
"""
Explanation: <div class="alert alert-info">
**NOTE**
For handling many of these API requests, dedicated packages exist, e.g. <a href="https://github.com/sckott/pygbif">pygbif</a> provides different functions to do requests to the GBIF API, basically wrapping the request possibilities. For any kind of service, just ask yourself: does the dedicated library provide sufficient additional advantage, or can I easily set up the request myself? (or sometimes: which one has the best documentation...)<br><br>Many services exist for a wide range of applications, e.g. scientific name matching, matching of addresses, downloading of data,...
</div>
Testing our custom matching function:
End of explanation
"""
genus_name = "Lizard"
species_name = "sp."
name_match(genus_name, species_name, strict=True)
"""
Explanation: However, the matching won't provide an answer for every search:
End of explanation
"""
#%%timeit
unique_species = survey_data_plots[["genus", "species"]].drop_duplicates().dropna()
len(unique_species)
"""
Explanation: Match each of the species names of the survey data set
Hence, in order to add this information to our survey DataFrame, we need to perform the following steps:
1. extract the unique genus/species combinations in our dataset and combine them in single column
2. match each of these names to the GBIF API service
3. process the returned message:
* if a match is found, add the information of the columns 'class', 'kingdom', 'order', 'phylum', 'scientificName', 'status' and 'usageKey'
* if no match was found: nan-values
4. Join the DataFrame of unique genus/species information with the enriched GBIF info to the survey_data_plots data set
<div class="alert alert-success">
**EXERCISE**
- Extract the unique combinations of genus and species in the `survey_data_plots` using the function `drop_duplicates()`. Save the result as the variable `unique_species` and remove the `NaN` values using `.dropna()`.
</div>
End of explanation
"""
#%%timeit
unique_species = \
survey_data_plots.groupby(["genus", "species"]).first().reset_index()[["genus", "species"]]
len(unique_species)
"""
Explanation: <div class="alert alert-success">
**EXERCISE**
- Extract the unique combinations of genus and species in the `survey_data_plots` using `groupby`. Save the result as the variable `unique_species`.
<details><summary>Hints</summary>
- As `groupby` needs an aggregation function, this can be `first()` (the first of each group) as well.
- Do not forget to `reset_index` after the `groupby`.
</details>
</div>
End of explanation
"""
unique_species["name"] = unique_species["genus"] + " " + unique_species["species"]
# an alternative approach worthwhile to know:
#unique_species["name"] = unique_species["genus"].str.cat(unique_species["species"], " ")
unique_species.head()
"""
Explanation: <div class="alert alert-success">
**EXERCISE**
- Combine the columns genus and species to a single column with the complete name, save it in a new column named 'name'
</div>
End of explanation
"""
# this will take a bit as we do a request to gbif for each individual species
species_annotated = {}
for key, row in unique_species.iterrows():
species_annotated[key] = name_match(row["genus"], row["species"], strict=True)
#species_annotated # uncomment to see output
"""
Explanation: To perform the matching for each of the combinations, different options exist (remember apply?)
Just to showcase the possibility of using for loops in such a situation, let's do the addition of the matched information with a for loop. First, we will store everything in one dictionary, where the keys of the dictionary are the index values of unique_species (in order to later merge them again) and the values are the entire messages (which are dictionaries on itself). The format will look as following:
species_annotated = {O: {'canonicalName': 'Squamata', 'class': 'Reptilia', 'classKey': 358, ...},
1: {'canonicalName':...},
2:...}
End of explanation
"""
df_species_annotated = pd.DataFrame(species_annotated).transpose()
df_species_annotated.head()
"""
Explanation: We can now transform this to a pandas DataFrame:
<div class="alert alert-success">
**EXERCISE**
- Convert the dictionary `species_annotated` into a pandas DataFrame with the row index the key-values corresponding to `unique_species` and the column headers the output columns of the API response. Save the result as the variable `df_species_annotated`.
<details><summary>Hints</summary>
- The documentation of `pd.DataFrame` says the input van be 'ndarray (structured or homogeneous), Iterable, dict, or DataFrame'.
- `transpose` can be used to flip rows and columns.
</details>
</div>
End of explanation
"""
df_species_annotated_subset = df_species_annotated[['class', 'kingdom', 'order', 'phylum',
'scientificName', 'status', 'usageKey']]
df_species_annotated_subset.head()
"""
Explanation: Select relevant information and add this to the survey data
<div class="alert alert-success">
**EXERCISE**
- Subselect the columns 'class', 'kingdom', 'order', 'phylum', 'scientificName', 'status' and 'usageKey' from the DataFrame `df_species_annotated`. Save it as the variable `df_species_annotated_subset`
</div>
End of explanation
"""
unique_species_annotated = pd.merge(unique_species, df_species_annotated_subset,
left_index=True, right_index=True)
unique_species_annotated.head()
"""
Explanation: <div class="alert alert-success">
**EXERCISE**
- Join the `df_species_annotated_subset` information to the `unique_species` overview of species. Save the result as variable `unique_species_annotated`.
</div>
End of explanation
"""
survey_data_completed = pd.merge(survey_data_plots, unique_species_annotated,
how='left', on= ["genus", "species"])
len(survey_data_completed)
survey_data_completed.head()
"""
Explanation: <div class="alert alert-success">
**EXERCISE**
- Join the `unique_species_annotated` data to the `survey_data_plots` data set, using both the genus and species column as keys. Save the result as the variable `survey_data_completed`.
</div>
End of explanation
"""
survey_data_completed.to_csv("survey_data_completed_.csv", index=False)
"""
Explanation: Congratulations! You did a great cleaning job, save your result:
End of explanation
"""
|
Code-Girls/2016Summer | Day6/code/k-means_clustering.ipynb | mit | # NOTE: we use non-random initializations for the cluster centers
# to make autograding feasible; normally cluster centers would be
# randomly initialized.
data = np.load('data/X.npz')
X = data['X']
centers = data['centers']
print ('X: \n' + str(X))
print ('\ncenters: \n' + str(centers))
"""
Explanation: In this problem we will use $k$-means clustering on a dataset consisting of observations of dogs, cats, and mops. An example observation from each type of object is presented below:
We assume each observation can be represented as a pair of ($x,y$) coordinates, i.e., each object is represented in two-dimensional space. Suppose we have collected some observations from each type of object, but have lost the information as to which instance belongs to which type!
To try and recover this information we will use an unsupervised learning algorithm called k-means clustering. As you may recall from lecture, the $k$ here refers to how many types of clusters we think exist in the data, and the goal of the algorithm is to assign labels to the data points using their distance to the centers (or means) of the clusters. For this particular problem, we assume $k=3$. After randomly initializing cluster centers,
the algorithm can be broken down into two alternating steps:
Update the label assignments of the data points based on the nearest cluster centers
Update the positions of the cluster centers to reflect the updated assignments of data points.
Before you begin, load the data we will be using. For answering the questions in this problem set, use the centers loaded from the X.npz file below (i.e., do NOT randomly initialize the values yourself - the autograder for this problem relies on a "stock" initialization).
End of explanation
"""
k_means??
"""
Explanation: Also, take a look at the imported functions k_means:
End of explanation
"""
def distance(a, b):
"""
Returns the Euclidean distance between two points,
a and b, in R^2.
Parameters
----------
a, b : numpy arrays of shape (2,)
The (x,y) coordinates for two points, a and b,
in R^2. E.g., a[0] is the x coordinate,
and a[1] is the y coordinate.
Returns
-------
distance : float
The Euclidean distance between a and b
"""
### BEGIN SOLUTION
return np.sqrt((a[0]-b[0])**2 + (a[1]-b[1])**2)
### END SOLUTION
# add your own test cases here!
"""Check distances computes the correct values"""
from numpy.testing import assert_allclose
assert_allclose(distance(np.array([0.0, 0.0]), np.array([0.0, 1.0])), 1.0)
assert_allclose(distance(np.array([3.0, 3.0]), np.array([4.3, 5.0])), 2.3853720883753127)
assert_allclose(distance(np.array([130.0, -25.0]), np.array([0.4, 15.0])), 135.63244449614552)
print("Success!")
"""
Explanation: This is the function you will run in Part C once you have completed the helper functions in parts A and B.
Part A (2 points)
First, we will need a function that gives us the distance between two points. We can use Euclidean distance to compute the distance between two points ($x_1,y_1$) and ($x_2,y_2$). Recall that Euclidean distance in $\mathbb{R}^2$ is calculated as:
$$
distance((x_1,y_1),(x_2,y_2)) = \sqrt{(x_1 - x_2)^{2} + (y_1 - y_2)^{2}}
$$
<div class="alert alert-success">
Complete the `distance` function below to calculate the euclidean distance between two points in $\mathbb{R}^2$.
</div>
End of explanation
"""
def update_assignments(num_clusters, X, centers):
"""
Returns the cluster assignment (number) for each data point
in X, computed as the closest cluster center.
Parameters
----------
num_clusters : int
The number of disjoint clusters (i.e., k) in
the X
X : numpy array of shape (m, 2)
An array of m data points in R^2.
centers : numpy array of shape (num_clusters, 2)
The coordinates for the centers of each cluster
Returns
-------
cluster_assignments : numpy array of shape (m,)
An array containing the cluster label assignments
for each data point in X. Each cluster label is an integer
between 0 and (num_clusters - 1).
"""
### BEGIN SOLUTION
cluster_assignments = []
for x in X:
cluster_assignments.append(np.array([distance(x, c) for c in centers]).argmin())
return np.array(cluster_assignments)
### END SOLUTION
# add your own test cases here!
"""Check update_assignments computes the correct values"""
from nose.tools import assert_equal
from numpy.testing import assert_array_equal
# load the data
data = np.load('data/X.npz')
X = data['X']
# validate update_assignments using different values
actual = update_assignments(2, X, np.array([[3, 2], [1, 4]]))
expected = np.array([
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0])
# is the output of the correct shape?
assert_equal(actual.shape[0], X.shape[0])
# are the cluster labels correct?
assert_array_equal(expected, actual)
# validate update_assignments using different values
actual = update_assignments(3, X[:X.shape[0]//2], np.array([X[0], X[1], X[2]]))
expected = np.array([0, 1, 2, 2, 0, 2, 1, 2, 2, 2, 0, 0, 0, 0, 0])
# is the output of the correct shape?
assert_equal(actual.shape[0], X.shape[0] / 2)
# are the cluster labels correct?
assert_array_equal(expected, actual)
# check that it uses distance
old_distance = distance
del distance
try:
update_assignments(2, X, np.array([[3, 2], [1, 4]]))
except NameError:
pass
else:
raise AssertionError("update_assignments does not call distance")
finally:
distance = old_distance
del old_distance
print("Success!")
"""
Explanation: <div class="alert alert-success">Now, we will write a function to update the cluster that each point is assigned to by computing the distance to the center of each cluster. Complete the `update_assignments` function to do this using your `distances` function.</div>
End of explanation
"""
def update_parameters(num_clusters, X, cluster_assignment):
"""
    Recalculates cluster centers based on the cluster assignments produced by update_assignments.
Parameters
----------
num_clusters : int
The number of disjoint clusters (i.e., k) in
the X
X : numpy array of shape (m, 2)
An array of m data points in R^2
cluster_assignment : numpy array of shape (m,)
The array of cluster labels assigned to each data
point as returned from update_assignments
Returns
-------
updated_centers : numpy array of shape (num_clusters, 2)
An array containing the new positions for each of
the cluster centers
"""
### BEGIN SOLUTION
updated_centers = []
for i in np.unique(cluster_assignment):
cluster_idx = np.argwhere(cluster_assignment == i).ravel()
updated_centers.append(np.mean(X[cluster_idx,:], axis=0))
return np.asarray(updated_centers)
### END SOLUTION
# add your own test cases here!
"""Check update_parameters computes the correct values"""
from nose.tools import assert_equal
from numpy.testing import assert_allclose
# load the data
data = np.load('data/X.npz')
X = data['X']
# validate update_assignments using different values
cluster_assignment1 = np.array([
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0])
actual = update_parameters(2, X, cluster_assignment1)
expected = np.array([[ 3.24286584, 2.71362623], [ 2.80577245, 4.07633606]])
assert_allclose(expected, actual)
cluster_assignment2 = np.array([0, 1, 2, 2, 0, 2, 1, 2, 2, 2, 0, 0, 0, 0, 0])
actual = update_parameters(3, X[:X.shape[0]//2], cluster_assignment2)
expected = np.array([[ 3.4914304 , 2.79181724], [ 3.03095255, 2.02958778], [ 2.86686881, 1.76070598]])
assert_allclose(expected, actual, rtol=1e-6)
print("Success!")
"""
Explanation: Part B (1.5 points)
<div class="alert alert-success">Now, we need to do the next step of the clustering algorithm: recompute the cluster centers based on which points are assigned to that cluster. Recall that the new centers are simply the two-dimensional means of each group of data points. A two-dimensional mean is calculated by simply finding the mean of the x coordinates and the mean of the y coordinates. Complete the `update_parameters` function to do this.</div>
End of explanation
"""
# load the edata
data = np.load('data/X.npz')
X = data['X']
centers = data['centers']
# run k-means
cluster_assignments, updated_centers = k_means(3, X, centers, update_assignments, update_parameters, n_iter=4)
"""
Explanation: Part C
At this stage you are ready to run the $k$-means clustering algorithm! The k_means function below will call your functions from Part A and B to run the k-means algorithm on the data points in X. Note that for this problem we assume that $k = 3$.
Call the function like so:
End of explanation
"""
def assign_new_object(new_object, updated_centers):
"""
Returns the cluster label (number) for new_object using k-means
clustering.
Parameters
----------
new_object : numpy array of shape (2,)
The (x,y) coordinates of a new object to be classified
updated_centers : numpy array of shape (num_clusters,2)
An array containing the updated (x,y) coordinates for
each cluster center
Returns
-------
label : int
The cluster label assignment for new_object. This is a
number between 0 and and (num_clusters - 1).
"""
### BEGIN SOLUTION
return np.array([distance(new_object, c) for c in updated_centers]).argmin()
### END SOLUTION
# add your own test cases here!
"""Check assign_new_object computes the correct values"""
from nose.tools import assert_equal
# validate update_assignments using different values
centers1 = np.array([[ 3.17014624, 2.42738134], [ 2.90932354, 4.26426491]])
assert_equal(assign_new_object(np.array([0, 1]), centers1), 0)
assert_equal(assign_new_object(np.array([1, 0]), centers1), 0)
assert_equal(assign_new_object(np.array([3, 2]), centers1), 0)
assert_equal(assign_new_object(np.array([2, 4]), centers1), 1)
centers2 = np.array([[ 3.170146, 2.427381], [ 3.109456, 1.902395], [ 2.964183, 1.827484]])
assert_equal(assign_new_object(np.array([0, 1]), centers2), 2)
assert_equal(assign_new_object(np.array([1, 0]), centers2), 2)
assert_equal(assign_new_object(np.array([3, 2]), centers2), 1)
assert_equal(assign_new_object(np.array([2, 4]), centers2), 0)
# check that it uses distance
old_distance = distance
del distance
try:
    assign_new_object(np.array([0, 1]), centers1)
except NameError:
pass
else:
raise AssertionError("assign_new_object does not call distance")
finally:
distance = old_distance
del old_distance
print("Success!")
"""
Explanation: If the functions you completed above are working properly, you should see a figure containing a subplot of the output from steps (1) and (2) for four iterations of the algorithm. This plot should give you a sense of how the algorithm progresses over time. The data points are each assigned to one of three colors corresponding to their current cluster label. The cluster centers are plotted as stars.
Part D (1 point)
Now that we have assigned cluster labels to each datapoint, let's investigate how we should classify a new object (which we can see is a Shih-Tzu):
<div class="alert alert-success">Complete the function template in `assign_new_object` to determine the appropriate cluster label for this new object.</div>
<div class="alert alert-warning">**Note**: To complete the function, you will need to compute the distance between each cluster center and the new observation. Use the `distance` function from Part A.</div>
End of explanation
"""
# load the data
data = np.load('data/X.npz')
X = data['X']
centers = data['centers']
# run k-means
cluster_assignments, updated_centers = k_means(3, X, centers, update_assignments, update_parameters, n_iter=4)
"""
Explanation: Part E (1.5 points)
Let's go ahead and rerun $k$-means, to make sure we have the correct variables set:
End of explanation
"""
new_object = np.array([3.3, 3.5]) # image coordinates
label = assign_new_object(new_object, updated_centers)
print('The new object was assigned to cluster: ' + str(label))
"""
Explanation: Once you've implemented assign_new_object, give it a spin on the image of the Shih-Tzu:
End of explanation
"""
plot_final(X, cluster_assignments, updated_centers, new_object, assign_new_object)
"""
Explanation: Finally, we can visualize this result against the true assignments using the helper function plot_final:
End of explanation
"""
|
NelisW/ComputationalRadiometry | 12g-Plume-texture.ipynb | mpl-2.0 | # to prepare the environment
import numpy as np
import scipy as sp
import pandas as pd
import os.path
from scipy.optimize import curve_fit
from scipy import interpolate
from scipy import integrate
from scipy import signal
from scipy import ndimage
import matplotlib.pyplot as plt
import scipy.constants as const
import pickle
import collections
%matplotlib inline
# %reload_ext autoreload
# %autoreload 2
import pyradi.ryplot as ryplot
import pyradi.ryplanck as ryplanck
import pyradi.ryfiles as ryfiles
import pyradi.rymodtran as rymodtran
import pyradi.ryutils as ryutils
from IPython.display import HTML
from IPython.display import Image
from IPython.display import display
from IPython.display import FileLink, FileLinks
import matplotlib as mpl
mpl.rc("savefig", dpi=150)
mpl.rc('figure', figsize=(10,8))
# %config InlineBackend.figure_format = 'svg'
# %config InlineBackend.figure_format = 'pdf'
pim = ryplot.ProcessImage()
pd.set_option('display.max_columns', 80)
pd.set_option('display.width', 100)
pd.set_option('display.max_colwidth', 150)
"""
Explanation: Creating a plume radiance texture map
This notebook forms part of a series on computational optical radiometry. The notebooks can be downloaded from Github. These notebooks are constantly revised and updated, please revisit from time to time.
<img src="https://zenodo.org/badge/doi/10.5281/zenodo.9910.png" align="left"/>
The date of this document and module versions used in this document are given at the end of the file.
Feedback is appreciated: neliswillers at gmail dot com.
Introduction
This notebook describes a method to create a radiance texture map of a CO$_\textrm{2}$ aircraft engine plume in the MWIR band, starting from a temperature profile in graph format. In this particular case study the temperature information is shown as spatial iso-temperature lines. Plume radiance is a three-dimensional physical phenomenon, but for this analysis it is simplified to a two-dimensional projection of the radiance.
The steps in this process are as follows:
Scan the graph to determine the $(d,r)$ coordinates for each of the iso-temperature lines. These coordinates are not in a regular grid.
Interpolate the scanned temperature profiles from the scattered coordinates into a regular grid. The regular grid intervals can be set according to the user need. In this case a relatively dense grid is used to obtain an image of the plume radiance.
Calculate the radiance for each of the pixels, using the interpolated temperature values. The radiance is calculated using spectral CO$_\textrm{2}$ emissivity values. The spectral emissivity shape depends on the temperature and pressure of the exhaust gas. Accurate modelling therefore requires pressure profiles (in addition to the temperature profile), and also requires an accurate model of the spectral emissivity as a function of temperature and pressure. The radiance in each pixel is calculated using
\begin{equation}
L(d,r) = \epsilon(T,P,d,r) L_\textrm{bb}(T)
\end{equation}
where
$\epsilon(T,P,d,r)$ is the emissivity at location $(d,r)$ at temperature $T$ and pressure $P$, and
$L_\textrm{bb}(T)$ is the Planck-law radiance at temperature $T$. This calculation does not attempt to calculate the emissivity from first principles; instead, measured information is used to model the emissivity at different temperatures (disregarding pressure effects). To obtain the wideband radiance, the spectral radiance must then be integrated over the sensor spectral bandwidth.
\begin{equation}
L_\textrm{plume}(d,r) = \int_{\Delta\lambda}\tau_\textrm{S}\cdot\epsilon_\textrm{plume}(T_\textrm{plume},d,r)\cdot L_\textrm{bb}(T_\textrm{plume},d,r) \textrm{d} \lambda
\end{equation}
where
$\tau_\textrm{S}$ is the sensor spectral response,
$\epsilon_\textrm{plume}(T_\textrm{plume},d,r)$ is the plume spectral emissivity at temperature $T$ in pixel $(d,r)$, and
$L(T_\textrm{plume},d,r)$ is the Planck-law radiance at temperature $T$ in pixel $(d,r)$.
In our intended application the emissivity texture will be used on a single polygon with a single temperature. The radiance map is then used to calculate a scaling texture map, normalised to the maximum temperature.
\begin{equation}
s(d,r) = L_\textrm{plume}(d,r) / L_\textrm{bb}(T_\textrm{max}).
\end{equation}
Of course this is an approximation, but it is sufficient for the current need.
Set up Python environment
End of explanation
"""
def extractGraph(filename, xmin, xmax, ymin, ymax, outfile=None,doPlot=False,\
xaxisLog=False, yaxisLog=False, step=None, value=None):
"""Scan an image containing graph lines and produce (x,y,value) data.
    This function processes an image, calculates the location of pixels on a
    graph line, and then scales the (r,c) or (x,y) values of pixels with non-zero
    values.
Get a bitmap of the graph (scan or screen capture).
Take care to make the graph x and y axes horizontal/vertical.
The current version of the software does not work with rotated images.
Bitmap edit the graph. Clean the graph to the maximum extent possible,
by removing all the clutter, such that only the line to be scanned is visible.
Crop only the central block that contains the graph box, by deleting
the x and y axes notation and other clutter. The size of the cropped image
must cover the range in x and y values you want to cover in the scan. The
graph image/box must be cut out such that the x and y axes min and max
correspond exactly with the edges of the bitmap.
You must end up with nothing in the image except the line you want
to digitize.
The current version only handles single lines on the graph, but it does
handle vertical and horizontal lines.
The function can also write out a value associated with the (x,y) coordinates
of the graph, as the third column. Normally these would have all the same
value if the line represents an iso value.
The x,y axes can be lin/lin, lin/log, log/lin or log/log, set the flags.
Args:
| filename: name of the image file
| xmin: the value corresponding to the left side (column=0)
| xmax: the value corresponding to the right side (column=max)
| ymin: the value corresponding to the bottom side (row=bottom)
| ymax: the value corresponding to the top side (row=top)
| outfile: write the sampled points to this output file
| doPlot: plot the digitised graph for visual validation
| xaxisLog: x-axis is in log10 scale (min max are log values)
| yaxisLog: y-axis is in log10 scale (min max are log values)
        | step: if not None only output every step'th value
| value: if not None, write this value as the value column
Returns:
| outA: a numpy array with columns (xval, yval, value)
| side effect: a file may be written
| side effect: a graph may be displayed
Raises:
| No exception is raised.
Author: neliswillers@gmail.com
"""
from scipy import ndimage
from skimage.morphology import medial_axis
if doPlot:
import pylab
import matplotlib.pyplot as pyplot
#read image file, as grey scale
img = ndimage.imread(filename, True)
# find threshold 50% up the way
halflevel = img.min() + (img.max()-img.min()) /2
# form binary image by thresholding
img = img < halflevel
#find the skeleton one pixel wide
imgskel = medial_axis(img)
#if doPlot:
# pylab.imshow(imgskel)
# pylab.gray()
# pylab.show()
# set up indices arrays to get x and y indices
ind = np.indices(img.shape)
#skeletonise the graph to one pixel only
#then get the y pixel value, using indices
yval = ind[0,...] * imgskel.astype(float)
#if doPlot:
# pylab.imshow(yval>0)
# pylab.gray()
# pylab.show()
# invert y-axis origin from left top to left bottom
yval = yval.shape[0] - np.max(yval, axis=0)
#get indices for only the pixels where we have data
wantedIdx = np.where(np.sum(imgskel, axis = 0) > 0)
# convert to original graph coordinates
cvec = np.arange(0.0,img.shape[1])
xval = xmin + (cvec[wantedIdx] / img.shape[1]) * (xmax - xmin)
xval = xval.reshape(-1,1)
yval = ymin + (yval[wantedIdx] / img.shape[0]) * (ymax - ymin)
yval = yval.reshape(-1,1)
if xaxisLog:
xval = 10** xval
if yaxisLog:
yval = 10 ** yval
#build the result array
outA = np.hstack((xval,yval))
if value is not None:
outA = np.hstack((outA,value*np.ones(yval.shape)))
# process step intervals
if step is not None:
# collect the first value, every step'th value, and last value
outA = np.vstack((outA[0,:],outA[1:-2:step,:],outA[-1,:]))
#write output file
    if outfile is not None:
np.savetxt(outfile,outA)
if doPlot:
fig = pyplot.figure()
ax=fig.add_subplot(1,1,1)
ax.plot(xval,yval)
if xaxisLog:
ax.set_xscale('log')
if yaxisLog:
ax.set_yscale('log')
pylab.show()
return outA
# to digitise one graph line
def prepareDataPoints(dicLines, gscale):
for i,item in enumerate(dicLines.keys()):
out = extractGraph(item, gscale[0], gscale[1], gscale[2],gscale[3],
None, False, False, False, step=dicLines[item][0], value=dicLines[item][1] )
#convert to Kelvin
out[:,2] += 273
if i==0:
outA = out
else:
outA = np.vstack((outA, out))
return outA
"""
Explanation: Scan the temperature profile
The temperature profile is given in the form of a few graphs of iso-temperature values, redrawn as shown below. It is assumed that these values represent temperatures in the plume as would be measured by a thermocouple (gas temperature). For simplicity it is assumed that the plume is rotationally symmetrical.
Digitising data from a graph is a tiresome and inherently error-prone task. Manual (eyeball) reading the values is not suited for complex graphs, such as spectroscopy data. For this purpose an alternative method was developed to scan an image of the graph using a short Python script. The code is quite basic and therefore the procedure requires some manual preparatory work:
Obtain a PNG image (JPEG may have too much noise) of the graph by scanning or screen grab (Snipping tool on Windows).
Correct the image by removing any warping, skewing or rotation. The end result should present the axes of the graph in a regular rectangle. This task can be completed by using tools such as Photoshop, PrintShopPro, Gimp, ImageMagick or similar.
Remove all clutter from the image that should not appear in the scan. This clutter includes grid lines and all notation on the graph. Retain only the graph to be digitised, because the code is looking for non-zero pixels in the image.
Crop the image such that only the data portion of the graph is present in the image, in other words, the bounds of the image should correspond with the minimum and maximum values of the graph. The image bounds are used to scale the sampled values later. At this point the image should be sized to hold the graph line, and the $(d,r)$ values of the image edges must be known.
Use the Python digitising function extractGraph to extract the scaled coordinates of the sampled image pixels in a text file.
Instead of processing the graph image on bitmap pixel level, some graphs can be redrawn in a tool such as CorelDraw or Inscape. Import the image and manually draw the lines with Bezier curves. Once the line is captured in vector form, export a bitmap with this line, but with an image size required by the graph axes. In CorelDraw I do this by drawing the graph frame as a rectangle on the lower and upper $(d,r)$ graph limits, but making the rectangle line zero width, so as to not show in the image. Of course, this approach is only feasible for simple graphs. This approach was used below.
For alternative means to digitise graphs, see this post, this tool, or this tool.
One of these 'clean-up' bitmaps is shown below. Note that there are no marks in the image other than the line to be scanned. The size of the image corresponds to the graph frame, which in turn corresponds to the graph scale values for $x$ and $y$.
Note that the lines should have slightly different $(r,d)$ values, however small the difference, even if they appear to fall on top of each other (see the figure below). This requirement stems from the fact that all the sample points are eventually combined into a single data set. Points with the same $(r,d)$ coordinates but different temperature values lead to ambiguity in subsequent interpolation. The value last presented in the data array will be used and this may lead to incorrect interpolation. Essentially we must help the interpolation algorithm by ensuring that valid gradients may be calculated. High or infinite gradients can be modelled by shifting the $(r,d)$ locations a minute distance from each other, so as to present numerically feasible and stable gradients.
The graphs are sampled into a scattered set of $(d,r,T)$ coordinates. These coordinate sets are sampled on the lines in the graphs above and are not in a regular grid (the next step creates datasets on regular grids). The pyradi.ryutils.extractGraph function digitises the lines in the above graphs, one line at a time, creating the scatter data set. Subsequent interpolation requires that the full graph domain (distance from tailpipe) and range (radial distance) be covered, hence the graphs above show values for the full $d$ domain.
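For example, a single iso-temperature line can be digitised with a call along the following lines (a sketch using one of the bitmaps and the graph limits listed in the next cell; the step and iso-value shown are the ones used for that file):

    # digitise the 171 degC iso-line: the graph spans 0..85.3 m downstream
    # and 0..3.048 m radially; keep every 5th sample, tag with value 171
    samples = extractGraph('data/plume-171.PNG', 0, 85.3, 0, 3.048,
                           outfile=None, doPlot=False,
                           xaxisLog=False, yaxisLog=False,
                           step=5, value=171)
    # samples has columns (d, r, value)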
End of explanation
"""
#to digitise all the lines in all the graphs, and create one data structure for all data
datasets = {
'Plume' :[{
'data/plume-015a.PNG':[25,15],
'data/plume-015b.PNG':[15,15],
'data/plume-032.PNG':[15,32],
'data/plume-060.PNG':[15,60],
'data/plume-116.PNG':[10,116],
'data/plume-171.PNG':[5,171],
'data/plume-282.PNG':[3,282],
'data/plume-394a.PNG':[3,394],
'data/plume-394b.PNG':[3,394],
},[0, 85.3, 0, 3.048] ],
}
for key in datasets.keys():
datasets[key].append(prepareDataPoints(datasets[key][0],datasets[key][1]))
print(datasets[key][2].shape)
# to plot the sampled temperature iso lines
def plotScatterDataSet(dataset,key, ikey):
z = dataset[2]
p = ryplot.Plotter(ikey,1,1,figsize=(12,3));
p.plot(1,z[:,0], z[:,1], xlabel='Distance from tail pipe m', ylabel='Radial distance m',
ptitle='Samples: {}, temperature K'.format(key),markers=['.'],linestyle='' );
for ikey,key in enumerate(datasets.keys()):
plotScatterDataSet(datasets[key],key, ikey)
"""
Explanation: A dictionary stores the digitised data. The dictionary keys are the descriptions of the different data sets. Each dictionary entry has three elements: (1) a list of the bitmaps for each of the lines, with the sample interval and iso-temperature (in degrees C) for each line, (2) the graph domain and range limits in metres, and (3) the scatter data set as digitised.
The quality of the subsequent grid sampling depends on the fineness of the scattered data. On the other hand, too dense sampling creates too large a data set. By experiment the optimal number of samples was found, as shown below.
End of explanation
"""
# to interpolate the temperature data to a regular grid
def interpolateDataSet(dataset,key, ikey, numDsamples, numRsamples):
# get the scatter data set
z = dataset[2]
#mirror around the centre line
zn = z.copy()
zn[:,1] = -zn[:,1]
z = np.append(z,zn,axis=0)
#mirror around the tail pipe
zn = z.copy()
zn[:,0] = -zn[:,0]
z = np.append(z,zn,axis=0)
#create the regular grid , complex j signifies the number of samples, not step size
xnew, ynew = np.mgrid[-dataset[1][1]:dataset[1][1]:numDsamples * 1j, -dataset[1][3]:dataset[1][3]:numRsamples * 1j]
#interpolate
znew = interpolate.griddata(z[:,0:-1], z[:,-1], (xnew, ynew), method='linear', rescale=True)
# select only upper-right quadrant, behind tailpipe and above centreline
select = xnew.__ge__(0) & xnew.__le__(80.) & ynew.__ge__(0)
#number of selected points along the plume centreline:
numd = np.sum(select[:, -1])
#select and reshape back into 2D
xnew = xnew[select].reshape(numd,-1)
ynew = ynew[select].reshape(numd,-1)
znew = znew[select].reshape(numd,-1)
p = ryplot.Plotter(ikey,1,1,figsize=(12,3));
p.meshContour(1, xnew, ynew, znew, xlabel='Distance from tail pipe $d$ m', ylabel='Radial distance $r$ m',
ptitle='{}, temperature K'.format(key), levels=100, contourLine=False, cbarshow=True);
# remove the plot frame to not obscure data
cp = p.getSubPlot(1)
cp.spines['top'].set_visible(False)
cp.spines['right'].set_visible(False)
cp.spines['bottom'].set_visible(False)
cp.spines['left'].set_visible(False)
return xnew, ynew, znew
for ikey,key in enumerate(datasets.keys()):
resolution = 0.1 # metre
numd = 2 * np.max(datasets[key][2][:,0]) / resolution
numr = 2 * np.max(datasets[key][2][:,1]) / resolution
xnew, ynew, znew = interpolateDataSet(datasets[key],key, ikey, numd, numr)
datasets[key].append([xnew, ynew, znew])
# at this point the data set dictionary has been appended to, and contains the following:
# datasets[key][0]: list of the input graph lines
# datasets[key][1]: list with the graph domain and range coverage
# datasets[key][2]: single numpy array with _all_ the sampled points (d,r,T)
# datasets[key][3]: list of arrays with regular grid data [x-mesh-grid, y-mesh-grid, temperature]
"""
Explanation: Interpolate the scanned temperature profiles
The scattered point digitised data sets must now be interpolated and resampled on a regular grid. This is done using the scipy.interpolate.griddata function. To create symmetry in the boundary conditions, the scatter data set is mirrored along both axes. Linear interpolation works best; cubic interpolation results in too many artefacts (possibly because of the sparse input data). After interpolation only the rear half, behind the tailpipe, is retained.
End of explanation
"""
# to load and plot the sensor responses
sensors = ['Unity3_5.txt']
titles = ['MWIR']
q = ryplot.Plotter(1,1,1,figsize=(12,4))
sensorSpcs = {}
for i, (sensor, title) in enumerate(zip(sensors, titles) ):
sdata = np.loadtxt('data/{}'.format(sensor),comments='%')
sensorSpcs[sensor] = [sdata]
wn = sensorSpcs[sensor][0][:,1]
taus = sensorSpcs[sensor][0][:,2]
q.plot(1,wn,taus,xlabel='Wavenumber cm$^{-1}$', ylabel='Response', ptitle='Sensor response',
pltaxis=[500, 4000, 0, 1.1],label=[title],plotCol=[q.plotCol[i]])
"""
Explanation: At this point the temperatures are available on a fine regular grid as a function of downstream distance and radial displacement from the centre line. The next step is to convert this to radiance. All imported and calculated data are stored in the datasets dictionary, for reuse later.
Sensor spectral response
The sensor spectral responses are used to support spectral integration.
End of explanation
"""
#to digitise emissivity graphs and fit curves to emissivity model
def prepareEmis(dicLines, gscale):
for i,item in enumerate(dicLines.keys()):
out = ryutils.extractGraph(item, gscale[0], gscale[1], gscale[2],gscale[3],
None, False, False, False, step=dicLines[item][0], value=dicLines[item][1] )
if i==0:
outA = out
else:
outA = np.vstack((outA, out))
return outA
emissets = {
300: [
{'data/emis-MWIR-300K.png':[10,None]},
[1600., 3400, 0., 1.0], [2335, 80]
],
800: [
{'data/emis-MWIR-800K.png':[10,None]},
[1600., 3400, 0., 1.0], [2308, 191]
],
}
p = ryplot.Plotter(1,1,1,figsize=(12,4));
for ikey,key in enumerate(emissets.keys()):
emissets[key].append(prepareEmis(emissets[key][0],emissets[key][1]))
p.plot(1,emissets[key][3][:,0],emissets[key][3][:,1],label=['Emissivity {} K eyeball estimate'.format(key)]);
emisF =ryutils.sfilter(emissets[key][3][:,0],center=emissets[key][2][0],
width=emissets[key][2][1], exponent=3, taupass=1, taustop=0.)
p.plot(1,emissets[key][3][:,0],emisF,
label=[r'Emissivity {} K fitted: $\tilde{{\nu}}_c$ {} $\Delta\tilde{{\nu}}$ {} cm$^{{-1}}$ '.format(key,emissets[key][2][0],emissets[key][2][1] )]);
emis = np.loadtxt('data/emis-800K-MWIR.txt')
p.plot(1, emis[:,1],emis[:,2],'Emissivity; measured and model','Wavenumber cm$^{-1}$',
'Emissivity -',label=['Measured value at 800 K, near tailpipe']) ;
# to calculate spectral emissivity at different temperatures
def emistemp(emissets, wn, temperature):
wncent = np.interp(temperature, np.asarray([300,800]), np.asarray([emissets[300][2][0],emissets[800][2][0]]))
wnwid = np.interp(temperature, np.asarray([300,800]), np.asarray([emissets[300][2][1],emissets[800][2][1]]))
emisF =ryutils.sfilter(wn,center=wncent,width=wnwid, exponent=3, taupass=1, taustop=0.)
return emisF, wncent, wnwid
def emisMWIR(wn, temperature):
emisF,_,_ = emistemp(emissets, wn, temperature)
return emisF
emisFnLU = {'MWIR': emisMWIR}
wn = emissets[300][3][:,0]
p = ryplot.Plotter(1,1,1,figsize=(12,4));
for temperature in [300, 400, 500, 600, 700, 800]:
emisF, wncent,wnwid = emistemp(emissets, wn, temperature)
p.plot(1,wn,emisF,'Emissivity model','Wavenumber cm$^{-1}$','Emissivity -',
label=[r'Emissivity {} K fitted: $\tilde{{\nu}}_c$ {} $\Delta\tilde{{\nu}}$ {} cm$^{{-1}}$ '.format(temperature,wncent,wnwid )]);
# emisF = emisFnLU['MWIR'](wn, temperature)
# p.plot(1,wn,emisF,'Emissivity model','Wavenumber cm$^{-1}$',
# 'Emissivity -',label=[r'Emissivity {} K fitted'.format(temperature)]);
"""
Explanation: Plume emissivity
In the code used for emissivity calculation, a dictionary of function names is used to calculate the spectral emissivity. The dictionary uses the spectral band (NIR,SWIR,MWIR,LWIR) as key and takes wavenumber and temperature as variables. The code is implemented and used as shown in the following pseudocode:
def emisLWIR(wn, temperature=None):
.....
return emiss
def emisMWIR(wn, temperature=None):
.....
return emiss
emisFnLU = {'LWIR': emisLWIR, 'MWIR': emisMWIR}
When a spectral emissivity is required it is calculated as follows:
wn = np.linspace(...)
emislw = emisFnLU['LWIR'](wn,300)
emismw = emisFnLU['MWIR'](wn,300)
Plume temperature is only used in the MWIR calculation and ignored in the other spectral bands.
MWIR plume spectral emissivity
The plume spectral emissivity is determined as follows:
A measured emissivity near the tail pipe for a jet engine at 800 K is available (red line in the graph below). This curve shows the pressure and temperature broadened emissivity over a distance of 100 m through the atmosphere. From this graph two important extreme values for emissivity can be determined: the emissivity at 800 K, given by the outer shape of the curve and the emissivity at near-atmospheric temperatures given by the atmospheric attenuation (the inner shape).
Determine approximate shapes for the 800 K and 300 K gas masses (see the graph below).
Fit a curve to the two extreme emissivity shapes and determine the centre and width of these shapes.
Given the centre and width at the two extremes calculate the emissivity shape by linear interpolation of the centre and width at the required temperature. The exact law to be used for this interpolation is not known and a simple linear interpolation method is used.
Given the temperature-derived centre and width, calculate the emissivity at that temperature and use it for the radiance calculation.
End of explanation
"""
# to get software versions
# https://github.com/rasbt/watermark
# you only need to do this once
# pip install watermark
%load_ext watermark
%watermark -v -m -p numpy,scipy,pyradi -g
"""
Explanation: Plume radiance calculation
\begin{equation}
L_\textrm{plume}(d,r) = \int_{\Delta\lambda}\tau_\textrm{S}\cdot\epsilon_\textrm{plume}(T_\textrm{plume},d,r)\cdot L_\textrm{bb}(T_\textrm{plume},d,r) \textrm{d} \lambda
\end{equation}
At this point we have the sensor response, models for emissivity and the plume spatial temperature distribution. The spectral integral can now be calculated for each pixel. Observe that the pixel location only provides temperature and that the integral can be simplified into the following steps:
Calculate this integral and create a lookup table of temperature to radiance.
\begin{equation}
f_{L_\textrm{plume}}: T_\textrm{plume} \rightarrow \int_{\Delta\lambda}\tau_\textrm{S}\cdot\epsilon_\textrm{plume}(T_\textrm{plume})\cdot L_\textrm{bb}(T_\textrm{plume}) \textrm{d} \lambda
\end{equation}
Look up the temperature at location $(d,r)$, and then look up the radiance at this temperature
\begin{equation}
f_{T_\textrm{plume}}: (d,r) \rightarrow T_\textrm{plume}
\end{equation}
\begin{equation}
L_\textrm{plume}(d,r) = f_{L_\textrm{plume}}(f_{T_\textrm{plume}}(d,r) )
\end{equation}
To be completed ....
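In the meantime, a minimal sketch of the lookup-table idea is given below (illustrative only: it assumes a fixed MWIR band in wavenumber, unit emissivity and unit sensor response, and uses Planck's law directly rather than the emissivity and sensor models defined above):

    import numpy as np
    import scipy.constants as const

    def band_radiance(T, wn=np.linspace(1800., 3400., 200)):
        # Planck spectral radiance in the wavenumber domain, integrated over
        # the band; emissivity and sensor response assumed to be 1 here
        wn_m = wn * 100.0   # cm-1 to m-1
        L = 2 * const.h * const.c**2 * wn_m**3 / \
            (np.exp(const.h * const.c * wn_m / (const.k * T)) - 1.0)
        return np.trapz(L, wn_m)

    # step 1: lookup table  f: T -> band radiance
    T_lut = np.linspace(300., 800., 51)
    L_lut = np.array([band_radiance(T) for T in T_lut])

    # step 2: map the gridded temperatures (znew above) to radiance
    # L_plume = np.interp(znew, T_lut, L_lut)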
Python and module versions, and dates
End of explanation
"""
|
merryjman/astronomy | templateGraphing.ipynb | gpl-3.0 | # import software packages
import pandas as pd
import numpy as np
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
inline_rc = dict(mpl.rcParams)
"""
Explanation: Data Analysis Template
This notebook is a template for data analysis and includes some useful code for calculations and plotting.
End of explanation
"""
# enter column labels and raw data (with same # of values)
table1 = pd.DataFrame.from_items([
('column1', [0,1,2,3]),
('column2', [0,2,4,6])
])
# display data table
table1
"""
Explanation: Raw data
This is an example of making a data table.
End of explanation
"""
# Uncomment the next line to make your graphs look like xkcd.com
#plt.xkcd()
# to make normal-looking plots again execute:
#mpl.rcParams.update(inline_rc)
# set variables = data['column label']
x = table1['column1']
y = table1['column2']
# this makes a scatterplot of the data
# plt.scatter(x values, y values)
plt.scatter(x, y)
plt.title("?")
plt.xlabel("?")
plt.ylabel("?")
plt.autoscale(tight=True)
# calculate a trendline equation
# np.polyfit( x values, y values, polynomial order)
trend1 = np.polyfit(x, y, 1)
# plot trendline
# plt.plot(x values, y values, other parameters)
plt.plot(x, np.poly1d(trend1)(x), label='trendline')
plt.legend(loc='upper left')
# display the trendline's coefficients (slope, y-int)
trend1
"""
Explanation: Plotting
End of explanation
"""
# create a new empty column
table1['column3'] = ''
table1
"""
Explanation: Do calculations with the data
End of explanation
"""
# np.diff() calculates the difference between a value and the one after it
z = np.diff(x)
# fill column 3 with values from the formula (z) above:
table1['column3'] = pd.DataFrame.from_items([('', z)])
# display the data table
table1
# NaN and Inf values cause problems with math and plotting.
# Make a new table using only selected rows and columns
table2 = table1.loc[0:2,['column1', 'column2', 'column3']] # this keeps rows 0 through 2
table2
# set new variables to plot
x2 = table2['column1']
y2 = table2['column3']
"""
Explanation: Here's an example of calculating the difference between successive values in a column:
End of explanation
"""
# code for plotting table2 can go here
"""
Explanation: Now you can copy the code above to plot your new data table.
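For example, a minimal sketch following the same pattern as the plotting cell above (fill in the labels as appropriate):

    plt.scatter(x2, y2)
    plt.title("?")
    plt.xlabel("?")
    plt.ylabel("?")
    plt.autoscale(tight=True)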
End of explanation
"""
|
hankcs/HanLP | plugins/hanlp_demo/hanlp_demo/zh/tutorial.ipynb | apache-2.0 | !pip install hanlp_restful
"""
Explanation: Welcome to the HanLP interactive online environment. This is a Jupyter notebook in which you can enter arbitrary Python code and run it online. Please click [Run] in the top-left corner to run this NLP tutorial.
Installation
HanLP is tailored to your needs: it provides two APIs, RESTful (cloud) and native (local), aimed at lightweight and large-scale scenarios respectively. Whichever API and language you choose, the HanLP interfaces are semantically identical, so you can pick either API to run this tutorial.
Lightweight RESTful API
Only a few KB in size, suitable for agile development, mobile apps and similar scenarios. Easy to use, with no GPU or environment setup required; strongly recommended, and it installs in seconds:
End of explanation
"""
from hanlp_restful import HanLPClient
HanLP = HanLPClient('https://www.hanlp.com/api', auth=None, language='zh')  # leave auth as None for anonymous access; language='zh' for Chinese, 'mul' for multilingual
"""
Explanation: Create a client and fill in the server address:
End of explanation
"""
doc = HanLP.parse("2021年HanLPv2.1为生产环境带来次世代最先进的多语种NLP技术。阿婆主来到北京立方庭参观自然语义科技公司。")
print(doc)
"""
Explanation: Call the parse interface, pass in a piece of text, and get HanLP's accurate analysis results.
End of explanation
"""
doc.pretty_print()
"""
Explanation: Visualization
The output is a JSON-serializable dict whose keys are NLP task names and whose values are the analysis results. For the meaning of the annotation tag sets, please refer to the Linguistic Annotation Guidelines and the Format Specification. We purchased, annotated or adopted the largest and most diverse corpora in the world for joint multilingual multi-task learning, so HanLP's tag sets also have the broadest coverage. With doc.pretty_print you get a visualization in a fixed-width font environment; you need to disable line wrapping to keep the visualization aligned. We have also released an HTML visualization that aligns Chinese automatically in Jupyter Notebook.
End of explanation
"""
!pip install hanlp -U
"""
Explanation: Applying for an API key
Since server computing power is limited, anonymous users are limited to 2 calls per minute. If you need more calls, consider applying for a free public-service API key (auth).
Large-scale native API
Built on deep learning frameworks such as PyTorch and TensorFlow, it suits professional NLP engineers, researchers, and local large-scale data scenarios. It requires Python 3.6 or above; Windows is supported and *nix is recommended. It can run on a CPU, but a GPU/TPU is recommended.
Whether on Windows, Linux or macOS, HanLP can be installed with a single command.
End of explanation
"""
import hanlp
hanlp.pretrained.mtl.ALL  # MTL multi-task models; the tasks are given in the model name, the language by the last field of the name or the corresponding corpus
"""
Explanation: Loading models
HanLP's workflow is to first load a model; model identifiers are stored in the hanlp.pretrained package, organized by NLP task.
End of explanation
"""
HanLP = hanlp.load(hanlp.pretrained.mtl.CLOSE_TOK_POS_NER_SRL_DEP_SDP_CON_ELECTRA_BASE_ZH)
"""
Explanation: Call hanlp.load to load a model; it will be downloaded automatically to the local cache. Natural language processing is divided into many tasks, and tokenization is only the most basic one. Rather than creating a separate model for every task, it is better to use HanLP's joint model to perform multiple tasks at once:
End of explanation
"""
doc = HanLP(['2021年HanLPv2.1为生产环境带来次世代最先进的多语种NLP技术。', '阿婆主来到北京立方庭参观自然语义科技公司。'])
print(doc)
"""
Explanation: Multi-task batch analysis
Once the client has been created, or the model has been loaded, you can pass in one or more sentences for analysis:
End of explanation
"""
doc.pretty_print()
"""
Explanation: Visualization
The output is a JSON-serializable dict whose keys are NLP task names and whose values are the analysis results. For the meaning of the annotation tag sets, please refer to the Linguistic Annotation Guidelines and the Format Specification. We purchased, annotated or adopted the largest and most diverse corpora in the world for joint multilingual multi-task learning, so HanLP's tag sets also have the broadest coverage. With doc.pretty_print you get a visualization in a fixed-width font environment; you need to disable line wrapping to keep the visualization aligned. We have also released an HTML visualization that aligns Chinese automatically in Jupyter Notebook.
End of explanation
"""
HanLP('阿婆主来到北京立方庭参观自然语义科技公司。', tasks='tok').pretty_print()
"""
Explanation: Specifying tasks
The concise interface also supports flexible parameters; the fewer the tasks, the faster the analysis. For example, to run tokenization only:
End of explanation
"""
HanLP('阿婆主来到北京立方庭参观自然语义科技公司。', tasks='tok/coarse').pretty_print()
"""
Explanation: Perform coarse-grained tokenization
End of explanation
"""
HanLP('阿婆主来到北京立方庭参观自然语义科技公司。', tasks='pos/pku').pretty_print()
"""
Explanation: Perform tokenization and PKU part-of-speech tagging
End of explanation
"""
HanLP('阿婆主来到北京立方庭参观自然语义科技公司。', tasks=['tok/coarse', 'pos/pku'], skip_tasks='tok/fine').pretty_print()
"""
Explanation: Perform coarse-grained tokenization and PKU part-of-speech tagging
End of explanation
"""
HanLP('阿婆主来到北京立方庭参观自然语义科技公司。', tasks='ner/msra').pretty_print()
"""
Explanation: Perform tokenization and MSRA-standard NER
End of explanation
"""
doc = HanLP('阿婆主来到北京立方庭参观自然语义科技公司。', tasks=['pos', 'dep'])
doc.pretty_print()
"""
Explanation: Perform tokenization, part-of-speech tagging and dependency parsing
End of explanation
"""
print(doc.to_conll())
"""
Explanation: Convert to CoNLL format:
End of explanation
"""
doc = HanLP('阿婆主来到北京立方庭参观自然语义科技公司。', tasks=['pos', 'con'])
doc.pretty_print()
"""
Explanation: Perform tokenization, part-of-speech tagging and constituency parsing
End of explanation
"""
print(doc['con'])  # str(doc['con']) converts the constituency structure to its bracketed form
"""
Explanation: Print the constituency tree in bracketed form
End of explanation
"""
ja = hanlp.load(hanlp.pretrained.mtl.NPCMJ_UD_KYOTO_TOK_POS_CON_BERT_BASE_CHAR_JA)
ja(['2021年、HanLPv2.1は次世代の最先端多言語NLP技術を本番環境に導入します。',
'奈須きのこは1973年11月28日に千葉県円空山で生まれ、ゲーム制作会社「ノーツ」の設立者だ。',]).pretty_print()
"""
Explanation: For the meaning of the annotation tag sets, please refer to the Linguistic Annotation Guidelines and the Format Specification. We purchased, annotated or adopted the largest and most diverse corpora in the world for joint multilingual multi-task learning, so HanLP's tag sets also have the broadest coverage.
Multilingual support
In short, the tasks parameter lets you flexibly invoke all kinds of NLP tasks. Besides the Chinese joint model, you can find models for many other languages in the documentation, for example Japanese:
End of explanation
"""
from hanlp.utils.torch_util import gpus_available
if gpus_available():
mul = hanlp.load(hanlp.pretrained.mtl.UD_ONTONOTES_TOK_POS_LEM_FEA_NER_SRL_DEP_SDP_CON_XLMR_BASE)
mul(['In 2021, HanLPv2.1 delivers state-of-the-art multilingual NLP techniques to production environments.',
'2021年、HanLPv2.1は次世代の最先端多言語NLP技術を本番環境に導入します。',
'2021年 HanLPv2.1为生产环境带来次世代最先进的多语种NLP技术。']).pretty_print()
else:
    print(f'It is recommended to run XLMR_BASE in a GPU environment.')
"""
Explanation: As well as the multilingual joint model that supports 104 languages:
End of explanation
"""
|
mathLab/RBniCS | tutorials/07_nonlinear_elliptic/tutorial_nonlinear_elliptic_deim.ipynb | lgpl-3.0 | from dolfin import *
from rbnics import *
"""
Explanation: Tutorial 07 - Non linear Elliptic problem
Keywords: DEIM, POD-Galerkin
1. Introduction
In this tutorial, we consider a non linear elliptic problem in a two-dimensional spatial domain $\Omega=(0,1)^2$. We impose a homogeneous Dirichlet condition on the boundary $\partial\Omega$. The source term is characterized by the following expression
$$
g(\boldsymbol{x}; \boldsymbol{\mu}) = 100\sin(2\pi x_0)\cos(2\pi x_1) \quad \forall \boldsymbol{x} = (x_0, x_1) \in \Omega.
$$
This problem is characterized by two parameters. The first parameter $\mu_0$ controls the strength of the sink term and the second parameter $\mu_1$ the strength of the nonlinearity. The range of the two parameters is the following:
$$
\mu_0,\mu_1\in[0.01,10.0]
$$
The parameter vector $\boldsymbol{\mu}$ is thus given by
$$
\boldsymbol{\mu} = (\mu_0,\mu_1)
$$
on the parameter domain
$$
\mathbb{P}=[0.01,10]^2.
$$
In order to obtain a faster approximation of the problem, we pursue a model reduction by means of a POD-Galerkin reduced order method. In order to preserve the affinity assumption the discrete empirical interpolation method will be used on the forcing term $g(\boldsymbol{x}; \boldsymbol{\mu})$.
2. Parametrized formulation
Let $u(\boldsymbol{\mu})$ be the solution in the domain $\Omega$.
The strong formulation of the parametrized problem is given by:
<center>for a given parameter $\boldsymbol{\mu}\in\mathbb{P}$, find $u(\boldsymbol{\mu})$ such that</center>
$$ -\nabla^2u(\boldsymbol{\mu})+\frac{\mu_0}{\mu_1}(\exp{\mu_1u(\boldsymbol{\mu})}-1)=g(\boldsymbol{x}; \boldsymbol{\mu})$$
<br>
The corresponding weak formulation reads:
<center>for a given parameter $\boldsymbol{\mu}\in\mathbb{P}$, find $u(\boldsymbol{\mu})\in\mathbb{V}$ such that</center>
$$a\left(u(\boldsymbol{\mu}),v;\boldsymbol{\mu}\right)+c\left(u(\boldsymbol{\mu}),v;\boldsymbol{\mu}\right)=f(v;\boldsymbol{\mu})\quad \forall v\in\mathbb{V}$$
where
the function space $\mathbb{V}$ is defined as
$$
\mathbb{V} = \{v\in H^1(\Omega) : v|_{\partial\Omega}=0\}
$$
the parametrized bilinear form $a(\cdot, \cdot; \boldsymbol{\mu}): \mathbb{V} \times \mathbb{V} \to \mathbb{R}$ is defined by
$$a(u, v;\boldsymbol{\mu})=\int_{\Omega} \nabla u\cdot \nabla v \ d\boldsymbol{x},$$
the parametrized bilinear form $c(\cdot, \cdot; \boldsymbol{\mu}): \mathbb{V} \times \mathbb{V} \to \mathbb{R}$ is defined by
$$c(u, v;\boldsymbol{\mu})=\mu_0\int_{\Omega} \frac{1}{\mu_1}\big(\exp{\mu_1u} - 1\big)v \ d\boldsymbol{x},$$
the parametrized linear form $f(\cdot; \boldsymbol{\mu}): \mathbb{V} \to \mathbb{R}$ is defined by
$$f(v; \boldsymbol{\mu})= \int_{\Omega}g(\boldsymbol{x}; \boldsymbol{\mu})v \ d\boldsymbol{x}.$$
The output of interest $s(\boldsymbol{\mu})$, given by
$$s(\boldsymbol{\mu}) = \int_{\Omega} u(\boldsymbol{\mu}) \ d\boldsymbol{x},$$
is computed for each $\boldsymbol{\mu}$.
End of explanation
"""
@DEIM("online", basis_generation="Greedy")
@ExactParametrizedFunctions("offline")
class NonlinearElliptic(NonlinearEllipticProblem):
# Default initialization of members
def __init__(self, V, **kwargs):
# Call the standard initialization
NonlinearEllipticProblem.__init__(self, V, **kwargs)
# ... and also store FEniCS data structures for assembly
assert "subdomains" in kwargs
assert "boundaries" in kwargs
self.subdomains, self.boundaries = kwargs["subdomains"], kwargs["boundaries"]
self.du = TrialFunction(V)
self.u = self._solution
self.v = TestFunction(V)
self.dx = Measure("dx")(subdomain_data=self.subdomains)
self.ds = Measure("ds")(subdomain_data=self.boundaries)
# Store the forcing term expression
self.f = Expression("sin(2*pi*x[0])*sin(2*pi*x[1])", element=self.V.ufl_element())
# Customize nonlinear solver parameters
self._nonlinear_solver_parameters.update({
"linear_solver": "mumps",
"maximum_iterations": 20,
"report": True
})
# Return custom problem name
def name(self):
return "NonlinearEllipticDEIM"
# Return theta multiplicative terms of the affine expansion of the problem.
@compute_theta_for_derivatives
def compute_theta(self, term):
mu = self.mu
if term == "a":
theta_a0 = 1.
return (theta_a0,)
elif term == "c":
theta_c0 = mu[0]
return (theta_c0,)
elif term == "f":
theta_f0 = 100.
return (theta_f0,)
elif term == "s":
theta_s0 = 1.0
return (theta_s0,)
else:
raise ValueError("Invalid term for compute_theta().")
# Return forms resulting from the discretization of the affine expansion of the problem operators.
@assemble_operator_for_derivatives
def assemble_operator(self, term):
v = self.v
dx = self.dx
if term == "a":
du = self.du
a0 = inner(grad(du), grad(v)) * dx
return (a0,)
elif term == "c":
u = self.u
mu = self.mu
c0 = (exp(mu[1] * u) - 1) / mu[1] * v * dx
return (c0,)
elif term == "f":
f = self.f
f0 = f * v * dx
return (f0,)
elif term == "s":
s0 = v * dx
return (s0,)
elif term == "dirichlet_bc":
bc0 = [DirichletBC(self.V, Constant(0.0), self.boundaries, 1)]
return (bc0,)
elif term == "inner_product":
du = self.du
x0 = inner(grad(du), grad(v)) * dx
return (x0,)
else:
raise ValueError("Invalid term for assemble_operator().")
# Customize the resulting reduced problem
@CustomizeReducedProblemFor(NonlinearEllipticProblem)
def CustomizeReducedNonlinearElliptic(ReducedNonlinearElliptic_Base):
class ReducedNonlinearElliptic(ReducedNonlinearElliptic_Base):
def __init__(self, truth_problem, **kwargs):
ReducedNonlinearElliptic_Base.__init__(self, truth_problem, **kwargs)
self._nonlinear_solver_parameters.update({
"report": True,
"line_search": "wolfe"
})
return ReducedNonlinearElliptic
"""
Explanation: 3. Affine Decomposition
For this problem the affine decomposition is straightforward:
$$a(u,v;\boldsymbol{\mu})=\underbrace{1}_{\Theta^{a}_0(\boldsymbol{\mu})}\underbrace{\int_{\Omega}\nabla u \cdot \nabla v \ d\boldsymbol{x}}_{a_0(u,v)},$$
$$c(u,v;\boldsymbol{\mu})=\underbrace{\mu_0}_{\Theta^{c}_0(\boldsymbol{\mu})}\underbrace{\int_{\Omega}\frac{1}{\mu_1}\big(\exp{\mu_1u} - 1\big)v \ d\boldsymbol{x}}_{c_0(u,v)},$$
$$f(v; \boldsymbol{\mu}) = \underbrace{100}_{\Theta^{f}_0(\boldsymbol{\mu})} \underbrace{\int_{\Omega}\sin(2\pi x_0)\cos(2\pi x_1)v \ d\boldsymbol{x}}_{f_0(v)}.$$
We will implement the numerical discretization of the problem in the class
class NonlinearElliptic(NonlinearEllipticProblem):
by specifying the coefficients $\Theta^{a}(\boldsymbol{\mu})$, $\Theta^{c}(\boldsymbol{\mu})$ and $\Theta^{f}(\boldsymbol{\mu})$ in the method
def compute_theta(self, term):
and the bilinear forms $a(u, v)$, $c(u, v)$ and linear forms $f(v)$ in
def assemble_operator(self, term):
End of explanation
"""
mesh = Mesh("data/square.xml")
subdomains = MeshFunction("size_t", mesh, "data/square_physical_region.xml")
boundaries = MeshFunction("size_t", mesh, "data/square_facet_region.xml")
"""
Explanation: 4. Main program
4.1. Read the mesh for this problem
The mesh was generated by the data/generate_mesh.ipynb notebook.
End of explanation
"""
V = FunctionSpace(mesh, "Lagrange", 1)
"""
Explanation: 4.2. Create Finite Element space (Lagrange P1)
End of explanation
"""
problem = NonlinearElliptic(V, subdomains=subdomains, boundaries=boundaries)
mu_range = [(0.01, 10.0), (0.01, 10.0)]
problem.set_mu_range(mu_range)
"""
Explanation: 4.3. Allocate an object of the NonlinearElliptic class
End of explanation
"""
reduction_method = PODGalerkin(problem)
reduction_method.set_Nmax(20, DEIM=21)
reduction_method.set_tolerance(1e-8, DEIM=1e-4)
"""
Explanation: 4.4. Prepare reduction with a POD-Galerkin method
End of explanation
"""
reduction_method.initialize_training_set(50, DEIM=60)
reduced_problem = reduction_method.offline()
"""
Explanation: 4.5. Perform the offline phase
End of explanation
"""
online_mu = (0.3, 9.0)
reduced_problem.set_mu(online_mu)
reduced_solution = reduced_problem.solve()
plot(reduced_solution, reduced_problem=reduced_problem)
"""
Explanation: 4.6. Perform an online solve
End of explanation
"""
reduction_method.initialize_testing_set(50, DEIM=60)
reduction_method.error_analysis()
"""
Explanation: 4.7. Perform an error analysis
End of explanation
"""
reduction_method.speedup_analysis()
"""
Explanation: 4.8. Perform a speedup analysis
End of explanation
"""
|
ibm-et/defrag2015 | notebooks/doodle.ipynb | mit | import requests
api_key = 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX'
url = "https://api.meetup.com/2/open_events"
params = {'topic':'bluemix', 'key':api_key}
r = requests.get(url, params=params)
r.raise_for_status()
resp = r.json()
resp.keys()
"""
Explanation: Upcoming Bluemix Meetups
http://www.meetup.com/meetup_api/docs/2/open_events/
End of explanation
"""
results = resp['results']
import pandas as pd
results[0]
"""
Explanation: All the data is in results so pull just that out for ease of reference.
End of explanation
"""
df = pd.DataFrame(results)
df.head(1)
df[['name', 'event_url', 'venue', 'yes_rsvp_count']]
"""
Explanation: Jamming it into a DataFrame to get the nice table layout. Nested objects will be "eh".
End of explanation
"""
|
csc-training/python-introduction | notebooks/answers/3 - Control Structures.ipynb | mit | breakfast = ["sausage", "eggs", "bacon", "spam"]
for item in breakfast:
print(item)
"""
Explanation: Control Structures
Simple for loop
Write a for loop which iterates over the list of breakfast items "sausage", "eggs", "bacon" and "spam" and prints out the name of item
End of explanation
"""
squares = []
for i in range(1, 10, 2):
squares.append(i**2)
print(squares)
"""
Explanation: Next, write a for loop which determines the squares of the odd
integers up to 10. Use the range() function.
End of explanation
"""
fruits = {'banana' : 5, 'strawberry' : 7, 'pineapple' : 3}
for fruit in fruits:
print(fruit)
"""
Explanation: Looping through a dictionary
Write a loop that prints out the names of the fruits in the dictionary containing the fruit prices.
End of explanation
"""
sum = 0
for price in fruits.values():
sum += price
print(sum)
"""
Explanation: Next, write a loop that sums up the prices.
End of explanation
"""
f = [0, 1]
while True:
new = f[-1] + f[-2]
if new > 100:
break
f.append(new)
print(f)
"""
Explanation: While loop
Fibonacci numbers are a sequence of integers defined by the recurrence relation
F[n] = F[n-1] + F[n-2]
with the initial values F[0]=0, F[1]=1.
Create a list of Fibonacci numbers F[n] < 100 using a while loop.
End of explanation
"""
number = 7
if number < 0:
print("Negative")
elif number == 0:
print("Zero")
elif number in [3, 5, 7, 11, 17]:
print("Prime")
"""
Explanation: If - else
Write a control structure which checks whether an integer is
negative, zero, or belongs to the prime numbers 3,5,7,11,17
and performs e.g. a corresponding print statement.
Use the keyword in when checking for membership of the prime numbers.
End of explanation
"""
xys = [[2, 3], [0, -1], [4, -2], [1, 6]]
tmp = []
for x, y in xys:
tmp.append([y,x])
tmp.sort()
for i, (y,x) in enumerate(tmp):
xys[i] = [x,y]
print(xys)
"""
Explanation: Advanced exercises
Don't worry if you don't have time to finish all of these. They are not essential.
Looping through multidimensional lists
Start from a two dimensional list of (x,y) value pairs, and sort it according to y values. (Hint: you may need to create a temporary list).
End of explanation
"""
ys = []
for x, y in xys:
ys.append(y)
print(ys)
"""
Explanation: Next, create a new list containing only the sorted y values.
End of explanation
"""
sums = []
for x, y in xys:
if x > 0 and y > 0:
sums.append(x + y)
print(sums)
"""
Explanation: Finally, create a new list consisting of the sums of the (x,y) pairs where both x and y are positive.
End of explanation
"""
xys = [[2, 3], [0, -1], [4, -2], [1, 6]]
tmp = [[y, x] for x, y in xys]
tmp.sort()
xys = [[x, y] for y, x in tmp]
# One liner is possible but not very readable anymore:
xys = [[x, y] for y, x in sorted([[ytmp, xtmp] for xtmp, ytmp in xys])]
# Summing positives with one liner is ok:
sums = [x+y for x,y in xys if x > 0 and y > 0]
"""
Explanation: List comprehensions are often convenient in this kind of situation:
End of explanation
"""
for number in range(1, 101):
if number % 3 == 0 and number % 5 == 0:
print("FizzBuzz")
elif number % 3 == 0:
print("Fizz")
elif number % 5 == 0:
print("Buzz")
    else:
        print(number)
"""
Explanation: FizzBuzz
This is a classic job interview question. Depending on the interviewer or interviewee it can filter out up to 95% of the interviewees for a position. The task is not difficult but it's easy to make simple mistakes.
If a number is divisible by 3, instead of the number print "Fizz", if a number is divisible by 5, print "Buzz" and if the number is divisible by both 3 and 5, print "FizzBuzz".
End of explanation
"""
import random
while True:
value = random.random()
if value < 0.1:
break
print("done")
"""
Explanation: Food for thought: How do people commonly fail this test and why?
Breaking
The python random module generates pseudorandom numbers.
Write a while loop that runs until
the output of random.random() is below 0.1 and break when
the value is below 0.1.
End of explanation
"""
temperatures_celsius = [0, -15, 20.15, 13.3, -5.2]
temperatures_kelvin = [c+273.15 for c in temperatures_celsius]
"""
Explanation: List comprehension
Using a list comprehension, create a new list, temperatures_kelvin, from the following Celsius temperatures by adding the value 273.15 to each.
End of explanation
"""
|
apryor6/apryor6.github.io | visualizations/seaborn/notebooks/.ipynb_checkpoints/colors-checkpoint.ipynb | mit | %matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
plt.rcParams['figure.figsize'] = (20.0, 10.0)
df = pd.read_csv('../../../datasets/movie_metadata.csv')
df.head()
"""
Explanation: seaborn.countplot
Bar graphs are useful for displaying relationships between categorical data and at least one numerical variable. seaborn.countplot is a barplot where the dependent variable is the number of occurrences of each category of the independent variable.
dataset: IMDB 5000 Movie Dataset
End of explanation
"""
# split each movie's genre list, then form a set from the unwrapped list of all genres
categories = set([s for genre_list in df.genres.unique() for s in genre_list.split("|")])
# one-hot encode each movie's classification
for cat in categories:
df[cat] = df.genres.transform(lambda s: int(cat in s))
# drop other columns
df = df[['director_name','genres','duration'] + list(categories)]
df.head()
# convert from wide to long format and remove null classifications
df = pd.melt(df,
id_vars=['duration'],
value_vars = list(categories),
var_name = 'Category',
value_name = 'Count')
df = df.loc[df.Count>0]
# add an indicator whether a movie is short or long, split at 100 minutes runtime
df['islong'] = df.duration.transform(lambda x: int(x > 100))
# sort in descending order
#df = df.loc[df.groupby('Category').transform(sum).sort_values('Count', ascending=False).index]
df.head()
"""
Explanation: For the bar plot, let's look at the number of movies in each category, allowing each movie to be counted more than once.
End of explanation
"""
p = sns.countplot(data=df, x = 'Category')
"""
Explanation: Basic plot
End of explanation
"""
p = sns.countplot(data=df,
x = 'Category',
hue = 'islong')
"""
Explanation: color by a category
End of explanation
"""
p = sns.countplot(data=df,
y = 'Category',
hue = 'islong')
"""
Explanation: make plot horizontal
End of explanation
"""
p = sns.countplot(data=df,
y = 'Category',
hue = 'islong',
saturation=.5)
"""
Explanation: Saturation
End of explanation
"""
p = sns.countplot(data=df,
y = 'Category',
hue = 'islong',
saturation=.9,
palette = 'deep')
p = sns.countplot(data=df,
y = 'Category',
hue = 'islong',
saturation=.9,
palette = 'muted')
p = sns.countplot(data=df,
y = 'Category',
hue = 'islong',
saturation=.9,
palette = 'pastel')
p = sns.countplot(data=df,
y = 'Category',
hue = 'islong',
saturation=.9,
palette = 'bright')
p = sns.countplot(data=df,
y = 'Category',
hue = 'islong',
saturation=.9,
palette = 'dark')
p = sns.countplot(data=df,
y = 'Category',
hue = 'islong',
saturation=.9,
palette = 'colorblind')
p = sns.countplot(data=df,
y = 'Category',
hue = 'islong',
saturation=.9,
palette = ((50/255, 132/255.0, 191/255.0), (255/255.0, 232/255.0, 0/255.0)))
p = sns.countplot(data=df,
y = 'Category',
hue = 'islong',
saturation=.9,
palette = 'Dark2')
help(sns.color_palette)
help(sns.countplot)
p = sns.countplot(data=df, x = 'Category')
plt.text(9,2000, "Color Palettes", fontsize = 95, color='black', fontstyle='italic')
p.get_figure().savefig('../../figures/colors.png')
"""
Explanation: Various palettes
End of explanation
"""
|
daniel-koehn/Theory-of-seismic-waves-II | 05_2D_acoustic_FD_modelling/lecture_notebooks/3_fdac2d_num_stability_anisotropy.ipynb | gpl-3.0 | # Execute this cell to load the notebook's style sheet, then ignore it
from IPython.core.display import HTML
css_file = '../../style/custom.css'
HTML(open(css_file, "r").read())
"""
Explanation: Content under Creative Commons Attribution license CC-BY 4.0, code under BSD 3-Clause License © 2018 by D. Koehn, notebook style sheet by L.A. Barba, N.C. Clementi
End of explanation
"""
# Import Libraries
# ----------------
import numpy as np
from numba import jit
import matplotlib
import matplotlib.pyplot as plt
from pylab import rcParams
# Ignore Warning Messages
# -----------------------
import warnings
warnings.filterwarnings("ignore")
# Definition of modelling parameters
# ----------------------------------
xmax = 5000.0 # maximum spatial extension of the 1D model in x-direction (m)
zmax = xmax # maximum spatial extension of the 1D model in z-direction (m)
dx = 10.0 # grid point distance in x-direction (m)
dz = dx # grid point distance in z-direction (m)
tmax = 0.8 # maximum recording time of the seismogram (s)
dt = 0.0010 # time step
vp0 = 3000. # P-wave speed in medium (m/s)
# acquisition geometry
xr = 2000.0 # x-receiver position (m)
zr = xr # z-receiver position (m)
xsrc = 2500.0 # x-source position (m)
zsrc = xsrc # z-source position (m)
f0 = 20. # dominant frequency of the source (Hz)
t0 = 4. / f0 # source time shift (s)
# FD_2D_acoustic code with JIT optimization
# -----------------------------------------
@jit(nopython=True) # use Just-In-Time (JIT) Compilation for C-performance
def FD_2D_acoustic_JIT(dt,dx,dz,f0):
# define model discretization
# ---------------------------
nx = (int)(xmax/dx) # number of grid points in x-direction
print('nx = ',nx)
nz = (int)(zmax/dz) # number of grid points in x-direction
print('nz = ',nz)
nt = (int)(tmax/dt) # maximum number of time steps
print('nt = ',nt)
ir = (int)(xr/dx) # receiver location in grid in x-direction
jr = (int)(zr/dz) # receiver location in grid in z-direction
isrc = (int)(xsrc/dx) # source location in grid in x-direction
jsrc = (int)(zsrc/dz) # source location in grid in x-direction
# Source time function (Gaussian)
# -------------------------------
src = np.zeros(nt + 1)
time = np.linspace(0 * dt, nt * dt, nt)
# 1st derivative of a Gaussian
src = -2. * (time - t0) * (f0 ** 2) * (np.exp(- (f0 ** 2) * (time - t0) ** 2))
# Analytical solution
# -------------------
G = time * 0.
# Initialize coordinates
# ----------------------
x = np.arange(nx)
x = x * dx # coordinates in x-direction (m)
z = np.arange(nz)
z = z * dz # coordinates in z-direction (m)
# calculate source-receiver distance
r = np.sqrt((x[ir] - x[isrc])**2 + (z[jr] - z[jsrc])**2)
for it in range(nt): # Calculate Green's function (Heaviside function)
if (time[it] - r / vp0) >= 0:
G[it] = 1. / (2 * np.pi * vp0**2) * (1. / np.sqrt(time[it]**2 - (r/vp0)**2))
Gc = np.convolve(G, src * dt)
Gc = Gc[0:nt]
# Initialize model (assume homogeneous model)
# -------------------------------------------
vp = np.zeros((nx,nz))
vp2 = np.zeros((nx,nz))
vp = vp + vp0 # initialize wave velocity in model
vp2 = vp**2
# Initialize empty pressure arrays
# --------------------------------
p = np.zeros((nx,nz)) # p at time n (now)
pold = np.zeros((nx,nz)) # p at time n-1 (past)
pnew = np.zeros((nx,nz)) # p at time n+1 (present)
d2px = np.zeros((nx,nz)) # 2nd spatial x-derivative of p
d2pz = np.zeros((nx,nz)) # 2nd spatial z-derivative of p
# Initialize empty seismogram
# ---------------------------
seis = np.zeros(nt)
# Calculate Partial Derivatives
# -----------------------------
for it in range(nt):
# FD approximation of spatial derivative by 3 point operator
for i in range(1, nx - 1):
for j in range(1, nz - 1):
d2px[i,j] = (p[i + 1,j] - 2 * p[i,j] + p[i - 1,j]) / dx**2
d2pz[i,j] = (p[i,j + 1] - 2 * p[i,j] + p[i,j - 1]) / dz**2
# Time Extrapolation
# ------------------
pnew = 2 * p - pold + vp2 * dt**2 * (d2px + d2pz)
# Add Source Term at isrc
# -----------------------
# Absolute pressure w.r.t analytical solution
pnew[isrc,jsrc] = pnew[isrc,jsrc] + src[it] / (dx * dz) * dt ** 2
# Remap Time Levels
# -----------------
pold, p = p, pnew
# Output of Seismogram
# -----------------
seis[it] = p[ir,jr]
return time, seis, Gc, p # return last pressure wave field snapshot
"""
Explanation: Numerical stability, dispersion and anisotropy of the 2D acoustic finite difference modelling code
Similar to the 1D acoustic FD modelling code, we have to investigate the stability and dispersion of the 2D numerical scheme. Additionally, in the 2D case the numerical dispersion shows an anisotropic behaviour.
Let's begin with the CFL-stability criterion ...
CFL-stability criterion for the 2D acoustic FD modelling code
As for the 1D code, the maximum size of the timestep $dt$ is limited by the Courant-Friedrichs-Lewy (CFL) criterion:
\begin{equation}
dt \le \frac{dx}{\zeta v_{max}}, \nonumber
\end{equation}
where $dx$ denotes the spatial grid point distance and $v_{max}$ the maximum P-wave velocity. The factor $\zeta$ depends on the FD operators used, the dimension and the numerical scheme.
As for the 1D case, we estimate the factor $\zeta$ by the von Neumann analysis, starting with the finite difference approximation of the 2D acoustic wave equation
\begin{equation}
\frac{p_{j,l}^{n+1} - 2 p_{j,l}^n + p_{j,l}^{n-1}}{\mathrm{d}t^2} \ = \ vp_{j,l}^2\biggl(\frac{p_{j,l+1}^{n} - 2 p_{j,l}^n + p_{j,l-1}^{n}}{\mathrm{d}x^2} + \frac{p_{j+1,l}^{n} - 2 p_{j,l}^n + p_{j-1,l}^{n}}{\mathrm{d}z^2}\biggr),
\end{equation}
and assuming harmonic plane wave solutions for the pressure wavefield of the form:
\begin{equation}
p = exp(i(k_x x + k_z z -\omega t)),\nonumber
\end{equation}
with $i^2=-1$, the wavenumbers $(k_x, k_z)$ in x-/z-direction, respectively, and the circular frequency $\omega$. Using a regular grid with
$dx = dz = dh,$
discrete spatial coordinates
$x_j = j dh,$
$z_l = l dh,$
and times
$t_n = n dt.$
we can calculate discrete plane wave solutions at the discrete locations and times in eq. (1):
\begin{align}
p_{j,l}^{n+1} &= exp(-i\omega dt)\; p_{j,l}^{n},\\
p_{j,l}^{n-1} &= exp(i\omega dt)\; p_{j,l}^{n},\\
p_{j+1,l}^{n} &= exp(ik_x dh)\; p_{j,l}^{n},\\
p_{j-1,l}^{n} &= exp(-ik_x dh)\; p_{j,l}^{n},\\
p_{j,l+1}^{n} &= exp(ik_z dh)\; p_{j,l}^{n},\\
p_{j,l-1}^{n} &= exp(-ik_z dh)\; p_{j,l}^{n}.
\end{align}
Inserting eqs. (2) - (7) into eq. (1), division by $p_{j,l}^{n}$ and using the definition:
\begin{equation}
\cos(x) = \frac{exp(ix) + exp(-ix)}{2},\nonumber
\end{equation}
yields:
\begin{equation}
cos(\omega dt) - 1 = vp_{j,l}^2 \frac{dt^2}{dh^2}\biggl({cos(k_x dh) - 1} + {cos(k_z dh) - 1}\biggr).\nonumber
\end{equation}
Some further rearrangements and division of both sides by 2, leads to:
\begin{equation}
\frac{1 - cos(\omega dt)}{2} = vp_{j,l}^2 \frac{dt^2}{dh^2}\biggl(\frac{1 - cos(k_x dh)}{2} + \frac{1 - cos(k_z dh)}{2}\biggr).\nonumber
\end{equation}
With the relation
\begin{equation}
sin^2\biggl(\frac{x}{2}\biggr) = \frac{1-cos(x)}{2}, \nonumber
\end{equation}
we get
\begin{equation}
sin^2\biggl(\frac{\omega dt}{2}\biggr) = vp_{j,l}^2 \frac{dt^2}{dh^2}\biggl(sin^2\biggl(\frac{k_x dh}{2}\biggr)+sin^2\biggl(\frac{k_z dh}{2}\biggr)\biggr). \nonumber
\end{equation}
Taking the square root of both sides finally yields
\begin{equation}
sin\biggl(\frac{\omega dt}{2}\biggr) = vp_{j,l} \frac{dt}{dh}\sqrt{sin^2\biggl(\frac{k_x dh}{2}\biggr)+sin^2\biggl(\frac{k_z dh}{2}\biggr)}.
\end{equation}
This result implies that if the Courant number
\begin{equation}
\epsilon = vp_{j,l} \frac{dt}{dh} \nonumber
\end{equation}
is larger than $1/\sqrt{2}$, you get only imaginary solutions, while the real part is zero (think for a second why). Consequently, the numerical scheme becomes unstable, when the following CFL-criterion is violated
\begin{equation}
\epsilon = vp_{j,l} \frac{dt}{dh} \le \frac{1}{\sqrt{2}} \nonumber
\end{equation}
Rearranging for the time step $dt$, assuming that we have defined a spatial grid point distance $dh$ and replacing $vp_{j,l}$ by the maximum P-wave velocity in the FD model $v_{max}$, leads to
\begin{equation}
dt \le \frac{dh}{\sqrt{2}v_{max}}. \nonumber
\end{equation}
Therefore, the factor $\zeta$ in the general CFL-criterion
\begin{equation}
dt \le \frac{dh}{\zeta v_{max}}, \nonumber
\end{equation}
for the FD solution of the 2D acoustic wave equation using the temporal/spatial 3-point operator to approximate the 2nd derivative is $\zeta = \sqrt{2}$.
Let's check if this result is correct:
End of explanation
"""
# Compare FD Seismogram with analytical solution
# ----------------------------------------------
def plot_seis(time,seis_FD,seis_analy):
# Define figure size
rcParams['figure.figsize'] = 12, 5
plt.plot(time, seis_FD, 'b-',lw=3,label="FD solution") # plot FD seismogram
plt.plot(time, seis_analy,'r--',lw=3,label="Analytical solution") # plot analytical solution
plt.xlim(time[0], time[-1])
plt.title('Seismogram')
plt.xlabel('Time (s)')
plt.ylabel('Amplitude')
plt.legend()
plt.grid()
plt.show()
dx = 10.0 # grid point distance in x-direction (m)
dz = dx # grid point distance in z-direction (m)
f0 = 20 # centre frequency of the source wavelet (Hz)
# define zeta for the CFL criterion
zeta = np.sqrt(2)
# calculate dt according to the CFL-criterion
dt = dx / (zeta * vp0)
%time time, seis_FD, seis_analy, p = FD_2D_acoustic_JIT(dt,dx,dz,f0)
plot_seis(time,seis_FD,seis_analy)
"""
Explanation: To separate modelling and visualization of the results, we introduce the following plotting function:
End of explanation
"""
fmax = 2 * f0
N_lam = vp0 / (dx * fmax)
print("N_lam = ",N_lam)
"""
Explanation: Numerical Grid Dispersion
While the FD solution above is stable, it is subject to some numerical dispersion when compared with the analytical solution. The grid point distance $dx = 10\; m$, P-wave velocity $vp = 3000\; m/s$ and a maximum frequency $f_{max} \approx 2 f_0 = 40\; Hz$ lead to ...
End of explanation
"""
N_lam = 12
dx = vp0 / (N_lam * fmax)
dz = dx # grid point distance in z-direction (m)
f0 = 20 # centre frequency of the source wavelet (Hz)
# define zeta for the CFL criterion
zeta = np.sqrt(2)
# calculate dt according to the CFL-criterion
dt = dx / (zeta * vp0)
%time time, seis_FD, seis_analy, p = FD_2D_acoustic_JIT(dt,dx,dz,f0)
plot_seis(time,seis_FD,seis_analy)
"""
Explanation: $N_\lambda = 7.5$ gridpoints per minimum wavelength. Let's increase it to $N_\lambda = 12$, which yields ...
End of explanation
"""
# define dx/dz and calculate dt according to the CFL-criterion
dx = 10.0 # grid point distance in x-direction (m)
dz = dx # grid point distance in z-direction (m)
# define zeta for the CFL criterion
zeta = np.sqrt(2)
dt = dx / (zeta * vp0)
f0 = 100
time, seis_FD, seis_analy, p = FD_2D_acoustic_JIT(dt,dx,dz,f0)
# Plot last pressure wavefield snapshot at Tmax = 0.8 s
# -----------------------------------------------------
rcParams['figure.figsize'] = 8, 8 # define figure size
clip = 1e-7 # image clipping
extent = [0.0, xmax/1000, 0.0, zmax/1000]
# Plot wavefield snapshot at tmax = 0.8 s
plt.imshow(p.T,interpolation='none',cmap='seismic',vmin=-clip,vmax=clip,extent=extent)
plt.title('Numerical anisotropy')
plt.xlabel('x (km)')
plt.ylabel('z (km)')
plt.show()
"""
Explanation: ... an improved fit of the 2D analytical solution by the FD solution.
Numerical Anisotropy
Compared to the 1D acoustic case, the numerical dispersion behaves a little bit differently in the 2D FD approximation. To illustrate this problem, we model the pressure wavefield for $t_{max} = 0.8\; s$ for a fixed grid point distance of $dx = 10\;m$ and a centre frequency of the source wavelet $f_0 = 100\; Hz$, which corresponds to $N_\lambda = 1.5$ grid points per minimum wavelength.
End of explanation
"""
|
kpyuan1776/mvv_analyse | graphCompDistances.ipynb | gpl-3.0 | import googlemaps
import json
from datetime import datetime
import networkx as nx
import csv
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# here is my API key from my project
gmaps = googlemaps.Client(key='PUT_GOOGLE_MAPS_API_KEY_HERE')
file_edges = 'edges_sbahn_fixed.txt'
file_verts = 'vertices_stations_fixed.txt'
file_edgesDir = 'edges_withDist.txt'
with open(file_verts, 'rb') as f:
verts = list(csv.reader(f))
with open(file_edges, 'rb') as f:
edges = list(csv.reader(f))
with open(file_edgesDir, 'rb') as f:
edgesDirs = list(csv.reader(f))
def compAdjacencyMat(edges,verts):
sizeAdj = (len(verts),len(verts))
adjacencyMat = np.zeros(sizeAdj)
#adjacencyMat = np.identity(len(verts))
for ii in range(len(edges)):
connect = edges[ii]
adjacencyMat[int(connect[0])-1][int(connect[1])-1] = 1.0
adjacencyMat[int(connect[1])-1][int(connect[0])-1] = 1.0
return adjacencyMat
#adds a third column of distance in m and a 4th of duration in s
def addDirectionsToEdges(edges,verts):
sizeEdgesDirs = (len(edges),len(edges[1])+2)
edges_dirs = np.zeros(sizeEdgesDirs)
for idx, ed in enumerate(edges):
#print(str(idx)+'\n')
current_time = datetime.now()
#current_time.replace(minute = current_time.minute)
edges_dirs[int(idx)][0:2] = ed[0:2]
distance_result = gmaps.distance_matrix(verts[int(ed[0])-1][2]+','+verts[int(ed[0])-1][3],
verts[int(ed[1])-1][2]+','+verts[int(ed[1])-1][3],
mode="transit",transit_mode='rail',traffic_model='best_guess',
departure_time=current_time)
edges_dirs[int(idx)][2] = distance_result[u'rows'][0][u'elements'][0][u'distance'][u'value']
edges_dirs[int(idx)][3] = distance_result[u'rows'][0][u'elements'][0][u'duration'][u'value']
return edges_dirs
def writeMatToFile(filename,mat):
f = open(filename, 'w')
for edgDist in mat:
linestr = str(edgDist[0]) + ', ' + str(edgDist[1]) + ', ' + str(edgDist[2]) + ', ' + str(edgDist[3])
print >> f, linestr
f.close()
def setWeights(G,edgDist):
for edge in edgDist:
v1 = int(float(edge[0]))-1
v2 = int(float(edge[1]))-1
G[v1][v2]['weight'] = int(float(edge[2]))
return G
def compDistOfShortPath(G,src,targ):
shortpath = nx.shortest_path(G,src,targ)
dist = 0;
for ii in range(len(shortpath)-1):
dist = dist + G[shortpath[ii]][shortpath[ii+1]]['weight']
return dist
def compAllCombination_Dist(G,verts):
sizeAdj = (len(verts),len(verts))
distMat = np.zeros(sizeAdj)
for ii in range(len(verts)):
for jj in range(len(verts)):
distMat[ii][jj] = compDistOfShortPath(G,ii,jj)
return distMat
"""
Explanation: Analysis of Munich's public transportation network (S-Bahn only)
loads vertices and edges from external text files (at the moment there is a bug in the edges: one connection is wrong)
it computes an adjacency matrix used to form a graph using networkx
the Google Distance Matrix API and the lat/long information are used to compute distances in [m]
End of explanation
"""
adjMat = compAdjacencyMat(edges,verts)
G = nx.from_numpy_matrix(adjMat)
nx.draw(G)
distMat = compAllCombination_Dist(G,verts)
# Plot it out
fig, ax = plt.subplots()
heatmap = ax.pcolor(distMat, cmap=plt.cm.Blues, alpha=0.8)
# Format
fig = plt.gcf()
fig.set_size_inches(25, 35)
# turn off the frame
ax.set_frame_on(False)
# put the major ticks at the middle of each cell
ax.set_yticks(np.arange(149) + 0.5, minor=False)
ax.set_xticks(np.arange(149) + 0.5, minor=False)
ax.invert_yaxis()
ax.xaxis.tick_top()
#form labels and put them on axis
labels = [None]*len(verts)
for ii in range(len(verts)):
labels[ii] = (verts[ii][1])
ax.set_xticklabels(labels, minor=False)
ax.set_yticklabels(labels, minor=False)
# rotate the x-axis tick labels
plt.xticks(rotation=90)
#ax.grid(False)
#legend
cbar = plt.colorbar(heatmap)
#cbar.ax.set_yticklabels(['0','1','2','>3'])
#cbar.set_label('# of contacts', rotation=270)
# Turn off all the ticks
ax = plt.gca()
"""
Explanation: compute adjacency matrix and form Graph
End of explanation
"""
current_time = datetime.now()
distance_res = gmaps.distance_matrix('48.216788, 11.174000',
'48.089545, 11.832342',
mode="transit",transit_mode='rail',traffic_model='best_guess',
departure_time=current_time)
distance_res
"""
Explanation: check results by comparing to distance from google distance_matrix
End of explanation
"""
|
gschivley/Index-variability | Notebooks/Compile population data.ipynb | bsd-3-clause | %matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
import numpy as np
import os
from os.path import join
cwd = os.getcwd()
data_directory = join(cwd, '..', 'Data storage')
"""
Explanation: Combine data files with state populations
The first data file has 2000-2010
End of explanation
"""
path = os.path.join(data_directory, 'Population data',
'st-est00int-alldata.csv')
pop1 = pd.read_csv(path)
pop1.head()
pop1 = pop1.loc[(pop1['SEX'] == 0) &
(pop1['ORIGIN'] == 0) &
(pop1['RACE'] == 0) &
(pop1['AGEGRP'] == 0), :]
# Column names for population estimate
est_cols = ['POPESTIMATE{}'.format(x) for x in range(2000, 2011)]
# Melt the wide-form data into a tidy dataframe
pop1_tidy = pd.melt(pop1, id_vars='NAME',
value_vars=est_cols, var_name='Year',
value_name='Population')
pop1_tidy.head()
def map_year(x):
'Return last 4 characters (the year)'
year = x[-4:]
return int(year)
pop1_tidy['Year'] = pop1_tidy['Year'].map(map_year)
"""
Explanation: Import 2000-2010 data
The sex, origin, race, and age columns represent totals (i.e. ALL) when they have values of 0
Not clear if these are beginning or end of year values.
https://www2.census.gov/programs-surveys/popest/datasets/2000-2010/intercensal/state/
End of explanation
"""
pop1_tidy.loc[pop1_tidy['Year'] == 2010].head()
pop1_tidy.head()
pop1_tidy.tail()
pop1_tidy.columns = ['State', 'Year', 'Population']
"""
Explanation: The values shown below are ever so slightly different from those listed in the later dataset.
End of explanation
"""
path = os.path.join(data_directory, 'Population data', 'nst-est2016-01.xlsx')
pop2 = pd.read_excel(path, header=3, parse_cols='A, D:J', skip_footer=7)
pop2.head()
pop2.tail()
drop_rows = ['Northeast', 'Midwest', 'South', 'West']
pop2.drop(drop_rows, inplace=True)
pop2.index = pop2.index.str.strip('.')
pop2.head()
pop2.columns
pop2_tidy = pd.melt(pop2.reset_index(), id_vars='index',
value_vars=range(2010, 2017), value_name='Population',
var_name='Year')
pop2_tidy.columns = ['State', 'Year', 'Population']
"""
Explanation: Import 2010-2016 data
https://www.census.gov/data/tables/2016/demo/popest/state-total.html
End of explanation
"""
pop_total = pd.concat([pop1_tidy, pop2_tidy])
"""
Explanation: Combine data
End of explanation
"""
pop_total.loc[pop_total['Year']==2010].sort_values('State')
pop_total = pd.concat([pop1_tidy.loc[pop1_tidy['Year'] < 2010], pop2_tidy])
pop_total.head()
pop_total.tail()
path = os.path.join('Data storage', 'Derived data', 'State population.csv')
pop_total.to_csv(path, index=False)
"""
Explanation: The overlapping 2010 values are different, but just barely. I'm going to re-combine the datasets and keep values from the second dataset.
End of explanation
"""
|
encima/Comp_Thinking_In_Python | Session_8/8_Recursion and Dicts.ipynb | mit | empty_dict = {}
contact_dict = {
"name": "Homer",
"email": "homer@simpsons.com",
"phone": 999
}
print(contact_dict)
"""
Explanation: Recursion and Dictionaries
Dr. Chris Gwilliams
gwilliamsc@cardiff.ac.uk
Overview
Scripts in Python
Types
Methods and Functions
Flow control
Lists
Iteration
for loops
while loops
Now
Dicts
Tuples
Iteration vs Recursion
Recursion
Dictionaries
Python has many different data structures available (see here)
The dictionary structure is similar to a list, but the index is specified by you.
It is also known as an associative array, where values are mapped to a key.
End of explanation
"""
print(contact_dict['email'])
"""
Explanation: This follows the format of JSON (JavaScript Object Notation).
Keys can be accessed the same way as lists:
End of explanation
"""
soft_acc = {
"post_code": "np20",
"floor": 3
}
for key in soft_acc:
print(key)
for key in soft_acc:
if len(key) > 5:
print(key)
"""
Explanation: Exercise
Create a dictionary with information about the software academy
Loop through it and print the values
Now use enumerate to print the key index
Modify the loop to only print the values where the length is > 5
End of explanation
"""
print(soft_acc.keys())
print(soft_acc.values())
print(soft_acc.items())
"""
Explanation: Keys and Values
Dictionaries have methods associated with them to access the keys, values or both from within the dict.
Exercise
Use the dir function (and the Python docs) on your soft_acc dict and write down the 3 methods that can be used to access the keys, values and both
End of explanation
"""
def find_first_int_value(dictionary):
for val in dictionary.values():
if type(val) is int:
return val
def find_first_int_key(dictionary):
for key, val in dictionary.items():
        if type(val) is int:
return key
def example(dictionary):
for key, val in enumerate(dictionary):
if type(key) is int:
return key
find_first_int_value(soft_acc)
find_first_int_key(soft_acc)
example(soft_acc)
"""
Explanation: Exercise
Using the methods you found, write a function that has a dictionary as an argument and loops through the values to return the first value that is of type int
Create a new function that does the same but returns the key of that value.
End of explanation
"""
students = {1234: "gary", 4567: "jen"}
print(students.get(1234))
gary = students.popitem() #how does this differ from pop?
print(gary)
print(students)
#pop gives a value but popitem gives you a tuple
students[gary[0]] = gary[1]
print(students)
print(students.pop(48789492, "Sorry, that student number does not exist"))
"""
Explanation: Accessing and Reassigning
With dicts, we can access values by key through square bracket notation:
my_dict['key']
or through the get method:
my_dict.get('key')
Removing Items
Much like lists, we can pop elements from a dict, but the way this is done is slightly different:
pop() - One must provide the key of the item to be removed and the value is returned. An error is given if nothing was found
popitem() - This works much like pop on a list, removing the last item in the dict and providing the key and the value.
Exercise
Create a dict of student numbers as keys and student names as values.
Print the third value in the dict using the get method
Choose any item in the list and pop it off and save it to a variable
Now add it back into the dict
Using the docs, explain the difference between pop and popitem return types
Using the docs, call the pop method for a key that does not exist, but make it return a string that reads, "Sorry, that student number does not exist"
End of explanation
"""
my_tuple = 1, 2
print(my_tuple)
new_tuple = 3, "a", 6
print(new_tuple)
print(new_tuple[1])
"""
Explanation: Tuples
We have just seen that tuples are a data structure in Python and that they are not as simple as lists or ints!
Tuples are like lists, but they are immutable!
We can access them like lists as well:
End of explanation
"""
my_tuple = 999, "Dave", "dave@dave.com", True, True
phone = my_tuple[0]
name = my_tuple[1]
email = my_tuple[2]
yawn = my_tuple[3]
still_not_done = my_tuple[4]
#unpacking tuples: number of names on left MUST match values in tuple
phone, name, email, yawn, still_not_done = my_tuple
"""
Explanation: Unpacking Tuples
Tuples can hold any number of values and it is easy enough to access them using square bracket notation.
But, we may receive a tuple that contains many values and this is no fun:
End of explanation
"""
students = {1234: "gary", 4567: "jen"}
print("1. {}".format(1234 in students))
print(1234 in students.keys())
print("jen" in students)
print("gary" in students.values())
print("gary" in students.items())
print((1234, "gary") in students.items())
"""
Explanation: Searching Dictionaries
Unlike lists, we are not just looking for values within dictionaries. Since we can specify our own keys, there are many times we may want to search this as well.
Exercise
Using the student dictionary from a few slides back, check for the student with the name "jen": first using only the name, then the student number, and then a tuple of the two.
Hint: Use the methods attached to dicts.
End of explanation
"""
strings = ['hey', 'a', 'you there', 'what to the what to the what', 'sup', 'oi oi', 'how are you doing, good sir?']
# strings_dict = {}
strings_dict = {33: 'dfhjkdshfjkhdsfjkhdskahfksahkjdfk', 18: 'dkjhskjhskdhffkdjh', 19: 'dfkjsdhfkjdhsfkjhdk', 5: 'kdkdf', 6: 'fdjhfd', 9: 'fkljrwlgj', 28: 'fdjfkdjfkljlskdfjlksdjflsk;a'}
# while True:
# msg = input("Please type your message here:")
# if msg is not 'q':
# strings_dict[len(msg)] = msg
# else:
# break
def list_to_dict(strings):
strings_dict = {}
for string in strings:
strings_dict[len(string)] = string
return strings_dict
def sort_list(input_list):
is_sorted = True
for key in range(0, len(input_list)):
for i in range(0, len(input_list)):
current = input_list[i]
if i + 1 < len(input_list):
if len(current) > len(input_list[i + 1]):
input_list[i] = input_list[i + 1]
input_list[i + 1] = current
return input_list
def mean_length(lengths):
total_length = 0
for length in lengths:
total_length += length
return total_length/len(lengths)
# strings_dict = list_to_dict(strings)
print("Average string length is {0}".format(mean_length(strings_dict.keys())))
sorted_list = sort_list(list(strings_dict.values()))
print("Sorted list is {0}".format(sorted_list))
"""
Explanation: Exercise
Write a script that:
1. Users can input messages until they type 'q'
2. Messages are added to a dictionary with the length of the message as the key
3. Write a function that uses the keys as the input to return the average length
4. Write a second function that takes the values of the dictionary and sorts them according to length
End of explanation
"""
def sum_until(x):
if x == 1:
return x
else:
return x + sum_until(x - 1)
print(sum_until(3))
"""
Explanation: Recursion
An iterative function is a function that loops to repeat a block of code.
A recursive function is a function that calls itself until a condition is met.
End of explanation
"""
def check_value(input_list, start):
if(start == len(input_list) - 1):
return
elif(input_list[start] > input_list[start+1]):
current = input_list[start]
input_list[start] = input_list[start + 1]
input_list[start + 1] = current
return input_list
l = [3,1,4]
print(check_value(l, 0))
"""
Explanation: What is Happening Here?
Python sees our call to the function and executes it.
| Recursion level | Check | Return |
| --- | --- | --- |
| 1 | Does x equal 1? No | Return x + sum_until(2) |
| 2 | Does x equal 1? No | Return x + (2 + sum_until(1)) |
| 3 | Does x equal 1? Yes | Return x, so the calls unwind to 3 + 2 + 1 = 6 |
Exercise
Write a function that takes a list as an input, a start index and checks if the value at that index is greater than the value at the next index. If it is more: swap them. Return the list.
HINT: You must make sure that the index + 1 must be less than the length of the list.
End of explanation
"""
#function receives list, start point and endpoint as args
def recursive_sort(input_list, index, end):
#if the startpoint goes beyond the endpoint then return
if index > end:
return(input_list)
#if the start point is equal to the end then decrement the end
if index == end:
recursive_sort(input_list, 0, end - 1)
# check if the string at index is longer than the string at index + 1
# replace it if it is
# why do we need a temporary variable?
elif len(input_list[index]) > len(input_list[index + 1]):
current = input_list[index]
print("Switching \"{0}\" at {1} for \"{2}\"".format(current, index, input_list[index + 1]))
input_list[index] = input_list[index + 1]
input_list[index + 1] = current
# call the function again and increment the index
recursive_sort(input_list, index + 1, end)
# Why do we need this here?
return input_list
strings = ['hey', 'a', 'you there', 'what to the what to the what', 'sup', 'oi oi', 'how are you doing, good sir?']
sorted_list = recursive_sort(strings, 0, len(strings)-1)
print(sorted_list)
#uncommented
def recursive_sort(input_list, index, end):
if index > end:
return(input_list)
if index == end:
recursive_sort(input_list, 0, end - 1)
elif len(input_list[index]) > len(input_list[index + 1]):
current = input_list[index]
print("Switching \"{0}\" at {1} for \"{2}\"".format(current, index, input_list[index + 1]))
input_list[index] = input_list[index + 1]
input_list[index + 1] = current
recursive_sort(input_list, index + 1, end)
return input_list
strings = ['hey', 'a', 'you there', 'what to the what to the what', 'sup', 'oi oi', 'how are you doing, good sir?']
sorted_list = recursive_sort(strings, 0, len(strings)-1)
print(sorted_list)
"""
Explanation: Exercise
Now modify the function to use an end index as an argument (which is the length of the list to begin with).
In your check for whether the start index is more than the length of the list, do the following things:
- call the function again, with the same list as the arguments
- the start index set to 0
- the end index decremented
- Before returning the original list, call the function again but increment the start index
- Add a check to return the list at the start of the function, if the start index is more than the end
End of explanation
"""
import timeit
def recursive_factorial(n):
if n == 1:
return 1
else:
return n * recursive_factorial(n-1)
def iterative_factorial(n):
x = 1
for each in range(1, n + 1):
x = x * each
return x
print("Timing runs for recursive approach: ")
%timeit for x in range(100): recursive_factorial(500)
print("Timing runs for iterative approach: ")
%timeit for x in range(100): iterative_factorial(500)
# print(timeit.repeat("factorial(10)",number=100000))
"""
Explanation: Bubble Sort
Congratulations, you just implemented your first sorting algorithm. You can find more information on the bubble sort here
Recursion vs. Iteration
You have seen both, which is better/faster/more optimal?
Recursive approaches are typically shorter and easier to read. However, they often result in slower code because of all the function calls they make, as well as the risk of a stack overflow when too many calls are made.
Typically, math-based approaches will use recursion and most software engineering code will use iteration. Basically, most algorithms will use recursion, so you need to understand how it works.
When Should You Use It?
Recursion is often seen as some mythical beast but the breakdown (as we have seen) is quite simple.
However, most (not all) languages are not tuned for recursion and, in performance terms, iteration is often vastly quicker.
End of explanation
"""
import random
def sort_numbers(s):
for i in range(1, len(s)):
val = s[i]
j = i - 1
while (j >= 0) and (s[j] > val):
s[j+1] = s[j]
j = j - 1
s[j+1] = val
# x = eval(input("Enter numbers to be sorted: "))
# x = list(range(0, 10)) #list(x)
x = random.sample(range(1, 1000000001), 100000000)
print(x)
sort_numbers(x)
print(x)
"""
Explanation: Why the Difference?
To understand this, we need to understand a little bit about how programs are run.
Two key things are the stack and the heap
Stack
Every time a function or a method is called, it is put on the stack to be executed. Recursion uses the stack extensively because each function calls itself (until some condition is met).
See the code below:
python
def recursive():
return recursive()
Running this would call the recursive function an unbounded number of times. The Python interpreter guards against this: once the recursion limit is exceeded it raises a RecursionError, i.e. a stack overflow (see the short illustration at the end of this notebook).
Heap
The heap is the space for dynamic allocation of objects. The more objects created, the greater the heap. Although, this is dynamic and can grow as the application grows.
Python also takes care of this for us, by using a garbage collector. This tracks allocations of objects and cleans them up when they are no longer used. We can force things to be cleared by using:
del my_var
However, if assigning that variable takes up 50MB, Python may not always clear 50MB when it is deallocated. Why do you think this is?
FizzBuzz Exercise
Write a for loop that goes from 1 to 100 (inclusive) and prints:
* fizz if the number is a multiple of 3
* buzz if the number is a multiple of 5
* fizzbuzz if the number is a multiple of both 3 and 5
* the value for any other case
Exercise
Now turn this into a function and modify it to not use a for loop and use recursion. I.e. calling the function until the value reaches 100.
Homework
Insertion Sort
The insertion sort is a basic algorithm to build the sorted array in a similar way to the bubble sort.
The list is sorted by looping through all the elements from the index to the end, moving the index along for each loop.
End of explanation
"""
|
GoogleCloudPlatform/training-data-analyst | courses/machine_learning/deepdive2/tensorflow_extended/labs/penguin_tfdv.ipynb | apache-2.0 | # Install the TensorFlow Extended library
!pip install -U tfx
"""
Explanation: Data validation using TFX Pipeline and TensorFlow Data Validation
Learning Objectives
Understand the data types, distributions, and other information (e.g., mean value, or number of uniques) about each feature.
Generate a preliminary schema that describes the data.
Identify anomalies and missing values in the data with respect to given schema.
Introduction
In this notebook, we will create and run TFX pipelines
to validate input data and create an ML model. This notebook is based on the
TFX pipeline we built in
Simple TFX Pipeline Tutorial.
If you have not read that tutorial yet, you should read it before proceeding
with this notebook.
In this notebook, we will create two TFX pipelines.
First, we will create a pipeline to analyze the dataset and generate a
preliminary schema of the given dataset. This pipeline will include two new
components, StatisticsGen and SchemaGen.
Once we have a proper schema of the data, we will create a pipeline to train
an ML classification model based on the pipeline from the previous tutorial.
In this pipeline, we will use the schema from the first pipeline and a
new component, ExampleValidator, to validate the input data.
The three new components, StatisticsGen, SchemaGen and ExampleValidator, are
TFX components for data analysis and validation, and they are implemented
using the
TensorFlow Data Validation library.
Please see
Understanding TFX Pipelines
to learn more about various concepts in TFX.
Each learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook.
Install TFX
End of explanation
"""
# Load necessary libraries
import tensorflow as tf
print('TensorFlow version: {}'.format(tf.__version__))
from tfx import v1 as tfx
print('TFX version: {}'.format(tfx.__version__))
"""
Explanation: Restart the kernel (Kernel > Restart kernel > Restart). Please ignore any incompatibility warnings and errors.
Check the TensorFlow and TFX versions.
End of explanation
"""
import os
# We will create two pipelines. One for schema generation and one for training.
SCHEMA_PIPELINE_NAME = "penguin-tfdv-schema"
PIPELINE_NAME = "penguin-tfdv"
# Output directory to store artifacts generated from the pipeline.
SCHEMA_PIPELINE_ROOT = os.path.join('pipelines', SCHEMA_PIPELINE_NAME)
PIPELINE_ROOT = os.path.join('pipelines', PIPELINE_NAME)
# Path to a SQLite DB file to use as an MLMD storage.
SCHEMA_METADATA_PATH = os.path.join('metadata', SCHEMA_PIPELINE_NAME,
'metadata.db')
METADATA_PATH = os.path.join('metadata', PIPELINE_NAME, 'metadata.db')
# Output directory where created models from the pipeline will be exported.
SERVING_MODEL_DIR = os.path.join('serving_model', PIPELINE_NAME)
from absl import logging
logging.set_verbosity(logging.INFO) # Set default logging level.
"""
Explanation: Set up variables
There are some variables used to define a pipeline. You can customize these
variables as you want. By default all output from the pipeline will be
generated under the current directory.
End of explanation
"""
import urllib.request
import tempfile
# Create a temporary directory.
DATA_ROOT = # TODO: Your code goes here
_data_url = 'https://raw.githubusercontent.com/tensorflow/tfx/master/tfx/examples/penguin/data/labelled/penguins_processed.csv'
_data_filepath = os.path.join(DATA_ROOT, "data.csv")
urllib.request.urlretrieve(_data_url, _data_filepath)
"""
Explanation: Prepare example data
We will download the example dataset for use in our TFX pipeline. The dataset
we are using is
Palmer Penguins dataset
which is also used in other
TFX examples.
There are four numeric features in this dataset:
culmen_length_mm
culmen_depth_mm
flipper_length_mm
body_mass_g
All features were already normalized to have range [0,1]. We will build a
classification model which predicts the species of penguins.
Because the TFX ExampleGen component reads inputs from a directory, we need
to create a directory and copy the dataset to it.
End of explanation
"""
# Print the first ten lines of the file
!head {_data_filepath}
"""
Explanation: Take a quick look at the CSV file.
End of explanation
"""
def _create_schema_pipeline(pipeline_name: str,
pipeline_root: str,
data_root: str,
metadata_path: str) -> tfx.dsl.Pipeline:
"""Creates a pipeline for schema generation."""
# Brings data into the pipeline.
example_gen = tfx.components.CsvExampleGen(input_base=data_root)
# NEW: Computes statistics over data for visualization and schema generation.
# TODO: Your code goes here
# NEW: Generates schema based on the generated statistics.
# TODO: Your code goes here
components = [
example_gen,
statistics_gen,
schema_gen,
]
return tfx.dsl.Pipeline(
pipeline_name=pipeline_name,
pipeline_root=pipeline_root,
metadata_connection_config=tfx.orchestration.metadata
.sqlite_metadata_connection_config(metadata_path),
components=components)
"""
Explanation: You should be able to see five feature columns. species is one of 0, 1 or 2,
and all other features should have values between 0 and 1. We will create a TFX
pipeline to analyze this dataset.
Generate a preliminary schema
TFX pipelines are defined using Python APIs. We will create a pipeline to
generate a schema from the input examples automatically. This schema can be
reviewed by a human and adjusted as needed. Once the schema is finalized it can
be used for training and example validation in later tasks.
In addition to CsvExampleGen which is used in
Simple TFX Pipeline Tutorial,
we will use StatisticsGen and SchemaGen:
StatisticsGen calculates
statistics for the dataset.
SchemaGen examines the
statistics and creates an initial data schema.
See the guides for each component or
TFX components tutorial
to learn more on these components.
Write a pipeline definition
We define a function to create a TFX pipeline. A Pipeline object
represents a TFX pipeline which can be run using one of pipeline
orchestration systems that TFX supports.
End of explanation
"""
# run the pipeline using Local TFX DAG runner
tfx.orchestration.LocalDagRunner().run(
_create_schema_pipeline(
pipeline_name=SCHEMA_PIPELINE_NAME,
pipeline_root=SCHEMA_PIPELINE_ROOT,
data_root=DATA_ROOT,
metadata_path=SCHEMA_METADATA_PATH))
"""
Explanation: Run the pipeline
We will use LocalDagRunner as in the previous tutorial.
End of explanation
"""
from ml_metadata.proto import metadata_store_pb2
# Non-public APIs, just for showcase.
from tfx.orchestration.portable.mlmd import execution_lib
def get_latest_artifacts(metadata, pipeline_name, component_id):
"""Output artifacts of the latest run of the component."""
context = metadata.store.get_context_by_type_and_name(
'node', f'{pipeline_name}.{component_id}')
executions = metadata.store.get_executions_by_context(context.id)
latest_execution = max(executions,
key=lambda e:e.last_update_time_since_epoch)
return execution_lib.get_artifacts_dict(metadata, latest_execution.id,
[metadata_store_pb2.Event.OUTPUT])
# Non-public APIs, just for showcase.
from tfx.orchestration.experimental.interactive import visualizations
def visualize_artifacts(artifacts):
"""Visualizes artifacts using standard visualization modules."""
for artifact in artifacts:
visualization = visualizations.get_registry().get_visualization(
artifact.type_name)
if visualization:
visualization.display(artifact)
from tfx.orchestration.experimental.interactive import standard_visualizations
standard_visualizations.register_standard_visualizations()
"""
Explanation: You should see "INFO:absl:Component SchemaGen is finished." if the pipeline
finished successfully.
We will examine the output of the pipeline to understand our dataset.
Review outputs of the pipeline
As explained in the previous tutorial, a TFX pipeline produces two kinds of
outputs, artifacts and a
metadata DB(MLMD) which contains
metadata of artifacts and pipeline executions. We defined the location of
these outputs in the above cells. By default, artifacts are stored under
the pipelines directory and metadata is stored as a sqlite database
under the metadata directory.
You can use MLMD APIs to locate these outputs programatically. First, we will
define some utility functions to search for the output artifacts that were just
produced.
End of explanation
"""
# Non-public APIs, just for showcase.
from tfx.orchestration.metadata import Metadata
from tfx.types import standard_component_specs
metadata_connection_config = tfx.orchestration.metadata.sqlite_metadata_connection_config(
SCHEMA_METADATA_PATH)
with Metadata(metadata_connection_config) as metadata_handler:
# Find output artifacts from MLMD.
stat_gen_output = get_latest_artifacts(metadata_handler, SCHEMA_PIPELINE_NAME,
'StatisticsGen')
stats_artifacts = stat_gen_output[standard_component_specs.STATISTICS_KEY]
schema_gen_output = get_latest_artifacts(metadata_handler,
SCHEMA_PIPELINE_NAME, 'SchemaGen')
schema_artifacts = schema_gen_output[standard_component_specs.SCHEMA_KEY]
"""
Explanation: Now we can examine the outputs from the pipeline execution.
End of explanation
"""
# docs-infra: no-execute
visualize_artifacts(stats_artifacts)
"""
Explanation: It is time to examine the outputs from each component. As described above,
Tensorflow Data Validation(TFDV)
is used in StatisticsGen and SchemaGen, and TFDV also
provides visualization of the outputs from these components.
In this tutorial, we will use the visualization helper methods in TFX which
use TFDV internally to show the visualization.
Examine the output from StatisticsGen
End of explanation
"""
visualize_artifacts(schema_artifacts)
"""
Explanation: You can see various stats for the input data. These statistics are supplied to
SchemaGen to construct an initial schema of data automatically.
Examine the output from SchemaGen
End of explanation
"""
import shutil
_schema_filename = 'schema.pbtxt'
SCHEMA_PATH = 'schema'
os.makedirs(SCHEMA_PATH, exist_ok=True)
_generated_path = os.path.join(schema_artifacts[0].uri, _schema_filename)
# Copy the 'schema.pbtxt' file from the artifact uri to a predefined path.
shutil.copy(_generated_path, SCHEMA_PATH)
"""
Explanation: This schema is automatically inferred from the output of StatisticsGen. You
should be able to see 4 FLOAT features and 1 INT feature.
Export the schema for future use
We need to review and refine the generated schema. The reviewed schema needs
to be persisted to be used in subsequent pipelines for ML model training. In
other words, you might want to add the schema file to your version control
system for actual use cases. In this tutorial, we will just copy the schema
to a predefined filesystem path for simplicity.
End of explanation
"""
print(f'Schema at {SCHEMA_PATH}-----')
!cat {SCHEMA_PATH}/*
"""
Explanation: The schema file uses
Protocol Buffer text format
and an instance of
TensorFlow Metadata Schema proto.
End of explanation
"""
_trainer_module_file = 'penguin_trainer.py'
%%writefile {_trainer_module_file}
from typing import List
from absl import logging
import tensorflow as tf
from tensorflow import keras
from tensorflow_transform.tf_metadata import schema_utils
from tfx import v1 as tfx
from tfx_bsl.public import tfxio
from tensorflow_metadata.proto.v0 import schema_pb2
# We don't need to specify _FEATURE_KEYS and _FEATURE_SPEC any more.
# Those information can be read from the given schema file.
_LABEL_KEY = 'species'
_TRAIN_BATCH_SIZE = 20
_EVAL_BATCH_SIZE = 10
def _input_fn(file_pattern: List[str],
data_accessor: tfx.components.DataAccessor,
schema: schema_pb2.Schema,
batch_size: int = 200) -> tf.data.Dataset:
"""Generates features and label for training.
Args:
file_pattern: List of paths or patterns of input tfrecord files.
data_accessor: DataAccessor for converting input to RecordBatch.
schema: schema of the input data.
batch_size: representing the number of consecutive elements of returned
dataset to combine in a single batch
Returns:
A dataset that contains (features, indices) tuple where features is a
dictionary of Tensors, and indices is a single Tensor of label indices.
"""
return data_accessor.tf_dataset_factory(
file_pattern,
tfxio.TensorFlowDatasetOptions(
batch_size=batch_size, label_key=_LABEL_KEY),
schema=schema).repeat()
def _build_keras_model(schema: schema_pb2.Schema) -> tf.keras.Model:
"""Creates a DNN Keras model for classifying penguin data.
Returns:
A Keras Model.
"""
# The model below is built with Functional API, please refer to
# https://www.tensorflow.org/guide/keras/overview for all API options.
# ++ Changed code: Uses all features in the schema except the label.
feature_keys = [f.name for f in schema.feature if f.name != _LABEL_KEY]
inputs = [keras.layers.Input(shape=(1,), name=f) for f in feature_keys]
# ++ End of the changed code.
d = keras.layers.concatenate(inputs)
for _ in range(2):
d = keras.layers.Dense(8, activation='relu')(d)
outputs = keras.layers.Dense(3)(d)
model = keras.Model(inputs=inputs, outputs=outputs)
model.compile(
optimizer=keras.optimizers.Adam(1e-2),
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=[keras.metrics.SparseCategoricalAccuracy()])
model.summary(print_fn=logging.info)
return model
# TFX Trainer will call this function.
def run_fn(fn_args: tfx.components.FnArgs):
"""Train the model based on given args.
Args:
fn_args: Holds args used to train the model as name/value pairs.
"""
# ++ Changed code: Reads in schema file passed to the Trainer component.
schema = tfx.utils.parse_pbtxt_file(fn_args.schema_path, schema_pb2.Schema())
# ++ End of the changed code.
train_dataset = _input_fn(
fn_args.train_files,
fn_args.data_accessor,
schema,
batch_size=_TRAIN_BATCH_SIZE)
eval_dataset = _input_fn(
fn_args.eval_files,
fn_args.data_accessor,
schema,
batch_size=_EVAL_BATCH_SIZE)
model = _build_keras_model(schema)
model.fit(
train_dataset,
steps_per_epoch=fn_args.train_steps,
validation_data=eval_dataset,
validation_steps=fn_args.eval_steps)
# The result of the training should be saved in `fn_args.serving_model_dir`
# directory.
model.save(fn_args.serving_model_dir, save_format='tf')
"""
Explanation: You should be sure to review and possibly edit the schema definition as
needed. In this tutorial, we will just use the generated schema unchanged.
Validate input examples and train an ML model
We will go back to the pipeline that we created in
Simple TFX Pipeline Tutorial,
to train an ML model and use the generated schema for writing the model
training code.
We will also add an
ExampleValidator
component which will look for anomalies and missing values in the incoming
dataset with respect to the schema.
Write model training code
We need to write the model code as we did in
Simple TFX Pipeline Tutorial.
The model itself is the same as in the previous tutorial, but this time we will
use the schema generated from the previous pipeline instead of specifying
features manually. Most of the code was not changed. The only difference is
that we do not need to specify the names and types of features in this file.
Instead, we read them from the schema file.
End of explanation
"""
def _create_pipeline(pipeline_name: str, pipeline_root: str, data_root: str,
schema_path: str, module_file: str, serving_model_dir: str,
metadata_path: str) -> tfx.dsl.Pipeline:
"""Creates a pipeline using predefined schema with TFX."""
# Brings data into the pipeline.
example_gen = tfx.components.CsvExampleGen(input_base=data_root)
# Computes statistics over data for visualization and example validation.
statistics_gen = tfx.components.StatisticsGen(
examples=example_gen.outputs['examples'])
# NEW: Import the schema.
schema_importer = tfx.dsl.Importer(
source_uri=schema_path,
artifact_type=tfx.types.standard_artifacts.Schema).with_id(
'schema_importer')
# NEW: Performs anomaly detection based on statistics and data schema.
# TODO: Your code goes here
# Uses user-provided Python function that trains a model.
trainer = tfx.components.Trainer(
module_file=module_file,
examples=example_gen.outputs['examples'],
schema=schema_importer.outputs['result'], # Pass the imported schema.
train_args=tfx.proto.TrainArgs(num_steps=100),
eval_args=tfx.proto.EvalArgs(num_steps=5))
# Pushes the model to a filesystem destination.
pusher = tfx.components.Pusher(
model=trainer.outputs['model'],
push_destination=tfx.proto.PushDestination(
filesystem=tfx.proto.PushDestination.Filesystem(
base_directory=serving_model_dir)))
components = [
example_gen,
# NEW: Following three components were added to the pipeline.
statistics_gen,
schema_importer,
example_validator,
trainer,
pusher,
]
return tfx.dsl.Pipeline(
pipeline_name=pipeline_name,
pipeline_root=pipeline_root,
metadata_connection_config=tfx.orchestration.metadata
.sqlite_metadata_connection_config(metadata_path),
components=components)
"""
Explanation: Now you have completed all preparation steps to build a TFX pipeline for
model training.
Write a pipeline definition
We will add two new components, Importer and ExampleValidator. Importer
brings an external file into the TFX pipeline. In this case, it is a file
containing schema definition. ExampleValidator will examine
the input data and validate whether all input data conforms the data schema
we provided.
End of explanation
"""
tfx.orchestration.LocalDagRunner().run(
_create_pipeline(
pipeline_name=PIPELINE_NAME,
pipeline_root=PIPELINE_ROOT,
data_root=DATA_ROOT,
schema_path=SCHEMA_PATH,
module_file=_trainer_module_file,
serving_model_dir=SERVING_MODEL_DIR,
metadata_path=METADATA_PATH))
"""
Explanation: Run the pipeline
End of explanation
"""
metadata_connection_config = tfx.orchestration.metadata.sqlite_metadata_connection_config(
METADATA_PATH)
with Metadata(metadata_connection_config) as metadata_handler:
ev_output = get_latest_artifacts(metadata_handler, PIPELINE_NAME,
'ExampleValidator')
anomalies_artifacts = ev_output[standard_component_specs.ANOMALIES_KEY]
"""
Explanation: You should see "INFO:absl:Component Pusher is finished." if the pipeline
finished successfully.
Examine outputs of the pipeline
We have trained the classification model for penguins, and we also have
validated the input examples in the ExampleValidator component. We can analyze
the output from ExampleValidator as we did with the previous pipeline.
End of explanation
"""
visualize_artifacts(anomalies_artifacts)
"""
Explanation: ExampleAnomalies from the ExampleValidator can be visualized as well.
End of explanation
"""
|
ireapps/pycar | completed/analyzing_data_with_pandas_notebook_complete.ipynb | mit | import pandas as pd
"""
Explanation: Analyzing data with Pandas
First a little setup. Importing the pandas library as pd
End of explanation
"""
%matplotlib inline
pd.set_option("max_columns", 150)
pd.set_option('max_colwidth',40)
pd.options.display.float_format = '{:,.2f}'.format
"""
Explanation: Set some helpful display options. Uncomment the boilerplate in this cell.
End of explanation
"""
master = pd.read_csv('../project3/data/2017/Master.csv') # File with player details
salary = pd.read_csv('../project3/data/2017/Salaries.csv') #File with baseball players' salaries
"""
Explanation: open and read in the Master.csv and Salaries.csv tables in the data/2017/ directory
End of explanation
"""
master.info()
salary.info()
"""
Explanation: check to see what type each object is with print(table_name). You can also use the .info() method to explore the data's structure.
End of explanation
"""
master.head()
salary.head()
"""
Explanation: print out sample data for each table with table.head()<br>
see additional options by pressing tab after you type the head() method
End of explanation
"""
joined = pd.merge(left=master, right=salary, how="left")
"""
Explanation: Now we join the two csv's using pd.merge.<br>
We want to keep all the players' names in the master data set<br>
even if their salary is missing from the salary data set.<br>
We can always filter the NaN values out later
End of explanation
"""
joined.info()
"""
Explanation: see what columns the joined table contains
End of explanation
"""
len(master) - len(joined)
"""
Explanation: check if all the players have a salary assigned. The easiest way is to deduct the length of the joined table from the master table
End of explanation
"""
joined["playerID"].value_counts()
"""
Explanation: Something went wrong. There are now more players in the joined data set than in the master data set.<br>
Some entries probably got duplicated<br>
Let's check if we have duplicate playerIDs by using .value_counts()
End of explanation
"""
joined[joined["playerID"] == "moyerja01"]
"""
Explanation: Yep, we do.<br>
Let's filter out an arbitrary player to see why there is duplication
End of explanation
"""
joined = joined.sort_values(["playerID","yearID"])
"""
Explanation: As we can see, there are now salaries in the dataset for each year of the player's career.<br>
We only want to have the most recent salary though.<br>
We therefore need to 'deduplicate' the data set.
But first, let's make sure we get the newest year. We can do this by sorting the data on the newest entry
End of explanation
"""
deduplicated = joined.drop_duplicates("playerID", keep="last")
"""
Explanation: Now we deduplicate
End of explanation
"""
len(master) - len(deduplicated)
"""
Explanation: And let's do the check again
End of explanation
"""
deduplicated["salary"].describe()
"""
Explanation: Now we can get into the interesting part: analysis!
What is the average (mean, median, max, min) salary?
End of explanation
"""
max_salary = deduplicated["salary"].max()
deduplicated[deduplicated["salary"] == max_salary]
"""
Explanation: Who makes the most money?
End of explanation
"""
deduplicated.hist("salary")
"""
Explanation: What are the most common baseball players' salaries?
Draw a histogram. <br>
End of explanation
"""
deduplicated.hist("yearID", bins=30)
"""
Explanation: We can do the same with the column yearID to see how recent our data is.<br>
We have 30 years in our data set, so we need to do some minor tweaking
End of explanation
"""
top_10_p = deduplicated["salary"].quantile(q=0.9)
top_10_p
"""
Explanation: Who are the top 10% highest-paid players?
calculate the 90 percentile cutoff
End of explanation
"""
best_paid = deduplicated[deduplicated["salary"] >= top_10_p]
best_paid
"""
Explanation: filter out players that make more money than the cutoff
End of explanation
"""
best_paid_top_10 = best_paid.nlargest(10, "salary")
best_paid_top_10
"""
Explanation: use the nlargest to see the top 10 best paid players
End of explanation
"""
best_paid_top_10.plot(kind="barh", x="nameLast", y="salary")
"""
Explanation: draw a chart
End of explanation
"""
best_paid.to_csv('highest-paid.csv', index=False)
"""
Explanation: save the data
End of explanation
"""
|
abatula/MachineLearningIntro | LinearRegression_Tutorial.ipynb | gpl-2.0 | import numpy as np
import matplotlib.pyplot as plt
from sklearn import linear_model, datasets # Import the linear regression function and dataset from scikit-learn
from sklearn.model_selection import train_test_split, KFold
from sklearn.metrics import mean_squared_error, r2_score
# Print figures in the notebook
%matplotlib inline
"""
Explanation: Linear Regression Tutorial
Some problems don't have discrete (categorical) labels (e.g. color, plant species), but rather a continuous range of numbers (e.g. length, price). For these types of problems, regression is usually a good choice. Rather than predicting a categorical label for each example, it fits a continuous line (or plane, or curve) to the data in order to give a prediction as a number.
If you've ever found a "line of best fit" using Excel, you've already used regression!
Setup
Tell matplotlib to print figures in the notebook. Then import numpy (for numerical data), matplotlib.pyplot (for plotting figures), linear_model (for the scikit-learn linear regression algorithm), datasets (to load the diabetes dataset from scikit-learn), metrics (for the mean squared error and $R^2$ score), and model_selection (train_test_split and KFold, to create training, testing, and validation sets).
End of explanation
"""
# Import some data to play with
diabetes = datasets.load_diabetes()
# Store the labels (y), features (X), and feature names
y = diabetes.target # Labels are stored in y as numbers
X = diabetes.data
featureNames = diabetes.feature_names
print(diabetes.DESCR)
"""
Explanation: Import the dataset
Import the dataset and store it to a variable called diabetes. This dataset is similar to a python dictionary, with the keys: ['DESCR', 'target', 'data', 'feature_names']
The data features are stored in diabetes.data, where each row is an example from a single patient, and each column is a single feature. The feature names are stored in diabetes.feature_names. Target values are stored in diabetes.target.
Below, we load the labels into y, the data into X, and the names of the features into featureNames. We also print the description of the dataset.
End of explanation
"""
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)
"""
Explanation: Create Training and Testing Sets
In order to see how well our classifier works, we need to divide our data into training and testing sets
End of explanation
"""
bmi_ind = 2
plt.scatter(X_train[:,bmi_ind], y_train)
plt.ylabel('Disease Progression')
plt.xlabel('Body Mass Index')
plt.show()
"""
Explanation: Visualize The Data
There are too many features to visualize the whole training set, but we can plot a single feature (e.g. body mass index) against the quantified disease progression.
Remember that the features have been normalized to center them around 0 and scaled by the standard deviation. So the values shown aren't the actual BMI levels, but a standardized version of them.
End of explanation
"""
regr = linear_model.LinearRegression()
x_train = X_train[:,bmi_ind].reshape(-1, 1) # regression expects a (#examples,#features) array shape
regr.fit(x_train, y_train)
"""
Explanation: Train A Toy Model
Next, we train a linear regression algorithm on our data.
In scikit learn, we use linear_model.LinearRegression() to create our model. Models are trained using a fit() method. For linear regression, fit() expects two arguments: the training examples X_train and the corresponding labels y_train.
To start, we will train a toy model using only the body mass index feature.
End of explanation
"""
bmi = [[-0.05],[0.1]]
predictions = regr.predict(bmi)
print(predictions)
"""
Explanation: Making Predictions
To make predictions on new data we use the predict() method. It expects a single input: an array-like object containing the features for the examples we want to predict.
Here, we get the predictions for two body mass index values: -.05 and 0.1.
End of explanation
"""
values = np.arange(-0.1, 0.15, 0.01).reshape(-1, 1) # Reshape so each feature is a separate row
plt.scatter(x_train, y_train)
plt.plot(values, regr.predict(values), c='r')
plt.ylabel('Disease Progression')
plt.xlabel('Body Mass Index')
plt.title('Regression Line on Training Data')
plt.show()
"""
Explanation: Visualize the Toy Model
Here we plot the linear regression line of our trained model on top of the data. We do this by predicting the output of the model on a range of values from -0.1 to 0.15. These predictions are plotted as a line on top of the training data.
This can't tell us how well it will perform on new, unseen, data, but it can show us how well the line fits the training data.
End of explanation
"""
x_test = X_test[:,bmi_ind][np.newaxis].T # regression expects a (#examples,#features) array shape
predictions = regr.predict(x_test)
plt.scatter(x_test, y_test)
plt.plot(x_test, predictions, c='r')
plt.ylabel('Disease Progression')
plt.xlabel('Body Mass Index')
plt.title('Regression Line on Test Data')
"""
Explanation: Test the Toy Model
Next, we will test the ability of our model to predict the disease progression in our test set, using only the body mass index.
First, we get our predictions for the test data, and plot the predicted regression line on top of the test data. Since we trained our model using only one feature, we need to get only that feature from the test set.
End of explanation
"""
mse = mean_squared_error(y_test, predictions)
print('The MSE is ' + '%.2f' % mse)
"""
Explanation: Evaluate the Toy Model
Next, we evaluate how well our model worked on the test dataset. Unlike with discrete classifiers (e.g. KNN, SVM), the number of examples it got "correct" isn't meaningful here. We certainly care if the predicted value is off by 100, but do we care as much if it is off by 1? What about 0.01?
There are many ways to evaluate a regression model, but one popular way is the mean-squared error, or MSE. As the name implies, you find the error for each example (the distance between the point and the predicted line), square each of them, and then take the mean (average) of those squared errors.
Scikit-learn has a function that does this for you easily.
End of explanation
"""
r2score = r2_score(y_test, predictions)
print('The R^2 score is ' + '%.2f' % r2score)
"""
Explanation: The MSE isn't as intuitive as the accuracy of a discrete classifier, but it is highly useful for comparing the effectiveness of different models. Another option is to look at the $R^2$ score, which you may already be familiar with if you've ever fit a line to data in Excel. A value of 1.0 is a perfect predictor, while 0.0 means there is no correlation between the input and output of the regression model.
End of explanation
"""
regr = linear_model.LinearRegression()
regr.fit(X_train, y_train)
predictions = regr.predict(X_test)
mse = mean_squared_error(y_test, predictions)
print('The MSE is ' + '%.2f' % mse)
r2score = r2_score(y_test, predictions)
print('The R^2 score is ' + '%.2f' % r2score)
"""
Explanation: Train A Model on All Features
Next we will train a model on all of the available features and use it to predict the progression of diabetes after a year. We can then see how this compares to using only a single feature.
End of explanation
"""
# Create our regression models
regr_1 = linear_model.LinearRegression()
regr_all = linear_model.LinearRegression()
# Create arrays to store the MSE and R^2 score
mse_1 = []
r2score_1 = []
mse_all = []
r2score_all = []
# Loop through 5 folds
kf = KFold(n_splits=5)
for trainInd, valInd in kf.split(X_train):
X_tr = X_train[trainInd,:]
y_tr = y_train[trainInd]
X_val = X_train[valInd,:]
y_val = y_train[valInd]
# Train our models
regr_1.fit(X_tr[:,bmi_ind].reshape(-1, 1), y_tr) # Train on only one feature
regr_all.fit(X_tr, y_tr) # Train on all features
# Make our predictions
pred_1 = regr_1.predict(X_val[:,bmi_ind].reshape(-1, 1))
pred_all = regr_all.predict(X_val)
# Calculate the MSE
mse_1.append(mean_squared_error(y_val, pred_1))
mse_all.append(mean_squared_error(y_val, pred_all))
# Calculate the R^2 score
r2score_1.append(r2_score(y_val, pred_1))
r2score_all.append(r2_score(y_val, pred_all))
"""
Explanation: Using Crossvalidation
To properly compare these two models, we need to use crossvalidation to select the best model. We can then get our final result using our test data.
First we create our two linear regression models, then we divide our training data into folds. We loop through the sets of training and validation folds. Each time, we train each model on the training data and evaluate on the validation data. We store the MSE and $R^2$ score of each classifier on each fold so we can look at them later.
End of explanation
"""
print('One Feature:\nMSE: ' + '%.2f' % np.mean(mse_1))
print('R^2: ' + '%.2f' % np.mean(r2score_1))
print('\nAll Features:\nMSE: ' + '%.2f' % np.mean(mse_all))
print('R^2: ' + '%.2f' % np.mean(r2score_all))
"""
Explanation: Select a Model
To select a model, we look at the average $R^2$ score and MSE across all folds.
End of explanation
"""
regr_all.fit(X_train, y_train)
predictions = regr_all.predict(X_test)
mse = mean_squared_error(y_test, predictions)
r2score = r2_score(y_test, predictions)
print('The final MSE is ' + '%.2f' % mse)
print('The final R^2 score is ' + '%.2f' % r2score)
"""
Explanation: Final Evaluation
The model using all features has a higher $R^2$ score and lower MSE, so we select it as our best model. Now we can train it on the full training set, evaluate it on our test set, and get our final MSE and $R^2$ score values.
End of explanation
"""
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
"""
Explanation: Sidenote: Randomness and Results
Every time you run this notebook, you will get slightly different results. Why? Because data is randomly divided among the training/testing/validation data sets. Running the code again will create a different division of the data, and will make the results slightly different. However, the overall outcome should remain consistent and have approximately the same values. If you have drastically different results when running an analysis multiple times, it suggests a problem with your model or that you need more data.
If it's important that you get the exact same results every time you run the code, you can specify a random state in the random_state argument of train_test_split() and KFold.
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.15/_downloads/plot_visualize_evoked.ipynb | bsd-3-clause | import os.path as op
import numpy as np
import matplotlib.pyplot as plt
import mne
# sphinx_gallery_thumbnail_number = 9
"""
Explanation: Visualize Evoked data
In this tutorial we focus on plotting functions of :class:mne.Evoked.
End of explanation
"""
data_path = mne.datasets.sample.data_path()
fname = op.join(data_path, 'MEG', 'sample', 'sample_audvis-ave.fif')
evoked = mne.read_evokeds(fname, baseline=(None, 0), proj=True)
print(evoked)
"""
Explanation: First we read the evoked object from a file. Check out
tut_epoching_and_averaging to get to this stage from raw data.
End of explanation
"""
evoked_l_aud = evoked[0]
evoked_r_aud = evoked[1]
evoked_l_vis = evoked[2]
evoked_r_vis = evoked[3]
"""
Explanation: Notice that evoked is a list of :class:evoked <mne.Evoked> instances.
You can read only one of the categories by passing the argument condition
to :func:mne.read_evokeds. To make things more simple for this tutorial, we
read each instance to a variable.
End of explanation
"""
fig = evoked_l_aud.plot(exclude=())
"""
Explanation: Let's start with a simple one. We plot event related potentials / fields
(ERP/ERF). The bad channels are not plotted by default. Here we explicitly
set the exclude parameter to show the bad channels in red.
End of explanation
"""
fig.tight_layout()
"""
Explanation: All plotting functions of MNE-python return a handle to the figure instance.
When we have the handle, we can customise the plots to our liking. For
example, we can get rid of the empty space with a simple function call.
End of explanation
"""
picks = mne.pick_types(evoked_l_aud.info, meg=True, eeg=False, eog=False)
evoked_l_aud.plot(spatial_colors=True, gfp=True, picks=picks)
"""
Explanation: Now let's make it a bit fancier and only use MEG channels. Many of the
MNE-functions include a picks parameter to include a selection of
channels. picks is simply a list of channel indices that you can easily
construct with :func:mne.pick_types. See also :func:mne.pick_channels and
:func:mne.pick_channels_regexp.
Using spatial_colors=True, the individual channel lines are color coded
to show the sensor positions - specifically, the x, y, and z locations of
the sensors are transformed into R, G and B values.
End of explanation
"""
evoked_l_aud.plot_topomap()
"""
Explanation: Notice the legend on the left. The colors would suggest that there may be two
separate sources for the signals. This wasn't obvious from the first figure.
Try painting over the slopes with the left mouse button. It should open a new window
with topomaps (scalp plots) of the average over the painted area. There is
also a function for drawing topomaps separately.
End of explanation
"""
times = np.arange(0.05, 0.151, 0.05)
evoked_r_aud.plot_topomap(times=times, ch_type='mag')
"""
Explanation: By default the topomaps are drawn from evenly spread out points of time over
the evoked data. We can also define the times ourselves.
End of explanation
"""
evoked_r_aud.plot_topomap(times='peaks', ch_type='mag')
"""
Explanation: Or we can automatically select the peaks.
End of explanation
"""
fig, ax = plt.subplots(1, 5, figsize=(8, 2))
kwargs = dict(times=0.1, show=False, vmin=-300, vmax=300)
evoked_l_aud.plot_topomap(axes=ax[0], colorbar=True, **kwargs)
evoked_r_aud.plot_topomap(axes=ax[1], colorbar=False, **kwargs)
evoked_l_vis.plot_topomap(axes=ax[2], colorbar=False, **kwargs)
evoked_r_vis.plot_topomap(axes=ax[3], colorbar=False, **kwargs)
for ax, title in zip(ax[:4], ['Aud/L', 'Aud/R', 'Vis/L', 'Vis/R']):
ax.set_title(title)
plt.show()
"""
Explanation: You can take a look at the documentation of :func:mne.Evoked.plot_topomap
or simply write evoked_r_aud.plot_topomap? in your python console to
see the different parameters you can pass to this function. Most of the
plotting functions also accept axes parameter. With that, you can
customise your plots even further. First we create a set of matplotlib
axes in a single figure and plot all of our evoked categories next to each
other.
End of explanation
"""
ts_args = dict(gfp=True)
topomap_args = dict(sensors=False)
evoked_r_aud.plot_joint(title='right auditory', times=[.09, .20],
ts_args=ts_args, topomap_args=topomap_args)
"""
Explanation: Notice that we created five axes, but had only four categories. The fifth
axis is used for drawing the colorbar. You must provide room for it when you
create this kind of custom plot, or turn the colorbar off with
colorbar=False. That's what the warnings are trying to tell you. Also, we
used show=False for the first three function calls. This prevents the
showing of the figure prematurely. The behavior depends on the mode you are
using for your python session. See http://matplotlib.org/users/shell.html for
more information.
We can combine the two kinds of plots in one figure using the
:func:mne.Evoked.plot_joint method of Evoked objects. Called as-is
(evoked.plot_joint()), this function should give an informative display
of spatio-temporal dynamics.
You can directly style the time series part and the topomap part of the plot
using the topomap_args and ts_args parameters. You can pass key-value
pairs as a python dictionary. These are then passed as parameters to the
topomaps (:func:mne.Evoked.plot_topomap) and time series
(:func:mne.Evoked.plot) of the joint plot.
For an example of specific styling using these topomap_args and
ts_args arguments, here, topomaps at specific time points
(90 and 200 ms) are shown, sensors are not plotted (via an argument
forwarded to plot_topomap), and the Global Field Power is shown:
End of explanation
"""
conditions = ["Left Auditory", "Right Auditory", "Left visual", "Right visual"]
evoked_dict = dict()
for condition in conditions:
evoked_dict[condition.replace(" ", "/")] = mne.read_evokeds(
fname, baseline=(None, 0), proj=True, condition=condition)
print(evoked_dict)
colors = dict(Left="Crimson", Right="CornFlowerBlue")
linestyles = dict(Auditory='-', visual='--')
pick = evoked_dict["Left/Auditory"].ch_names.index('MEG 1811')
mne.viz.plot_compare_evokeds(evoked_dict, picks=pick, colors=colors,
linestyles=linestyles)
"""
Explanation: Sometimes, you may want to compare two or more conditions at a selection of
sensors, or e.g. for the Global Field Power. For this, you can use the
function :func:mne.viz.plot_compare_evokeds. The easiest way is to create
a Python dictionary, where the keys are condition names and the values are
:class:mne.Evoked objects. If you provide lists of :class:mne.Evoked
objects, such as those for multiple subjects, the grand average is plotted,
along with a confidence interval band - this can be used to contrast
conditions for a whole experiment.
First, we load in the evoked objects into a dictionary, setting the keys to
'/'-separated tags (as we can do with event_ids for epochs). Then, we plot
with :func:mne.viz.plot_compare_evokeds.
The plot is styled with dictionary arguments, again using "/"-separated tags.
We plot a MEG channel with a strong auditory response.
End of explanation
"""
evoked_r_aud.plot_image(picks=picks)
"""
Explanation: We can also plot the activations as images. The time runs along the x-axis
and the channels along the y-axis. The amplitudes are color coded so that
values from negative to positive translate to a shift from blue to
red. White means zero amplitude. You can use the cmap parameter to define
the color map yourself. The accepted values include all matplotlib colormaps.
End of explanation
"""
title = 'MNE sample data\n(condition : %s)'
evoked_l_aud.plot_topo(title=title % evoked_l_aud.comment,
background_color='k', color=['white'])
mne.viz.plot_evoked_topo(evoked, title=title % 'Left/Right Auditory/Visual',
background_color='w')
"""
Explanation: Finally we plot the sensor data as a topographical view. In the simple case
we plot only left auditory responses, and then we plot them all in the same
figure for comparison. Click on the individual plots to open a larger view of each one.
End of explanation
"""
subjects_dir = data_path + '/subjects'
trans_fname = data_path + '/MEG/sample/sample_audvis_raw-trans.fif'
maps = mne.make_field_map(evoked_l_aud, trans=trans_fname, subject='sample',
subjects_dir=subjects_dir, n_jobs=1)
# explore several points in time
field_map = evoked_l_aud.plot_field(maps, time=.1)
"""
Explanation: Visualizing field lines in 3D
We now compute the field maps to project MEG and EEG data to MEG helmet
and scalp surface.
To do this we'll need coregistration information. See
tut_forward for more details.
Here we just illustrate usage.
End of explanation
"""
|
ueapy/ueapy.github.io | content/notebooks/2020-11-05-ways-of-python.ipynb | mit | from pygments import highlight
from pygments.lexers import PythonLexer
from pygments.formatters import HtmlFormatter
import IPython
"""
Explanation: Unlike other programs that have a single programming interface (MATLAB) or a dominant interface du jour (R with RStudio), Python has a whole ecosystem of programs for writing it. This can be confusing at first: with so much choice, what should you use for your project?
This presentation will cover some of the most popular Python interfaces, their pros and cons, and some situations in which one may be preferable to another. We will also discuss some operational details of the Anaconda package management system.
You can see a recording of this presentation here
End of explanation
"""
# Demo some jupyter stuff here
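# (Hedged illustration, standing in for the live demo.) Two of the notebook features noted
# in the pros list below: inline figures and "magic"/shell commands for talking to the OS.
%matplotlib inline
import matplotlib.pyplot as plt
plt.plot([0, 1, 2, 3], [0, 1, 4, 9])    # the figure renders inline, directly under the cell
plt.show()
!echo "shell commands run from a cell"  # '!' passes the line to the operating system shell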
"""
Explanation: 1. Jupyter notebooks
These are our primary teaching tool.
Pros
Web based interface, easy to maintain
Inline figures and markdown cells make great workbooks
Encourages self documenting code
"magic" functions to interact with operating system
Can share interactive notebooks online, e.g. via Binder
Cons
Harder to automate the scripts
Makes a mess in git
Requires a GUI to run/efficiently examine the notebooks
Also check out JupyterLab, the new standard interface for Jupyter. It is much more powerful and integrated, and any project written in notebooks can be continued in JupyterLab with no changes needed
End of explanation
"""
# Demo an IDE, including code hints and autocompletion
"""
Explanation: 2. Integrated Development Environments (IDEs)
These are full featured tools for code development. Spyder is very popular among scientists. Especially if you are coming from a MATLAB or RStudio background, the appearance of this IDE is very familiar and comforting. The whole thing is itself made in Python, which is pretty cool
PyCharm is like Spyder on steroids
Pros
See variables, file system, command line and code at a glance
Loads of plugins (especially Pycharm)
Smart autocompletion
Code highlighting for e.g. unused imports, missing whitespace
Can handle outside programs like git
Cons
Heavy on OS resources, especially RAM
Can be slow to start
End of explanation
"""
# Python command line demo
"""
Explanation: 3. Python fresh from the command line
Just open up a Python prompt and start coding
This is a fairly rare use case unless you are doing something very short. However, it's good to remember that this is available. On pretty much any unix system (Linux, Mac etc) you can get straight to Python from the command line. This can be useful if you're logged in to a remote server and need to execute some Python in a hurry.
If you're writing more than a couple of lines, however, you'll want to write some .py files and run them
End of explanation
"""
# Show content of a python script with syntax highlighting. Shamelessly copied from jgosmann's answer on
# stackoverflow.com/questions/19197931/how-to-show-as-output-cell-the-contents-of-a-py-file-with-syntax-highlighting
with open('halloween_sysargv.py') as f:
code = f.read()
formatter = HtmlFormatter()
IPython.display.HTML('<style type="text/css">{}</style>{}'.format(
formatter.get_style_defs('.highlight'), highlight(code, PythonLexer(), HtmlFormatter())))
"""
Explanation: 4. Python in files
You can write Python in any text editor program. On UNIX systems vim and emacs remain popular after several decades. Atom is a more user-friendly GUI-based option. Windows users can try Notepad++ for Python support
Pros
Simple and lightweight
Always there for you (especially vim)
Super portable scripts
Easy to automate with tools like cron
Cons
Limited autocompletion and error checking
No easy way to check workspace (variables, path etc)
Working with figures can be difficult (need to save to file and display)
Providing inputs to Python scripts run from the command line
There are different ways to turn your Python program (.py) into a command-line tool. We will demonstrate two of these options below.
sys.argv
The sys module is part of the standard Python library and contains functions to access and modify variables of the Python runtime environment. In this tutorial, we're only demonstrating one of its attributes: sys.argv.
Let's look at the contents of a python script called halloween_sysargv.py below. It is a very simple demonstration of how to provide numerical, string (for example filenames!) or list inputs to a python program.
End of explanation
"""
! python3 halloween_sysargv.py 13 pumpkin cat,bat,spider # the exlamation mark tells Jupyter we're running a shell command.
"""
Explanation: Now we can run this script called halloween_sysargv.py in the shell as follows:
End of explanation
"""
# Show content of a python script with syntax highlighting. Shamelessly copied from jgosmann's answer on
# stackoverflow.com/questions/19197931/how-to-show-as-output-cell-the-contents-of-a-py-file-with-syntax-highlighting
with open('halloween_argparse.py') as f:
code = f.read()
IPython.display.HTML('<style type="text/css">{}</style>{}'.format(
formatter.get_style_defs('.highlight'), highlight(code, PythonLexer(), HtmlFormatter())))
"""
Explanation: So, everything after python3 halloween_sysargv.py ends up as a string in a list returned by sys.argv. The first element of sys.argv is always the name of the program that is being run.
argparse
With argparse, you can easily supply your Python program with input from the command line in a more user-friendly way. Inputs are supplied to your python program in the following format:
python myprogram.py -a avalue -b bvalue --option-c cvalue -f
The predecessor of argparse is optparse.
Content of halloween_argparse.py:
End of explanation
"""
! python3 halloween_argparse.py -n 13 --animals=cat,bat,spider,wolf -c # ! running in shell
"""
Explanation: In the terminal, we can provide inputs using the flags we specified:
End of explanation
"""
! python halloween_argparse.py --help # ! running in shell
"""
Explanation: One of the advantages of argparse is that a help function is automatically generated from the "help" argument you supply when adding options:
End of explanation
"""
IPython.display.HTML(html)  # render the project-workflow flow chart; the `html` string is built in a cell not shown here
"""
Explanation: See the documentation and tutorial to find out what else you can do with argparse.
5. Python on the HPC
Depending on your research, your data and your computer, you may want to consider running some or most of your analyses and experiments on a High Performance Computer (HPC). While the HPC is running your Python programs, your own machine is not burdened, so you can freely use it for other tasks or shut it off.
UEA has its own HPC for research: the new ADA Cluster.
This provides me with an excellent excuse to insert an image of 19th century visionary Ada Lovelace.
For more introduction on high performance computing and ADA, please see the UEA Research and Specialist Computing Support help pages. The HPC Team offers to meet with all new users to help you get started.
You can use Conda to manage Python environments on ADA. Information on how to build and activate conda python environments on ADA can be found here.
On a HPC, you can either work interactively or submit batch jobs.
When submitting batch jobs (after code development and testing locally or in an interactive session), only the fourth way of Python above is available to you. Providing inputs from the command line will come in handy when submitting (array) jobs. Note that in batch jobs, you need to activate conda environments with source activate myenv instead of the otherwise recommended conda activate myenv.
In an interactive session, the recommended ways to work with Python on ADA are options 3 and 4 from above (from the UEA HPC team: "Jupyter Notebooks and IDEs rely on graphical interfaces that have high overheads and therefore generally don't work well on a cluster environment"). The file editors available on ADA are nano, nedit, emacs, Vi and gvim.
Anaconda
If you are not already familiar with Anaconda, it is a distribution of Python geared toward data scientists that aims to make it quick and easy to manage multiple projects with differing dependencies.
With Anaconda you can maintain separate environments for all your projects.
Why would you want to do this? Different projects require different packages, and not all of these packages are able to interoperate. Particularly in science, we often need to use legacy software dependent on older modules. If you want to work on one project built in Python 2.7 and your new stuff in 3.8, you'll need to keep them separate on your system so they don't interfere with each other. Anaconda is a very user-friendly way to achieve this.
The key to Anaconda is environments. These are collections of Python modules, non-Python programs (like Jupyter notebooks, GDAL or Spyder) and a specific version of Python itself. There is no limit to the number of environments you can have. The only requirement is that each one has a unique name on your system.
Here's an example environment from our PPD Python course
yml
name: ppd_python
channels:
- defaults
- conda-forge
dependencies:
- python=3.8
- ipython
- jupyter
- numpy
- matplotlib
- pandas
- cartopy
- xarray
- netcdf4
- seaborn
- spyder
- tqdm
- scipy
- iris
- plotly
- cftime
The environment is created from a text file. You need to specify a name, the sources (channels) and the modules (dependencies) you need. In this case we specified Python=3.8, jupyter to run notebooks and a bunch of modules including numpy, matplotlib and scipy. This should be all anyone needs to replicate the same environment on their machine and run the scripts successfully. If you are sharing code with others, always include an environment file so it runs correctly.
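A minimal workflow for using such a file looks like this (the file and environment names are just the ones from the example above):
bash
conda env create -f environment.yml   # build the environment described in the file
conda activate ppd_python             # switch into it (use 'source activate' in HPC batch jobs)
conda env list                        # list environments and show which one is active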
We will do a more detailed demo of package management with Anaconda in the future
How I start a Python project
*Other Hosting Services Are Available
Reading
If you want a good science environment file to start from, try the one from ppd_python. You'll find some handy conda instructions in the repo description. Click to download the zip; you want the environment.yml file. The environment is based on Python 3.8, which will be supported until October 2024
A solid intro to git by Software Carpentry
A cool trick with conda for bash users by Leo Uieda. N.B. conda activate is preferred to source activate these days.
Sources
Python on the ADA HPC
Images
Conda image: https://www.imperial.ac.uk/admin-services/ict/self-service/research-support/rcs/support/applications/conda/
Ada Lovelace: https://blogs.scientificamerican.com/observations/ada-lovelace-day-honors-the-first-computer-programmer/
Flow chart made with graphviz
End of explanation
"""
|
marcelomiky/PythonCodes | codes/back_to_basics/conceitos/Lists.ipynb | mit | list1 = ['apple', 'banana', 'orange']
list1
list2 = [7, 11, 13, 17, 19]
list2
list3 = ['text', 23, 66, -1, [0, 1]]
list3
empty = []
empty
list1[0]
list1[-1]
list1[-2]
'orange' in list1
'pineapple' in list1
0 in list3
0 in list3[-1]
None in empty
66 in list3
len(list2)
len(list3)
del list2[2]
list2
new_list = list1 + list2
new_list
new_list * 2
[new_list] * 3
list2
list2.append(23)
list2
#list.insert(index, obj)
list2.insert(0, 5)
list2
list2.insert(6, 29)
list2
list2.insert(len(list2), 101)
list2
list2.insert(-1, 999) #it doesn't insert in the last position! :/
list2
list2.remove(999) #remove the item 999
list2
list2.remove(list2[-1]) #here removed the last one!
list2
l1 = [1, 2]
l2 = [3, 4]
l1.extend(l2)
l1
l2
list_unsorted = [10, 3, 7, 12, 1, 20]
list_unsorted.sort()
list_unsorted
list_unsorted.sort(reverse = True) # L.sort(key=None, reverse=False) -> None -- stable sort *IN PLACE*
list_unsorted
l1
l2
l2 = l1.copy()
l2
l2 == l1
id(l1)
id(l2)
# It's better to use copy()! Look:
l1 = ['a', 'b', 'c']
l2 = l1
l1
l2
l2.append('d')
l2
l1
l1 == l2
id(l1)
id(l2) # same id! watch it.
# Slicing
list10 = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
list10[4:] # takes the items from index 4 onward
list10[4:6] # list[start:end] -> items from start through end-1
list10[-2:] # the last 2 items in the array
list10[:-2] # everything except the last 2 items
# sliceable[start:stop:step]
list10[::2] # the even numbers (stepping 2 at a time)
list10[1::2] # the odd numbers
# and if I wanna know the index position of an item?
list10
list10.index(6)
list20 = [1, 2, 1, 3, 1, 4, 1, 5]
list20.count(1) # it counts the number of times the "1" appears in the list
list20.count(4)
list20.count(10)
list20.pop()
list20
list20.sort(reverse=True)
list20
max(list20)
sum(list20)
courses = ['History', 'Math', 'Physics', 'CompSci']
for index, course in enumerate(courses):
print(index, course)
course_str = ', '.join(courses)
print(course_str)
new_list = course_str.split(', ')
new_list
cs_courses = ['History', 'Math', 'Physics', 'CompSci']
art_courses = ['History', 'Math', 'Art', 'Design']
list(set(cs_courses) & set(art_courses))
set(cs_courses).intersection(art_courses)
set(cs_courses).difference(art_courses) # there's only in cs_courses
set(art_courses).difference(cs_courses) #there's only in art_courses
set(cs_courses).union(art_courses) # the two lists together
"""
Explanation: Lists
<img src= img/aboutDataStructures.png>
End of explanation
"""
x = ['Patton', 'Zorn', 'Hancock']
a, b, c = x
a
b, c
"""
Explanation: Unpacking
End of explanation
"""
# Examples from https://docs.python.org/3/tutorial/datastructures.html
squares = []
for x in range(10):
squares.append(x ** 2)
squares
squares2 = [x ** 2 for x in range(10)]
squares2
new_range = [i * i for i in range(5) if i % 2 == 0]
new_range
"""
Explanation: List Comprehensions
End of explanation
"""
combs = []
for x in [1, 2, 3]:
for y in [3, 1, 5]:
if x != y:
combs.append((x, y))
combs
# and in list comprehension format:
combs2 = [(x, y) for x in [1, 2, 3] for y in [3, 1, 5] if x != y]
combs2
combs == combs2
vec = [-4, -2, 0, 2, 4]
[x*2 for x in vec]
# exclude negative numbers:
[x for x in vec if x >= 0]
# absolute in all numbers:
[abs(x) for x in vec]
freshfruit = [' banana ', 'passion fruit ']
[weapon.strip() for weapon in freshfruit] # strip() removes the white spaces at the start and end, including spaces, tabs, newlines and carriage returns
# list of 2-tuples like (number, square)
[(y, y**2) for y in range(6)]
# flatten a list using listcomp with two 'for'
vec = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
[num for elem in vec for num in elem]
# in other words:
vec = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
new_list = []
for elem in vec:
for num in elem:
new_list.append(num)
new_list
from math import pi
[str(round(pi, i)) for i in range(1, 6)]
# https://en.wikipedia.org/wiki/List_comprehension
s = {v for v in 'ABCDABCD' if v not in 'CB'}
print(s)
s
type(s)
s = {key: val for key, val in enumerate('ABCD') if val not in 'CB'}
s
# regular list comprehension
a = [(x, y) for x in range(1, 6) for y in range(3, 6)]
a
# in other words:
b = []
for x in range(1, 6):
for y in range(3, 6):
b.append((x,y))
b
a == b
# parallel/zipped list comprehension
c = [x for x in zip(range(1, 6), range(3, 6))]
c
# http://www.pythonforbeginners.com/basics/list-comprehensions-in-python
listOfWords = ['this', 'is', 'a', 'list', 'of', 'words']
[word[0] for word in listOfWords]
# or...
listOfWords = ["this","is","a","list","of","words"]
items = [word[0] for word in listOfWords]
items
[x.lower() for x in ['A', 'B', 'C']]
[x.upper() for x in ['a', 'b', 'c']]
string = 'Hello 12345 World'
numbers = [x for x in string if x.isdigit()]
numbers
another_string = 'Hello 12345 World'
just_text = [x for x in another_string if x.isalpha()]
just_text
just_text = [just_text[:5], just_text[5:]]
just_text
part1 = ''.join(just_text[0])
part1
part2 = ''.join(just_text[1])
part2
# finally...
just_text = [part1, part2]
just_text
"""
Explanation: A list comprehension consists of brackets containing an expression followed by a for clause, then zero or more for or if clauses.
End of explanation
"""
|
LSSTDESC/Monitor | examples/simple_error_model.ipynb | bsd-3-clause | import os
import desc.monitor
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
from lsst.sims.photUtils import calcNeff
%matplotlib inline
%load_ext autoreload
%autoreload 2
"""
Explanation: Simple Error Model for Twinkles
This notebook will calculate a simple error model for the Twinkles data as a function of the seeing and depth.
Requirements
You will need the DESC Monitor and its dependencies.
You will also need the truth database and OpSim database for the desired Twinkles run. This example uses the run 1.1 settings and thus we use kraken_1042 as our OpSim database, but further investigation will be done when 1.3 results are ready and we will move to the minion_1016 database. LSST OpSim databases can be downloaded here.
Motivation for an Error Model
The Error Model will be a way to estimate the uncertainties in the Twinkles data from the properties of a visit and an object. There are three major motivations behind this:
Develop a method to rapidly simulate large amounts of catalog level datasets for time-variable astronomical objects without the need for image simulation.
Understand the uncertainties that will be present in observed data to improve the likelihoods that we will use for Bayesian estimates of other properties such as the time delays for strongly lensed AGN or the detections of Supernovae.
Look for evidence of systematic errors related to image processing. Also look for major differences between OpSim expected values and those calculated from PhoSim images. Comparisons between OpSim and PhoSim will be useful for other simulation tasks throughout DESC.
So far, we have worked on modeling the relationships between flux uncertainties and the observed seeing and 5 sigma depth of a visit.
NOTE: As mentioned above the current analysis in this notebook is done using Twinkles Run 1.1. This notebook will be updated with our latest findings when Run 1.3 is ready.
Import necessary modules
Don't worry about the warnings below.
End of explanation
"""
star_db_name = '../../twinkles_run1.1.db'
truth_dbConn = desc.monitor.TruthDBInterface(database=star_db_name, driver='sqlite')
"""
Explanation: Load necessary database connections.
The Monitor has the ability to load the true fluxes and the observed fluxes (stored in MySQL databases on NERSC) for us. If you are not running this on jupyter-dev you will have to open an ssh-tunnel as described in the Monitor package setup.
Let's start by specifying the location of our truth database and setting up a truth database connection.
Since we are using the same field and chip in run 1.1 and 1.3 we can use the run 1.1 stars database without issues when 1.3 is ready.
End of explanation
"""
twinkles_dbConn = desc.monitor.DBInterface(database='DESC_Twinkles_Level_2',
#host='127.0.0.1', port='3307', ##if not running jupyter-dev
host='scidb1.nersc.gov', port=3306,
driver='mysql', project='Twinkles Run1.1')
"""
Explanation: Then we'll establish a database connection to the NERSC MySQL database for the observed data from Twinkles.
End of explanation
"""
opsim_dbConn = desc.monitor.OpsimDBInterface('../../kraken_1042_sqlite.db') ##Run 1.1 OpSim database
#opsim_dbConn = desc.monitor.OpsimDBInterface('../../minion_1016_sqlite.db')
"""
Explanation: And finally we'll establish the connection the Opsim database.
End of explanation
"""
dm_visit_info = twinkles_dbConn.get_all_visit_info()
opsim_info = opsim_dbConn.get_summary_depth_info_for_visits(1427) #1427 is the Twinkles field ID
#Find entries with the proper obsHistIds in the Opsim data to match the Twinkles run
obs_list = []
for visit_id in dm_visit_info['visit_id']:
obs_list.append(np.where(opsim_info['obsHistID'] == visit_id)[0][0])
opsim_info = opsim_info[obs_list]
"""
Explanation: Load the visit info from DM processed PhoSim images and from Opsim
Our simplest error model wants to model flux errors as a function of visit seeing and 5-sigma depth. Therefore, we will need to pull this info from Opsim and from the DM databases to compare. The Monitor is set up to do this too.
End of explanation
"""
worker = desc.monitor.Monitor(twinkles_dbConn, truth_dbConn=truth_dbConn)
depth_curve = worker.measure_depth_curve()
seeing_curve = worker.measure_seeing_curve()
"""
Explanation: Use Monitor to assemble depth and seeing curves
We use the CcdVisit table to populate our 5-sigma depth curves and the seeing curves from the PhoSim images for each visit. We name our instance of the Monitor a worker since it does all the work of gathering data for us.
End of explanation
"""
fig = plt.figure()
bins = 15
n,bins,p = plt.hist(depth_curve.lightcurve['mag'], histtype='step', lw=4, bins=15, label='From CcdVisit', range=(21.5,26.5))
plt.hist(opsim_info['fiveSigmaDepth'], histtype='step', bins=bins, lw=4, label='Opsim Values')
plt.legend()
plt.xlabel('5 sigma depth (mags)')
plt.ylabel('Number of Visits')
plt.title('Twinkles 1.1 5-sigma Depth')
#plt.ylim(0, 6500)
"""
Explanation: Comparing 5-sigma depth:
Data Management Calculated Depth from PhoSim images
We use values from CcdVisit table in our Twinkles pserv database along with the following adapted from LSE-40, http://www.astro.washington.edu/users/ivezic/Teaching/Astr511/LSST_SNRdoc.pdf.
seeing is a value direct from the CcdVisit table.
$Noise$ is equivalent to the sky_noise values in the CcdVisit table.
$zero_point$ below is the zero_point value in the CcdVisit table.
We have that for a single Gaussian profile and in the $\textbf{background dominated limit}$:
$n_{eff} = 2.266*(\frac{seeing}{pixel_scale})^{2}$
$SNR = \frac{Signal}{Noise}$
$SNR = \frac{Source_Counts}{\sigma_{tot}*\sqrt{n_{eff}}}$
So that, SNR = 5 gives
$Source_Counts_{5\sigma} = 5 * \sigma_{tot} * \sqrt{n_{eff}}$
and to convert to flux in maggies we scale by the zeropoint and then multiply by 1e9 to get nanomaggies.
$F_{5} = \frac{10^{9}* Source_Counts_{5\sigma}}{zero_point}$
But, the version we use in the monitor is modified a bit to account for the Poisson noise of the source and thus derive a general form outside of the background dominated limit.
$SNR = \frac{Signal}{\sqrt{Signal + Noise^2}}$
$SNR = 5 = \frac{Source_Counts_{5\sigma}}{\sqrt{Source_Counts_{5\sigma} + (\sigma_{tot}*\sqrt{n_{eff}})^2}}$
We solve this as a quadratic equation and take the positive root.
$Source_Counts_{5\sigma} = \frac{25 + 25\sqrt{1 + \frac{4}{25}(\sigma_{tot}*\sqrt{n_{eff}})^2}}{2}$
and once again convert to flux in nanomaggies using the zeropoint and 1e9 conversion factor.
These results are the $\color{red}{red\ line}$ in the plot below.
Opsim Depth
We compare the results from PhoSim images with the FWHMeff values from the Summary table in the Opsim database.
Now we will plot and compare the two values for 5-sigma depth we get.
End of explanation
"""
fig = plt.figure(figsize=(18, 12))
fig_num = 1
for filter_val in ['u', 'g', 'r', 'i', 'z', 'y']:
fig.add_subplot(2,3,fig_num)
n,bins,p = plt.hist(depth_curve.lightcurve['mag'][depth_curve.lightcurve['bandpass'] == str('lsst'+filter_val)],
histtype='step', lw=4, bins=15, label='CcdVisit', range=(21.5,26.5))
plt.hist(opsim_info['fiveSigmaDepth'][opsim_info['filter'] == filter_val], histtype='step', bins=bins, lw=4, label='Opsim Values')
if fig_num == 1:
plt.legend()
plt.xlabel('5 sigma depth (mags)')
plt.ylabel('Number of Visits')
#if fig_num == 1:
# plt.ylim(0, 2800)
plt.title(filter_val)
fig_num += 1
plt.suptitle('Twinkles 1.1 5-sigma Depth by filter')
plt.subplots_adjust(top=0.93)
"""
Explanation: And here we plot by filter.
End of explanation
"""
plt.hist(opsim_info['FWHMeff'] - seeing_curve.seeing_curve['seeing'], range=(-0.2, 0.4), bins=20)
plt.title(r'Opsim $FWHM_{\bf{eff}}$ - DM seeing')
plt.xlabel(r'Opsim $FWHM_{\bf{eff}}$ - DM seeing (arcsec)')
plt.ylabel('# of visits')
fig = plt.figure(figsize=(8,6))
plt.scatter(opsim_info['FWHMeff'], opsim_info['FWHMeff'] - seeing_curve.seeing_curve['seeing'])
l1, = plt.plot(np.arange(0, 1.8, 0.01), np.zeros(len(np.arange(0, 1.8, 0.01))), c='r', label='DM seeing = Opsim seeing')
plt.xlim(0, 1.8)
#plt.ylim(0, 1.8)
plt.xlabel(r'Opsim $FWHM_{\bf{eff}}$ (arcsec)')
plt.ylabel(r'Opsim $FWHM_{\bf{eff}}$ - DM seeing (arcsec)')
plt.legend([l1], ['PhoSim+DM seeing = Opsim seeing'], loc=2)
plt.title('Twinkles 1.1 Seeing Comparison')
"""
Explanation: There appear to be discrepancies between the values measured using the PhoSim images and the Opsim values. We need to look into this. The DM-calculated values in the PhoSim images are consistently deeper than the Opsim depth. Also, the effect looks the worst in the u and y filters. This indicates that there are differences that need to be explained between the PhoSim sky and the OpSim sky.
Seeing Comparison
End of explanation
"""
distances = worker.match_catalogs(return_distance=True, within_radius=1./3600.)
worker.calc_flux_residuals(depth_curve, seeing_curve)
"""
Explanation: There are discrepancies here as well. It seems like the PhoSim+DM values are consistently under the Opsim values.
The large outliers were a result of cosmic ray repair not being turned on in the processing, which was discussed in this issue. We therefore do not expect to see these particular outliers in the 1.3 data.
Comparing measured fluxes to the simulation "truth"
In the rest of this notebook we compare the flux values from our PhoSim images to the "true" values that we know from the input catalogs to the simulation. We start with a position matching method that finds matches between the objects of the observed catalogs and the truth catalogs based upon angular distance.
Then the worker calculates the flux residuals (observed - truth) and finds the mean and variance of this value among the objects in each visit.
Note: The following is currently Run 1.1 data since we have not finished the Run 1.3 processing. As soon as it is available we will update these plots with 1.3 data.
End of explanation
"""
fig = plt.figure(figsize=(18,12))
i=1
for band in ['u', 'g', 'r', 'i', 'z', 'y']:
fig.add_subplot(2,3,i)
worker.plot_bias_map(with_bins=12, in_band=band, use_existing_fig=fig)
i+=1
plt.tight_layout()
fig = plt.figure(figsize=(18,12))
i=1
for band in ['u', 'g', 'r', 'i', 'z', 'y']:
fig.add_subplot(2,3,i)
worker.plot_bias_map(with_bins=12, in_band=band, use_existing_fig=fig,
normalize=True)
i+=1
plt.tight_layout()
fig = plt.figure(figsize=(18,12))
i=1
for band in ['u', 'g', 'r', 'i', 'z', 'y']:
fig.add_subplot(2,3,i)
worker.plot_bias_scatter(in_band=band, use_existing_fig=fig)
i+=1
plt.gca().set_axis_bgcolor('k')
plt.tight_layout()
fig = plt.figure(figsize=(18,12))
i=1
for band in ['u', 'g', 'r', 'i', 'z', 'y']:
fig.add_subplot(2,3,i)
worker.plot_bias_scatter(in_band=band, use_existing_fig=fig, normalize=True)
i+=1
plt.gca().set_axis_bgcolor('k')
plt.tight_layout()
fig = plt.figure(figsize=(18,12))
i=1
for band in ['u', 'g', 'r', 'i', 'z', 'y']:
fig.add_subplot(2,3,i)
worker.plot_variance_map(with_bins=12, in_band=band, use_existing_fig=fig)
i+=1
plt.tight_layout()
fig = plt.figure(figsize=(18,12))
i=1
for band in ['u', 'g', 'r', 'i', 'z', 'y']:
fig.add_subplot(2,3,i)
worker.plot_variance_scatter(in_band=band, use_existing_fig=fig)
i+=1
plt.tight_layout()
"""
Explanation: Plotting Bias and Sigma
Now that we have the data we need in the form of mean flux residuals and mean squared flux residuals for each visit we can combine this with our depth and seeing information to construct plots that show the bias and sigma as functions of these values.
End of explanation
"""
|
gjwo/nilm_gjw_data | notebooks/disaggregation-hart-active_and_reactive(normal).ipynb | apache-2.0 | %matplotlib inline
import numpy as np
import pandas as pd
from os.path import join
from pylab import rcParams
import matplotlib.pyplot as plt
rcParams['figure.figsize'] = (13, 6)
plt.style.use('ggplot')
#import nilmtk
from nilmtk import DataSet, TimeFrame, MeterGroup, HDFDataStore
from nilmtk.disaggregate.hart_85 import Hart85
from nilmtk.disaggregate import CombinatorialOptimisation
from nilmtk.utils import print_dict, show_versions
from nilmtk.metrics import f1_score
#import seaborn as sns
#sns.set_palette("Set3", n_colors=12)
import warnings
warnings.filterwarnings("ignore") #suppress warnings, comment out if warnings required
"""
Explanation: Disaggregation - Hart Active and Reactive data
Customary imports
End of explanation
"""
#uncomment if required
#show_versions()
"""
Explanation: Show versions for any diagnostics
End of explanation
"""
data_dir = '/Users/GJWood/nilm_gjw_data/HDF5/'
gjw = DataSet(join(data_dir, 'nilm_gjw_data.hdf5'))
print('loaded ' + str(len(gjw.buildings)) + ' buildings')
building_number=1
"""
Explanation: Load dataset
End of explanation
"""
gjw.set_window('2015-06-01 00:00:00', '2015-06-05 00:00:00')
elec = gjw.buildings[building_number].elec
mains = elec.mains()
house = elec['fridge'] #only one meter so any selection will do
df = house.load().next() #load the first chunk of data into a dataframe
df.info() #check that the data is what we want (optional)
#note the data has two columns and a time index
plotdata = df.ix['2015-06-01 00:00:00': '2015-07-06 00:00:00']
plotdata.plot()
plt.title("Raw Mains Usage")
plt.ylabel("Power (W)")
plt.xlabel("Time");
plt.scatter(plotdata[('power','active')],plotdata[('power','reactive')])
plt.title("Raw Mains Usage Signature Space")
plt.ylabel("Reactive Power (VAR)")
plt.xlabel("Active Power (W)");
"""
Explanation: Period of interest: 4 days during a normal week
End of explanation
"""
h = Hart85()
h.train(mains,cols=[('power','active'),('power','reactive')],min_tolerance=100,noise_level=70,buffer_size=20,state_threshold=15)
h.centroids
plt.scatter(h.steady_states[('active average')],h.steady_states[('reactive average')])
plt.scatter(h.centroids[('power','active')],h.centroids[('power','reactive')],marker='x',c=(1.0, 0.0, 0.0))
plt.legend(['Steady states','Centroids'],loc=4)
plt.title("Training steady states Signature space")
plt.ylabel("Reactive average (VAR)")
plt.xlabel("Active average (W)");
labels = ['Centroid {0}'.format(i) for i in range(len(h.centroids))]
for label, x, y in zip(labels, h.centroids[('power','active')], h.centroids[('power','reactive')]):
plt.annotate(
label,
xy = (x, y), xytext = (-5, 5),
textcoords = 'offset points', ha = 'right', va = 'bottom',
bbox = dict(boxstyle = 'round,pad=0.5', fc = 'yellow', alpha = 0.5))
h.steady_states.head()
h.steady_states.tail()
h.model
ax = mains.plot()
h.steady_states['active average'].plot(style='o', ax = ax);
plt.ylabel("Power (W)")
plt.xlabel("Time");
#plt.show()
h.pair_df
"""
Explanation: Training
We'll now do the training from the aggregate data. The algorithm segments the time series data into steady and transient states. Thus, we'll first figure out the transient and the steady states. Next, we'll try and pair the on and the off transitions based on their proximity in time and value.
End of explanation
"""
gjw.set_window('2015-07-13 00:00:00','2015-07-14 00:00:00')
elec = gjw.buildings[building_number].elec
mains = elec.mains()
mains.plot()
"""
Explanation: Set the days for the disaggregation period of interest
Inspect the data during a quiet period when we were on holiday; it should only show autonomous
appliances such as the fridge, freezer and water heating, plus any standby devices not unplugged.
End of explanation
"""
ax = mains.plot()
h.steady_states['active average'].plot(style='o', ax = ax);
plt.ylabel("Power (W)")
plt.xlabel("Time");
disag_filename = join(data_dir, 'disag_gjw_hart.hdf5')
output = HDFDataStore(disag_filename, 'w')
h.disaggregate(mains,output,sample_period=1)
output.close()
ax = mains.plot()
h.steady_states['active average'].plot(style='o', ax = ax);
plt.ylabel("Power (W)")
plt.xlabel("Time");
disag_hart = DataSet(disag_filename)
disag_hart
disag_hart_elec = disag_hart.buildings[building_number].elec
disag_hart_elec
disag_hart_elec.mains()
h.centroids
h.model
h.steady_states
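# Note: an F1 score needs ground-truth submetered data to compare against; `test_elec`
# below is assumed to be a MeterGroup of appliance-level meters and is not defined in
# this notebook.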
from nilmtk.metrics import f1_score
f1_hart= f1_score(disag_hart_elec, test_elec)
f1_hart.index = disag_hart_elec.get_labels(f1_hart.index)
f1_hart.plot(kind='barh')
plt.ylabel('appliance');
plt.xlabel('f-score');
plt.title("Hart");
"""
Explanation: Disaggregate using Hart (Active data only)
End of explanation
"""
|
EricChiquitoG/Simulacion2017 | Modulo1/.ipynb_checkpoints/Clase4_OsciladorArmonico-checkpoint.ipynb | mit | from IPython.display import YouTubeVideo
YouTubeVideo('k5yTVHr6V14')
"""
Explanation: How does a pendulum move?
A system of any kind (mechanical, electrical, pneumatic, etc.) is said to be a harmonic oscillator if, when it is released away from its equilibrium position, it returns towards that stable position describing sinusoidal, or damped sinusoidal, oscillations around it.
- https://es.wikipedia.org/wiki/Oscilador_armónico
References:
- http://matplotlib.org
- https://seaborn.pydata.org
- http://www.numpy.org
- http://ipywidgets.readthedocs.io/en/latest/index.html
In essence, this is the study of oscillations.
<div>
<img style="float: left; margin: 0px 0px 15px 15px;" src="http://images.iop.org/objects/ccr/cern/51/3/17/CCast2_03_11.jpg" width="300px" height="100px" />
<img style="float: right; margin: 0px 0px 15px 15px;" src="https://qph.ec.quoracdn.net/main-qimg-f7a6d0342e57b06d46506e136fb7d437-c" width="225px" height="50px" />
</div>
End of explanation
"""
%matplotlib inline
"""
Explanation: The simplest systems to study in oscillations are the mass-spring system and the simple pendulum.
<div>
<img style="float: left; margin: 0px 0px 15px 15px;" src="https://upload.wikimedia.org/wikipedia/commons/7/76/Pendulum.jpg" width="150px" height="50px" />
<img style="float: right; margin: 15px 15px 15px 15px;" src="https://upload.wikimedia.org/wikipedia/ko/9/9f/Mass_spring.png" width="200px" height="100px" />
</div>
\begin{align}
\frac{d^2 x}{dt^2} + \omega_{0}^2 x &= 0, \quad \omega_{0} = \sqrt{\frac{k}{m}}\notag\
\frac{d^2 \theta}{dt^2} + \omega_{0}^{2}\, \theta &= 0, \quad\mbox{where}\quad \omega_{0}^2 = \frac{g}{l}
\end{align}
Mass-spring system
The solution of this mass-spring system is explained in terms of Newton's second law. For this case, the mass remains constant and we only consider the $x$ direction. Then,
\begin{equation}
F = m \frac{d^2x}{dt^2}.
\end{equation}
What is the force? Hooke's law!
\begin{equation}
F = -k x, \quad k > 0.
\end{equation}
We see that the force opposes the displacement and its magnitude is proportional to it. Here $k$ is the elastic (restoring) constant of the spring.
Thus, a model of the mass-spring system is described by the following differential equation:
\begin{equation}
\frac{d^2x}{dt^2} + \frac{k}{m}x = 0,
\end{equation}
whose solution can be written as
\begin{equation}
x(t) = A \cos(\omega_{o} t) + B \sin(\omega_{o} t)
\end{equation}
And its first derivative (the velocity) would be
\begin{equation}
\frac{dx(t)}{dt} = \omega_{0}[- A \sin(\omega_{0} t) + B\cos(\omega_{0}t)]
\end{equation}
<font color=red> See on the board what it means to be a solution of the differential equation.</font>
What do the plots of $x$ vs $t$ and $\frac{dx}{dt}$ vs $t$ look like?
This instruction makes the plots appear inside this notebook environment.
End of explanation
"""
import matplotlib.pyplot as plt
import matplotlib as mpl
label_size = 14
mpl.rcParams['xtick.labelsize'] = label_size
mpl.rcParams['ytick.labelsize'] = label_size
"""
Explanation: _This is the library with all the instructions for making plots. _
End of explanation
"""
import numpy as np
# Definition of the functions to plot
A, B, w0 = .5, .1, .5 # Parameters
t = np.linspace(0, 50, 100) # Create a time vector from 0 to 50 with 100 points
x = A*np.cos(w0*t)+B*np.sin(w0*t) # Position function
dx = w0*(-A*np.sin(w0*t)+B*np.cos(w0*t)) # Velocity function
# Plot
plt.figure(figsize = (7, 4)) # Figure window with a given size
plt.plot(t, x, '-', lw = 1, ms = 4,
         label = '$x(t)$') # Legend label
plt.plot(t, dx, 'ro-', lw = 1, ms = 4,
         label = r'$\dot{x(t)}$')
plt.xlabel('$t$', fontsize = 20) # x-axis label
plt.show()
# Colors, labels and other formatting
plt.figure(figsize = (7, 4))
plt.scatter(t, x, lw = 0, c = 'red',
            label = '$x(t)$') # Plot with points
plt.plot(t, x, 'r-', lw = 1) # Regular line plot
plt.scatter(t, dx, lw = 0, c = 'b',
            label = r'$\frac{dx}{dt}$') # With the r prefix, backslashes are treated as literals, not escapes
plt.plot(t, dx, 'b-', lw = 1)
plt.xlabel('$t$', fontsize = 20)
plt.legend(loc = 'best') # Legend with the labels of the curves
plt.show()
"""
Explanation: And this is the library with all the necessary mathematical functions.
End of explanation
"""
frecuencias = np.array([.1, .2 , .5, .6]) # Vector of different frequencies
plt.figure(figsize = (7, 4)) # Figure window with a given size
# Plot for each frequency
for w0 in frecuencias:
    x = A*np.cos(w0*t)+B*np.sin(w0*t)
    plt.plot(t, x, '*-')
plt.xlabel('$t$', fontsize = 16) # x-axis label
plt.ylabel('$x(t)$', fontsize = 16) # y-axis label
plt.title('Oscillations', fontsize = 16) # Plot title
plt.show()
"""
Explanation: And if we consider a set of oscillation frequencies, then
End of explanation
"""
import seaborn as sns
sns.set(style='ticks', palette='Set2')
frecuencias = np.array([.1, .2 , .5, .6])
plt.figure(figsize = (7, 4))
for w0 in frecuencias:
x = A*np.cos(w0*t)+B*np.sin(w0*t)
plt.plot(t, x, 'o-',
         label = '$\omega_0 = %s$'%w0) # Label each curve with its frequency (float-to-string conversion)
plt.xlabel('$t$', fontsize = 16)
plt.ylabel('$x(t)$', fontsize = 16)
plt.title('Oscillations', fontsize = 16)
plt.legend(loc='center left', bbox_to_anchor=(1.05, 0.5), prop={'size': 14})
plt.show()
"""
Explanation: These colors are matplotlib's defaults; however, there is another library dedicated, among other things, to the presentation of plots.
End of explanation
"""
from ipywidgets import *
def masa_resorte(t = 0):
    A, B, w0 = .5, .1, .5 # Parameters
    x = A*np.cos(w0*t)+B*np.sin(w0*t) # Position function
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.plot(x, [0], 'ko', ms = 10)
ax.set_xlim(xmin = -0.6, xmax = .6)
ax.axvline(x=0, color = 'r')
ax.axhline(y=0, color = 'grey', lw = 1)
fig.canvas.draw()
interact(masa_resorte, t = (0, 50,.01));
"""
Explanation: If we want a bit more interactive control over things, we make use of the following:
End of explanation
"""
def masa_resorte(t = 0):
    A, B, w0 = .5, .1, .5 # Parameters
    x = A*np.cos(w0*t)+B*np.sin(w0*t) # Position function
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.plot(x, [0], 'ko', ms = 10)
ax.set_xlim(xmin = -0.6, xmax = .6)
ax.axvline(x=0, color = 'r')
ax.axhline(y=0, color = 'grey', lw = 1)
fig.canvas.draw()
interact_manual(masa_resorte, t = (0, 50,.01));
"""
Explanation: The option above will generally be slow, so the recommended approach is to use interact_manual.
End of explanation
"""
# We can define a function that returns theta given the parameters and the time
def theta_t(a, b, g, l, t):
omega_0 = np.sqrt(g/l)
return a * np.cos(omega_0 * t) + b * np.sin(omega_0 * t)
# Make an interactive plot of the pendulum
def pendulo_simple(t = 0):
fig = plt.figure(figsize = (5,5))
ax = fig.add_subplot(1, 1, 1)
x = 2 * np.sin(theta_t(.4, .6, 9.8, 2, t))
y = - 2 * np.cos(theta_t(.4, .6, 9.8, 2, t))
ax.plot(x, y, 'ko', ms = 10)
ax.plot([0], [0], 'rD')
ax.plot([0, x ], [0, y], 'k-', lw = 1)
ax.set_xlim(xmin = -2.2, xmax = 2.2)
ax.set_ylim(ymin = -2.2, ymax = .2)
fig.canvas.draw()
interact_manual(pendulo_simple, t = (0, 10,.01));
"""
Explanation: Simple pendulum
Now, if we turn our attention to the motion of a simple pendulum (small oscillations), the differential equation to solve has the same form:
\begin{equation}
\frac{d^2 \theta}{dt^2} + \omega_{0}^{2}\, \theta = 0, \quad\mbox{where}\quad \omega_{0}^2 = \frac{g}{l}.
\end{equation}
The most obvious difference is how we have defined $\omega_{0}$. This means that,
\begin{equation}
\theta(t) = A\cos(\omega_{0} t) + B\sin(\omega_{0}t)
\end{equation}
If we plot the equation above we will find behaviour very similar to what we have already discussed. That is why we will now look at the motion in the $xy$ plane. That is,
\begin{align}
x &= l \sin(\theta), \quad
y = l \cos(\theta)
\end{align}
End of explanation
"""
# Solution: one possible completion of the activity, with theta_0 and dtheta_0 as the initial conditions
def theta_t(theta_0, dtheta_0, g, l, t):
    omega_0 = np.sqrt(g/l)
    return theta_0 * np.cos(omega_0 * t) + (dtheta_0 / omega_0) * np.sin(omega_0 * t)
def pendulo_simple(t = 0):
    fig = plt.figure(figsize = (5,5))
    ax = fig.add_subplot(1, 1, 1)
    x = 2 * np.sin(theta_t(.4, .6, 9.8, 2, t))
    y = - 2 * np.cos(theta_t(.4, .6, 9.8, 2, t))
    ax.plot(x, y, 'ko', ms = 10)
    ax.plot([0], [0], 'rD')
    ax.plot([0, x ], [0, y], 'k-', lw = 1)
    ax.set_xlim(xmin = -2.2, xmax = 2.2)
    ax.set_ylim(ymin = -2.2, ymax = .2)
    fig.canvas.draw()
interact_manual(pendulo_simple, t = (0, 10,.01));
"""
Explanation: Initial conditions
What actually has to be solved is,
\begin{equation}
\theta(t) = \theta(0) \cos(\omega_{0} t) + \frac{\dot{\theta}(0)}{\omega_{0}} \sin(\omega_{0} t)
\end{equation}
Activity. Modify the previous program to incorporate the initial conditions.
End of explanation
"""
k = 3 # elastic constant [N]/[m]
m = 1 # [kg]
omega_0 = np.sqrt(k/m)
x_0 = .5
dx_0 = .1
t = np.linspace(0, 50, 300)
x_t = x_0 *np.cos(omega_0 *t) + (dx_0/omega_0) * np.sin(omega_0 *t)
dx_t = -omega_0 * x_0 * np.sin(omega_0 * t) + dx_0 * np.cos(omega_0 * t)
plt.figure(figsize = (7, 4))
plt.plot(t, x_t, label = '$x(t)$', lw = 1)
plt.plot(t, dx_t, label = '$\dot{x}(t)$', lw = 1)
#plt.plot(t, dx_t/omega_0, label = '$\dot{x}(t)$', lw = 1) # Show that after scaling, the amplitude stays the same
plt.legend(loc='center left', bbox_to_anchor=(1.01, 0.5), prop={'size': 14})
plt.xlabel('$t$', fontsize = 18)
plt.show()
plt.figure(figsize = (5, 5))
plt.plot(x_t, dx_t/omega_0, 'ro', ms = 2)
plt.xlabel('$x(t)$', fontsize = 18)
plt.ylabel('$\dot{x}(t)/\omega_0$', fontsize = 18)
plt.show()
plt.figure(figsize = (5, 5))
plt.scatter(x_t, dx_t/omega_0, cmap = 'viridis', c = dx_t, s = 8, lw = 0)
plt.xlabel('$x(t)$', fontsize = 18)
plt.ylabel('$\dot{x}(t)/\omega_0$', fontsize = 18)
plt.show()
"""
Explanation: Phase plane $(x, \frac{dx}{dt})$
The position and velocity for the mass-spring system are written as:
\begin{align}
x(t) &= x(0) \cos(\omega_{o} t) + \frac{\dot{x}(0)}{\omega_{0}} \sin(\omega_{o} t)\
\dot{x}(t) &= -\omega_{0}x(0) \sin(\omega_{0} t) + \dot{x}(0)\cos(\omega_{0}t)
\end{align}
End of explanation
"""
k = 3 # elastic constant [N]/[m]
m = 1 # [kg]
omega_0 = np.sqrt(k/m)
t = np.linspace(0, 50, 50)
x_0s = np.array([.7, .5, .25, .1])
dx_0s = np.array([.2, .1, .05, .01])
cmaps = np.array(['viridis', 'inferno', 'magma', 'plasma'])
plt.figure(figsize = (6, 6))
for indx, x_0 in enumerate(x_0s):
x_t = x_0 *np.cos(omega_0 *t) + (dx_0s[indx]/omega_0) * np.sin(omega_0 *t)
dx_t = -omega_0 * x_0 * np.sin(omega_0 * t) + dx_0s[indx] * np.cos(omega_0 * t)
plt.scatter(x_t, dx_t/omega_0, cmap = cmaps[indx],
c = dx_t, s = 10,
lw = 0)
plt.xlabel('$x(t)$', fontsize = 18)
plt.ylabel('$\dot{x}(t)/\omega_0$', fontsize = 18)
#plt.legend(loc='center left', bbox_to_anchor=(1.05, 0.5))
"""
Explanation: Multiple initial conditions
End of explanation
"""
|
yangliuy/yangliuy.github.io | markdown_generator/publications.ipynb | mit | !cat publications.tsv
"""
Explanation: Publications markdown generator for academicpages
Takes a TSV of publications with metadata and converts them for use with academicpages.github.io. This is an interactive Jupyter notebook (see more info here). The core python code is also in publications.py. Run either from the markdown_generator folder after replacing publications.tsv with one containing your data.
TODO: Make this work with BibTex and other databases of citations, rather than Stuart's non-standard TSV format and citation style.
Data format
The TSV needs to have the following columns: pub_date, title, venue, excerpt, citation, site_url, and paper_url, with a header at the top.
excerpt and paper_url can be blank, but the others must have values.
pub_date must be formatted as YYYY-MM-DD. (Update in 0421, add a function to transfer pub_date like 8/7/2017 to 2017-08-07)
url_slug will be the descriptive part of the .md file and the permalink URL for the page about the paper. The .md file will be YYYY-MM-DD-[url_slug].md and the permalink will be https://[yourdomain]/publications/YYYY-MM-DD-[url_slug]
This is how the raw file looks (it doesn't look pretty, use a spreadsheet or other program to edit and create).
End of explanation
"""
import pandas as pd
"""
Explanation: Import pandas
We are using the very handy pandas library for dataframes.
End of explanation
"""
publications = pd.read_csv("publications.tsv", sep="\t", header=0)
publications
"""
Explanation: Import TSV
Pandas makes this easy with the read_csv function. We are using a TSV, so we specify the separator as a tab, or \t.
I found it important to put this data in a tab-separated values format, because there are a lot of commas in this kind of data and comma-separated values can get messed up. However, you can modify the import statement, as pandas also has read_excel(), read_json(), and others.
End of explanation
"""
html_escape_table = {
"&": "&",
'"': """,
"'": "'"
}
def html_escape(text):
"""Produce entities within text."""
return "".join(html_escape_table.get(c,c) for c in text)
"""
Explanation: Escape special characters
YAML is very picky about how it takes a valid string, so we are replacing single and double quotes (and ampersands) with their HTML-encoded equivalents. This makes them look not so readable in raw format, but they are parsed and rendered nicely.
End of explanation
"""
import os
for row, item in publications.iterrows():
md_filename = str(item.pub_date) + "-" + item.url_slug + ".md"
html_filename = str(item.pub_date) + "-" + item.url_slug
year = item.pub_date[:4]
## YAML variables
md = "---\ntitle: \"" + item.title + '"\n'
md += """collection: publications"""
md += """\npermalink: /publication/""" + html_filename
if len(str(item.excerpt)) > 5:
md += "\nexcerpt: '" + html_escape(item.excerpt) + "'"
md += "\ndate: " + str(item.pub_date)
md += "\nvenue: '" + html_escape(item.venue) + "'"
if len(str(item.paper_url)) > 5:
md += "\npaperurl: '" + item.paper_url + "'"
md += "\ncitation: '" + html_escape(item.citation) + "'"
md += "\n---"
## Markdown description for individual page
if len(str(item.excerpt)) > 5:
md += "\n" + html_escape(item.excerpt) + "\n"
if len(str(item.paper_url)) > 5:
md += "\n[Download paper here](" + item.paper_url + ")\n"
md += "\nRecommended citation: " + item.citation
md_filename = os.path.basename(md_filename)
with open("../_publications/" + md_filename, 'w') as f:
f.write(md)
"""
Explanation: Creating the markdown files
This is where the heavy lifting is done. This loops through all the rows in the TSV dataframe, then starts to concatenate a big string (md) that contains the markdown for each item. It does the YAML metadata first, then the description for the individual page.
End of explanation
"""
!ls ../_publications/
!cat ../_publications/2009-10-01-paper-title-number-1.md
"""
Explanation: These files are in the publications directory, one directory below where we're working from.
End of explanation
"""
|
taraokelly/Problem-set-Jupyter-Pyplot-and-Numpy | solutions/solution.ipynb | mit | import numpy as np
# Load in data from csv file.
sepal_length, sepal_width, petal_length, petal_width = np.genfromtxt('../data/IRIS.csv', delimiter=',', usecols=(0,1,2,3), unpack=True, dtype=float)
iris_class = np.genfromtxt('../data/IRIS.csv', delimiter=',', usecols=(4), unpack=True, dtype=str)
# Loaded the columns into separate variables for ease of use.
"""
Explanation: Problem-set-Jupyter-Pyplot-and-Numpy
Write a note about the data set
Fisher's Iris Data Set is a well-known data set that has become a common test case in machine learning. Each row in the data set consists of four numeric values for petal length, petal width, sepal length and sepal width. The row also contains the type of iris flower (one of three: Iris setosa, Iris versicolor, or Iris virginica).
According to Lichman [1],
"One class is linearly separable from the other 2; the latter are NOT linearly separable from each other".
The types form clusters and can be analysed to distinguish or predict the type of iris flower from its measurements (petal length, petal width, sepal length and sepal width) [2].
References:
[1] Lichman, M. (2013). UCI Machine Learning Repository [http://archive.ics.uci.edu/ml]. Irvine, CA: University of California, School of Information and Computer Science.
[2] True, Joseph - Content Data Scientist (2015). IBM Watson Analytics [https://www.ibm.com/communities/analytics/watson-analytics-blog/watson-analytics-use-case-the-iris-data-set/].
Get and load the data
End of explanation
"""
import matplotlib.pyplot as plt
# Plot Sepal Length on the x-axis and Sepal Width on the y-axis; complete with labels.
# Scale graph to a bigger size
plt.rcParams['figure.figsize'] = (14.0, 6.0)
# Set title
plt.title('Iris Data Set: Sepal Measurements', fontsize=16)
# plot scatter graph
plt.scatter(sepal_length, sepal_width)
# Add labels
plt.xlabel('Sepal Length', fontsize=14)
plt.ylabel('Sepal Width', fontsize=14)
# Output Graph
plt.show()
"""
Explanation: Create a simple plot
End of explanation
"""
# https://matplotlib.org/users/legend_guide.html
import matplotlib.patches as mp
# https://stackoverflow.com/questions/27318906/python-scatter-plot-with-colors-corresponding-to-strings
colours = {'Iris-setosa': 'r', 'Iris-versicolor': 'g', 'Iris-virginica': 'b'}
plt.scatter(sepal_length, sepal_width, c=[colours[i] for i in iris_class], label=[colours[i] for i in colours])
# Add title
plt.title('Iris Setosa, Versicolor, and Virginica: Sepal Measurements', fontsize=16)
# Add labels
plt.xlabel('Sepal Length', fontsize=14)
plt.ylabel('Sepal Width', fontsize=14)
# https://matplotlib.org/api/patches_api.html
plt.legend(handles = [mp.Patch(color=colour, label=label) for label, colour in [('Iris Setosa', 'r'), ('Iris Versicolor', 'g'), ('Iris Virginica', 'b')]])
plt.show()
"""
Explanation: Create a more complex plot
End of explanation
"""
import seaborn as sns
import pandas as pd
# Prepare data with pandas DataFrame for seaborn usage.
df = pd.DataFrame(dict(zip(['Sepal Length', 'Sepal Width','Petal Length', 'Petal Width', 'Iris Class'], [sepal_length, sepal_width, petal_length, petal_width, iris_class])))
df
# Adapted from: https://seaborn.pydata.org/examples/scatterplot_matrix.html
%matplotlib inline
sns.pairplot(df, hue="Iris Class")
"""
Explanation: Use seaborn
End of explanation
"""
# Reset size after seaborn
plt.rcParams['figure.figsize'] = (14.0, 6.0)
# https://github.com/emerging-technologies/emerging-technologies.github.io/blob/master/notebooks/simple-linear-regression.ipynb
# Calculate the best values for m and c.
m, c = np.polyfit(petal_length, petal_width, 1)
# Plot petal length against petal width for the full data set
plt.scatter(petal_length, petal_width,marker='o', label='Data Set')
# Plot best fit line
plt.plot(petal_length, m * petal_length + c, 'forestgreen', label='Best fit line')
# Add title
plt.title('Iris Data Set: Petal Measurements', fontsize=16)
# Add labels
plt.xlabel('Petal Length')
plt.ylabel('Petal Width')
plt.legend()
# Print graph
plt.show()
"""
Explanation: Fit a line
End of explanation
"""
# Calculate the R-squared value for our data set using numpy.
np.corrcoef(petal_length, petal_width)[0][1]**2
"""
Explanation: Calculate the R-squared value
End of explanation
"""
# https://stackoverflow.com/questions/27947487/is-zip-the-most-efficient-way-to-combine-arrays-with-respect-to-memory-in-nump
# Combine arrays
iris_data = np.column_stack((sepal_length, sepal_width, petal_length, petal_width,iris_class))
# https://docs.scipy.org/doc/numpy/reference/generated/numpy.in1d.html
# Filter Data with 'Iris-setosa' & transpose after
filter_setosa = (iris_data[np.in1d(iris_data[:,4],'Iris-setosa')]).transpose()
# https://stackoverflow.com/questions/3877491/deleting-rows-in-numpy-array
# https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.chararray.astype.html
# Prepare data - delete the row of unnecessary data and convert the remaining values to float
setosa_data = (np.delete(filter_setosa, (4), axis=0)).astype(np.float)
setosa_data
# https://github.com/emerging-technologies/emerging-technologies.github.io/blob/master/notebooks/simple-linear-regression.ipynb
# Calculate the best values for m and c.
m, c = np.polyfit(setosa_data[2], setosa_data[3], 1)
# Plot Setosa measurements
plt.scatter(setosa_data[2],setosa_data[3],marker='o', label='Iris Setosa')
# Plot best fit line
plt.plot(setosa_data[2], m * setosa_data[2] + c, 'forestgreen', label='Best fit line')
# Add title
plt.title('Iris Setosa: Petal Measurements', fontsize=16)
# Add labels
plt.xlabel('Petal Length', fontsize=14)
plt.ylabel('Petal Width', fontsize=14)
plt.legend()
# Print graph
plt.show()
"""
Explanation: Fit another line
End of explanation
"""
# Calculate the R-squared value for the Setosa data using numpy.
np.corrcoef(setosa_data[2], setosa_data[3])[0][1]**2
"""
Explanation: Calculate the R-squared value
End of explanation
"""
# Calculate the partial derivative of cost with respect to m while treating c as a constant.
def gradient_descent_m(x, y, m, c):
return -2.0 * np.sum(x * (y - m * x - c))
# Calculate the partial derivative of cost with respect to c while treating m as a constant.
def gradient_descent_c(x, y, m , c):
return -2.0 * np.sum(y - m * x - c)
eta = 0.0001
g_m, g_c = 1.0, 1.0
change = True
# Iterate the partial derivatives until the outcomes do not change
while change:
g_m_new = g_m - eta * gradient_descent_m(setosa_data[2], setosa_data[3], g_m, g_c)
g_c_new = g_c - eta * gradient_descent_c(setosa_data[2], setosa_data[3], g_m, g_c)
if g_m == g_m_new and g_c == g_c_new:
change = False
else:
g_m, g_c = g_m_new, g_c_new
"""
Explanation: Use gradient descent
Gradient Descent is an approximation technique. To utilize this approximation technique, we guess the value that we wish to approximate and iteratively improve that guess.
End of explanation
"""
# Plot Setosa measurements
plt.scatter(setosa_data[2],setosa_data[3],marker='o', label='Iris Setosa')
# Plot best fit line according to Gradient Descent
plt.plot(setosa_data[2], g_m * setosa_data[2] + g_c, 'forestgreen', label='Best fit line: Gradient Descent')
# Add title
plt.title('Iris Setosa: Petal Measurements', fontsize=16)
# Add labels
plt.xlabel('Petal Length')
plt.ylabel('Petal Width')
plt.legend()
# Print graph
plt.show()
"""
Explanation: To the human eye it is difficult to see a difference between the best fit line and the best fit line approximated by gradient descent.
End of explanation
"""
print("BEST LINE: m: %20.16f c: %20.16f" % (m, c))
print()
print("GRADIENT DESCENT: m: %20.16f c: %20.16f" % (g_m, g_c))
"""
Explanation: However, the results from the two techniques are in fact different. With the values of m and c differing only after the eleventh decimal place, the gradient descent technique produced an adequate, though not exact, approximation.
End of explanation
"""
|
sisnkemp/deep-learning | intro-to-tensorflow/intro_to_tensorflow.ipynb | mit | import hashlib
import os
import pickle
from urllib.request import urlretrieve
import numpy as np
from PIL import Image
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelBinarizer
from sklearn.utils import resample
from tqdm import tqdm
from zipfile import ZipFile
print('All modules imported.')
"""
Explanation: <h1 align="center">TensorFlow Neural Network Lab</h1>
<img src="image/notmnist.png">
In this lab, you'll use all the tools you learned from Introduction to TensorFlow to label images of English letters! The data you are using, <a href="http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html">notMNIST</a>, consists of images of a letter from A to J in different fonts.
The above images are a few examples of the data you'll be training on. After training the network, you will compare your prediction model against test data. Your goal, by the end of this lab, is to make predictions against that test set with at least an 80% accuracy. Let's jump in!
To start this lab, you first need to import all the necessary modules. Run the code below. If it runs successfully, it will print "All modules imported".
End of explanation
"""
def download(url, file):
"""
Download file from <url>
:param url: URL to file
:param file: Local file path
"""
if not os.path.isfile(file):
print('Downloading ' + file + '...')
urlretrieve(url, file)
print('Download Finished')
# Download the training and test dataset.
download('https://s3.amazonaws.com/udacity-sdc/notMNIST_train.zip', 'notMNIST_train.zip')
download('https://s3.amazonaws.com/udacity-sdc/notMNIST_test.zip', 'notMNIST_test.zip')
# Make sure the files aren't corrupted
assert hashlib.md5(open('notMNIST_train.zip', 'rb').read()).hexdigest() == 'c8673b3f28f489e9cdf3a3d74e2ac8fa',\
'notMNIST_train.zip file is corrupted. Remove the file and try again.'
assert hashlib.md5(open('notMNIST_test.zip', 'rb').read()).hexdigest() == '5d3c7e653e63471c88df796156a9dfa9',\
'notMNIST_test.zip file is corrupted. Remove the file and try again.'
# Wait until you see that all files have been downloaded.
print('All files downloaded.')
def uncompress_features_labels(file):
"""
Uncompress features and labels from a zip file
:param file: The zip file to extract the data from
"""
features = []
labels = []
with ZipFile(file) as zipf:
# Progress Bar
filenames_pbar = tqdm(zipf.namelist(), unit='files')
# Get features and labels from all files
for filename in filenames_pbar:
# Check if the file is a directory
if not filename.endswith('/'):
with zipf.open(filename) as image_file:
image = Image.open(image_file)
image.load()
# Load image data as 1 dimensional array
# We're using float32 to save on memory space
feature = np.array(image, dtype=np.float32).flatten()
# Get the letter from the filename. This is the letter of the image.
label = os.path.split(filename)[1][0]
features.append(feature)
labels.append(label)
return np.array(features), np.array(labels)
# Get the features and labels from the zip files
train_features, train_labels = uncompress_features_labels('notMNIST_train.zip')
test_features, test_labels = uncompress_features_labels('notMNIST_test.zip')
# Limit the amount of data to work with a docker container
docker_size_limit = 150000
train_features, train_labels = resample(train_features, train_labels, n_samples=docker_size_limit)
# Set flags for feature engineering. This will prevent you from skipping an important step.
is_features_normal = False
is_labels_encod = False
# Wait until you see that all features and labels have been uncompressed.
print('All features and labels uncompressed.')
"""
Explanation: The notMNIST dataset is too large for many computers to handle. It contains 500,000 images for just training. You'll be using a subset of this data, 15,000 images for each label (A-J).
End of explanation
"""
# Problem 1 - Implement Min-Max scaling for grayscale image data
def normalize_grayscale(image_data):
"""
Normalize the image data with Min-Max scaling to a range of [0.1, 0.9]
:param image_data: The image data to be normalized
:return: Normalized image data
"""
# TODO: Implement Min-Max scaling for grayscale image data
return 0.1 + (image_data * 0.8) / 255
### DON'T MODIFY ANYTHING BELOW ###
# Test Cases
np.testing.assert_array_almost_equal(
normalize_grayscale(np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 255])),
[0.1, 0.103137254902, 0.106274509804, 0.109411764706, 0.112549019608, 0.11568627451, 0.118823529412, 0.121960784314,
0.125098039216, 0.128235294118, 0.13137254902, 0.9],
decimal=3)
np.testing.assert_array_almost_equal(
normalize_grayscale(np.array([0, 1, 10, 20, 30, 40, 233, 244, 254,255])),
[0.1, 0.103137254902, 0.13137254902, 0.162745098039, 0.194117647059, 0.225490196078, 0.830980392157, 0.865490196078,
0.896862745098, 0.9])
if not is_features_normal:
train_features = normalize_grayscale(train_features)
test_features = normalize_grayscale(test_features)
is_features_normal = True
print('Tests Passed!')
if not is_labels_encod:
# Turn labels into numbers and apply One-Hot Encoding
encoder = LabelBinarizer()
encoder.fit(train_labels)
train_labels = encoder.transform(train_labels)
test_labels = encoder.transform(test_labels)
# Change to float32, so it can be multiplied against the features in TensorFlow, which are float32
train_labels = train_labels.astype(np.float32)
test_labels = test_labels.astype(np.float32)
is_labels_encod = True
print('Labels One-Hot Encoded')
assert is_features_normal, 'You skipped the step to normalize the features'
assert is_labels_encod, 'You skipped the step to One-Hot Encode the labels'
# Get randomized datasets for training and validation
train_features, valid_features, train_labels, valid_labels = train_test_split(
train_features,
train_labels,
test_size=0.05,
random_state=832289)
print('Training features and labels randomized and split.')
# Save the data for easy access
pickle_file = 'notMNIST.pickle'
if not os.path.isfile(pickle_file):
print('Saving data to pickle file...')
try:
with open('notMNIST.pickle', 'wb') as pfile:
pickle.dump(
{
'train_dataset': train_features,
'train_labels': train_labels,
'valid_dataset': valid_features,
'valid_labels': valid_labels,
'test_dataset': test_features,
'test_labels': test_labels,
},
pfile, pickle.HIGHEST_PROTOCOL)
except Exception as e:
print('Unable to save data to', pickle_file, ':', e)
raise
print('Data cached in pickle file.')
"""
Explanation: <img src="image/Mean_Variance_Image.png" style="height: 75%;width: 75%; position: relative; right: 5%">
Problem 1
The first problem involves normalizing the features for your training and test data.
Implement Min-Max scaling in the normalize_grayscale() function to a range of a=0.1 and b=0.9. After scaling, the values of the pixels in the input data should range from 0.1 to 0.9.
Since the raw notMNIST image data is in grayscale, the current values range from a min of 0 to a max of 255.
Min-Max Scaling:
$
X'=a+{\frac {\left(X-X_{\min }\right)\left(b-a\right)}{X_{\max }-X_{\min }}}
$
If you're having trouble solving problem 1, you can view the solution here.
End of explanation
"""
%matplotlib inline
# Load the modules
import pickle
import math
import numpy as np
import tensorflow as tf
from tqdm import tqdm
import matplotlib.pyplot as plt
# Reload the data
pickle_file = 'notMNIST.pickle'
with open(pickle_file, 'rb') as f:
pickle_data = pickle.load(f)
train_features = pickle_data['train_dataset']
train_labels = pickle_data['train_labels']
valid_features = pickle_data['valid_dataset']
valid_labels = pickle_data['valid_labels']
test_features = pickle_data['test_dataset']
test_labels = pickle_data['test_labels']
del pickle_data # Free up memory
print('Data and modules loaded.')
"""
Explanation: Checkpoint
All your progress is now saved to the pickle file. If you need to leave and come back to this lab, you no longer have to start from the beginning. Just run the code block below and it will load all the data and modules required to proceed.
End of explanation
"""
# All the pixels in the image (28 * 28 = 784)
features_count = 784
# All the labels
labels_count = 10
# TODO: Set the features and labels tensors
features = tf.placeholder(tf.float32)
labels = tf.placeholder(tf.float32)
# TODO: Set the weights and biases tensors
weights = tf.Variable(tf.truncated_normal((features_count, labels_count)))
biases = tf.Variable(tf.zeros(labels_count))
### DON'T MODIFY ANYTHING BELOW ###
#Test Cases
from tensorflow.python.ops.variables import Variable
assert features._op.name.startswith('Placeholder'), 'features must be a placeholder'
assert labels._op.name.startswith('Placeholder'), 'labels must be a placeholder'
assert isinstance(weights, Variable), 'weights must be a TensorFlow variable'
assert isinstance(biases, Variable), 'biases must be a TensorFlow variable'
assert features._shape == None or (\
features._shape.dims[0].value is None and\
features._shape.dims[1].value in [None, 784]), 'The shape of features is incorrect'
assert labels._shape == None or (\
labels._shape.dims[0].value is None and\
labels._shape.dims[1].value in [None, 10]), 'The shape of labels is incorrect'
assert weights._variable._shape == (784, 10), 'The shape of weights is incorrect'
assert biases._variable._shape == (10), 'The shape of biases is incorrect'
assert features._dtype == tf.float32, 'features must be type float32'
assert labels._dtype == tf.float32, 'labels must be type float32'
# Feed dicts for training, validation, and test session
train_feed_dict = {features: train_features, labels: train_labels}
valid_feed_dict = {features: valid_features, labels: valid_labels}
test_feed_dict = {features: test_features, labels: test_labels}
# Linear Function WX + b
logits = tf.matmul(features, weights) + biases
prediction = tf.nn.softmax(logits)
# Cross entropy
cross_entropy = -tf.reduce_sum(labels * tf.log(prediction), reduction_indices=1)
# Training loss
loss = tf.reduce_mean(cross_entropy)
# Create an operation that initializes all variables
init = tf.global_variables_initializer()
# Test Cases
with tf.Session() as session:
session.run(init)
session.run(loss, feed_dict=train_feed_dict)
session.run(loss, feed_dict=valid_feed_dict)
session.run(loss, feed_dict=test_feed_dict)
biases_data = session.run(biases)
assert not np.count_nonzero(biases_data), 'biases must be zeros'
print('Tests Passed!')
# Determine if the predictions are correct
is_correct_prediction = tf.equal(tf.argmax(prediction, 1), tf.argmax(labels, 1))
# Calculate the accuracy of the predictions
accuracy = tf.reduce_mean(tf.cast(is_correct_prediction, tf.float32))
print('Accuracy function created.')
"""
Explanation: Problem 2
Now it's time to build a simple neural network using TensorFlow. Here, your network will be just an input layer and an output layer.
<img src="image/network_diagram.png" style="height: 40%;width: 40%; position: relative; right: 10%">
For the input here the images have been flattened into a vector of $28 \times 28 = 784$ features. Then, we're trying to predict the image digit so there are 10 output units, one for each label. Of course, feel free to add hidden layers if you want, but this notebook is built to guide you through a single layer network.
For the neural network to train on your data, you need the following <a href="https://www.tensorflow.org/resources/dims_types.html#data-types">float32</a> tensors:
- features
- Placeholder tensor for feature data (train_features/valid_features/test_features)
- labels
- Placeholder tensor for label data (train_labels/valid_labels/test_labels)
- weights
- Variable Tensor with random numbers from a truncated normal distribution.
- See <a href="https://www.tensorflow.org/api_docs/python/constant_op.html#truncated_normal">tf.truncated_normal() documentation</a> for help.
- biases
- Variable Tensor with all zeros.
- See <a href="https://www.tensorflow.org/api_docs/python/constant_op.html#zeros"> tf.zeros() documentation</a> for help.
If you're having trouble solving problem 2, review "TensorFlow Linear Function" section of the class. If that doesn't help, the solution for this problem is available here.
End of explanation
"""
# Change if you have memory restrictions
batch_size = 128
# TODO: Find the best parameters for each configuration
epochs = 5
learning_rate = 0.2
### DON'T MODIFY ANYTHING BELOW ###
# Gradient Descent
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)
# The accuracy measured against the validation set
validation_accuracy = 0.0
# Measurements use for graphing loss and accuracy
log_batch_step = 50
batches = []
loss_batch = []
train_acc_batch = []
valid_acc_batch = []
with tf.Session() as session:
session.run(init)
batch_count = int(math.ceil(len(train_features)/batch_size))
for epoch_i in range(epochs):
# Progress bar
batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches')
# The training cycle
for batch_i in batches_pbar:
# Get a batch of training features and labels
batch_start = batch_i*batch_size
batch_features = train_features[batch_start:batch_start + batch_size]
batch_labels = train_labels[batch_start:batch_start + batch_size]
# Run optimizer and get loss
_, l = session.run(
[optimizer, loss],
feed_dict={features: batch_features, labels: batch_labels})
# Log every 50 batches
if not batch_i % log_batch_step:
# Calculate Training and Validation accuracy
training_accuracy = session.run(accuracy, feed_dict=train_feed_dict)
validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict)
# Log batches
previous_batch = batches[-1] if batches else 0
batches.append(log_batch_step + previous_batch)
loss_batch.append(l)
train_acc_batch.append(training_accuracy)
valid_acc_batch.append(validation_accuracy)
# Check accuracy against Validation data
validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict)
loss_plot = plt.subplot(211)
loss_plot.set_title('Loss')
loss_plot.plot(batches, loss_batch, 'g')
loss_plot.set_xlim([batches[0], batches[-1]])
acc_plot = plt.subplot(212)
acc_plot.set_title('Accuracy')
acc_plot.plot(batches, train_acc_batch, 'r', label='Training Accuracy')
acc_plot.plot(batches, valid_acc_batch, 'x', label='Validation Accuracy')
acc_plot.set_ylim([0, 1.0])
acc_plot.set_xlim([batches[0], batches[-1]])
acc_plot.legend(loc=4)
plt.tight_layout()
plt.show()
print('Validation accuracy at {}'.format(validation_accuracy))
"""
Explanation: <img src="image/Learn_Rate_Tune_Image.png" style="height: 70%;width: 70%">
Problem 3
Below are 2 parameter configurations for training the neural network. In each configuration, one of the parameters has multiple options. For each configuration, choose the option that gives the best accuracy.
Parameter configurations:
Configuration 1
* Epochs: 1
* Learning Rate:
* 0.8
* 0.5
* 0.1
* 0.05
* 0.01
Configuration 2
* Epochs:
* 1
* 2
* 3
* 4
* 5
* Learning Rate: 0.2
The code will print out a Loss and Accuracy graph, so you can see how well the neural network performed.
If you're having trouble solving problem 3, you can view the solution here.
End of explanation
"""
### DON'T MODIFY ANYTHING BELOW ###
# The accuracy measured against the test set
test_accuracy = 0.0
with tf.Session() as session:
session.run(init)
batch_count = int(math.ceil(len(train_features)/batch_size))
for epoch_i in range(epochs):
# Progress bar
batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches')
# The training cycle
for batch_i in batches_pbar:
# Get a batch of training features and labels
batch_start = batch_i*batch_size
batch_features = train_features[batch_start:batch_start + batch_size]
batch_labels = train_labels[batch_start:batch_start + batch_size]
# Run optimizer
_ = session.run(optimizer, feed_dict={features: batch_features, labels: batch_labels})
# Check accuracy against Test data
test_accuracy = session.run(accuracy, feed_dict=test_feed_dict)
assert test_accuracy >= 0.80, 'Test accuracy at {}, should be equal to or greater than 0.80'.format(test_accuracy)
print('Nice Job! Test Accuracy is {}'.format(test_accuracy))
"""
Explanation: Test
You're going to test your model against your hold out dataset/testing data. This will give you a good indicator of how well the model will do in the real world. You should have a test accuracy of at least 80%.
End of explanation
"""
|
sassoftware/sas-viya-programming | python/python-integration-viya/SAS Viya_ CAS & Python Integration Workshop.ipynb | apache-2.0 | ## Data Management
import swat
import pandas as pd
## Data Visualization
from matplotlib import pyplot as plt
import seaborn as sns
%matplotlib inline
## Global Options
swat.options.cas.trace_actions = False # Enabling tracing of actions (Default is False. Will change to true later)
swat.options.cas.trace_ui_actions = False # Display the actions behind “UI” methods (Default is False. Will change to true later)
pd.set_option('display.max_columns', 500) # Modify DataFrame max columns shown
pd.set_option('display.max_colwidth', 1000) # Modify DataFrame max column width
"""
Explanation: SAS Viya, CAS & Python Integration Workshop
Notebook Summary
Set Up
Exploring CAS Action Sets and the CASResults Object
Working with a SASDataFrame
Exploring the CAS File Structure
Loading Data Into CAS
Exploring Table Details
Data Exploration
Filtering Data
Data Preparation
SQL
Analyzing Data
Promote the Table to use in SAS Visual Analytics
SAS Viya
What is SAS Viya
SAS Viya extends the SAS Platform, operates in the cloud (as well as in hybrid and on-prem solutions) and is open source-friendly. For better performance while manipulating data and running analytical procedures, SAS Viya can run your code in Cloud Analytic Services (CAS). CAS operates on in-memory data, removing the read/write transfer overhead. Further, it enables everyone in an organization to collaborate and work with data by providing a variety of products and solutions running in CAS.
Cloud Analytic Services (CAS)
SAS Viya processes data and performs analytics using SAS Cloud Analytic Services, or CAS for short. CAS provides a powerful distributed computing environment designed to store large data sets in memory for fast and efficient processing. It uses scalable, high-performance, multi-threaded algorithms to rapidly perform analytical processing on in-memory data of any size.
For more information about Cloud Analytic Services, visit the documentation: SAS® Cloud Analytic Services 3.5: Fundamentals
SAS Viya is Open
SAS Viya is open. Business analysts and data scientists can explore, prepare and manage data to provide insights, create visualizations or analytical models using the SAS programming language or a variety of open source languages like Python, R, Lua, or Java. Because of this, programmers can easily process data in CAS, using a language of their choice.
<a id='1'>1. Set Up
a. Import Packages
Visit the documentation for the SWAT (SAS Scripting Wrapper for Analytics Transfer) package.
End of explanation
"""
conn = swat.CAS("server", 8777, "student", "Metadata0", protocol="http")
conn
"""
Explanation: b. Make a Connection to CAS</a>
To connect to the CAS server you will need:
the host name,
the port number,
your user name, and your password.
Visit the documentation Getting Started with SAS® Viya® 3.5 for Python for more information about connecting to CAS.
Be aware that connecting to the CAS server can be implemented in various ways, so you might need to see your system administrator about how to make a connection. Please follow company policy regarding authentication.
End of explanation
"""
conn.fileinfo()
## Download the data from github and load to the CAS server
conn.read_csv(r"https://raw.githubusercontent.com/sassoftware/sas-viya-programming/master/data/cars.csv",
casout={"name":"cars", "caslib":"casuser", "replace":True})
## Save the in-memory table as a physical file
conn.save(table="cars", name="cars.sashdat",
caslib="casuser",
replace=True)
## Drop the in-memory table
conn.droptable(name='cars', caslib="casuser")
"""
Explanation: c. Obtain Data for the Demo
End of explanation
"""
conn.builtins.actionSetInfo()
"""
Explanation: <a id='2'>2. Exploring CAS Action Sets and the CASResults Object</a>
Think of action sets as a package, and all the actions inside an action set as a method.
CAS actions interact with the CAS server and return a CASResults object.
A CASResults object is simply an ordered Python dictionary with a few extra methods and attributes added.
You can also use the SWAT package API to interact with the CAS server. The SWAT package contains many of the methods defined by Pandas DataFrames. Using methods from the SWAT API will typically return a CASTable, CASColumn, pandas.DataFrame, or pandas.Series object.
Documentation:
- To view all CAS action sets and actions visit the documentation: SAS® Viya® 3.5 Actions and Action Sets by Name and Product
To view the SWAT API Reference visit: API Reference
a. View All the CAS Action Sets that are Loaded in CAS.
From the Builtins action set, use the actionSetInfo action to view all loaded action sets.
CAS action sets and actions are case insensitive.
CAS actions return a CASResults object.
End of explanation
"""
conn.help(actionSet="builtins")
"""
Explanation: View the available CAS actions in the builtins action set using the help function.
End of explanation
"""
conn.actionSetInfo()
"""
Explanation: You do not need to specify the CAS action set prior to the CAS action. Moving forward, actions will be called without the CAS action set prefix.
End of explanation
"""
type(conn.actionSetInfo())
"""
Explanation: All CAS actions return a CASResults object.
End of explanation
"""
conn.actionSetInfo().keys()
"""
Explanation: b. CASResults Object
A CASResults object is an ordered Python dictionary with keys and values.
A CASResults object is local data returned by the CAS server.
While all CAS actions return a CASResults object, there are no rules about how many keys are contained in the object, or what values are returned.
View the keys in the CASResults object. This specific CASResults object contains a single key, and a single value.
End of explanation
"""
conn.actionSetInfo()['setinfo']
"""
Explanation: Call the setinfo key to return the value.
End of explanation
"""
type(conn.actionSetInfo()['setinfo'])
"""
Explanation: The setinfo key holds a SASDataFrame object.
End of explanation
"""
df = conn.actionSetInfo()['setinfo']
type(df)
"""
Explanation: <a id='3'>3. Working with a SASDataFrame
A SASDataFrame object contains local data.
A SASDataFrame object is a subclass of a Pandas DataFrame. You can work with it as you normally would with a Pandas DataFrame.
NOTE: When bringing data from CAS to your local machine, remember that CAS can hold more data than your local computer can handle.
a. Create a SASDataFrame Object Named df.
End of explanation
"""
df.head()
"""
Explanation: A SASDataFrame is local data. Work with it as you would a Pandas DataFrame.
b. Use Pandas Methods on a SASDataFrame.
View the first 5 rows of the SASDataFrame using the pandas head method.
End of explanation
"""
df.loc[df['actionset']=='simple',['actionset','label']]
"""
Explanation: Find all rows where the value in the actionset column equals simple using the pandas loc method.
End of explanation
"""
df['product_name'].value_counts().plot(kind="bar")
"""
Explanation: View counts of unique values using the pandas value_counts method and plot a bar chart.
End of explanation
"""
conn.caslibInfo()
"""
Explanation: <a id='4'> 4. Exploring the CAS File Structure</a>
Caslib Overview:
A caslib has two parts:
Data Source - Connection information about the data source gives access to a resource that contains data. These can be files that are located in a file system, a database, streaming data from an ESP (Event Stream Processing) server, or other data sources that SAS can access.
In-Memory Space - The in-memory portion of a caslib that contains data that is uploaded into memory and ready for processing.
Think of your active caslib as the current working directory of your CAS session, and it's only possible to have one active caslib.
When you want to work with data from your data source, you must load the data into the in-memory portion for processing. This loaded table is known as a CAS Table.
Types of Caslibs:
Personal Caslib - By default, all users are given access to their own caslib, named CASUSER, within a CAS session. This is a personal caslib and is only accessible to the user who owns the CAS session.
Pre-defined Caslib - These are defined by an administrator and are available to all CAS sessions (dependent on access controls). Think of these as different folders for different units of a business. You can have an HR caslib with HR data, Marketing caslib with Marketing data, etc.
Manually added Caslib - These can be added at any point to perform ad-hoc analysis within CAS.
Caslib Scope
Session Caslib - When a caslib is defined without including the GLOBAL option, the caslib is a session-scoped caslib. When a table is loaded to the CAS server with session-scoped caslib, the table is available to that specific CAS user session only. Think of session scope as local to that specific session only.
Global Caslib - These are available to anyone who has access to the CAS Server (dependent on access controls). The name of these caslibs must be unique across all CAS sessions on the server.
For additional information about caslibs:
- Watch SAS® Viya™ CAS Libraries (Caslibs) Simplified
- SAS® Cloud Analytic Services 3.5: Fundamentals - Caslibs
a. View all Available Caslibs
Depending on your CAS server setup, you might already have one or more caslibs configured and ready to use.
If you do not have ReadInfo permissions on a caslib, then you will not see the caslib.
View all available caslibs using the casLibInfo action.
End of explanation
"""
conn.fileInfo(caslib="casuser")
"""
Explanation: b. View Available Files in the casuser Caslib
End of explanation
"""
conn.tableInfo(caslib="casuser")
"""
Explanation: c. View All Available In-Memory Tables in the casuser Caslib
NOTE: Tables need to be in-memory to be processed by CAS.
End of explanation
"""
conn.fileInfo(caslib="casuser")
"""
Explanation: <a id='5'>5. Loading Data Into CAS
There are various ways of loading data into CAS:
1. server-side data
2. client-side parsed
3. client-side files uploaded and parsed on the server
They follow these naming conventions:
load*: Loads server-side data
read_*: Uses client-side parsers and then uploads the result into CAS
upload*: Uploads client-side files as is, which are parsed on the server
For more information about loading client side files to CAS: Two Simple Ways to Import Local Files with Python in CAS (Viya 3.5)
a. Loading Server-Side Data into Memory.
View the available files in the casuser caslib.
End of explanation
"""
# 1. Load the table into CAS. Will return a CASResults object.
conn.loadtable(path="cars.sashdat", caslib="casuser",
casout={"caslib":"casuser","name":"cars", "replace":True})
conn.tableInfo(caslib="casuser")
# 2. Create a reference to the in-memory table
castbl = conn.CASTable("cars",caslib="casuser")
"""
Explanation: There are two methods that can be used to load server-side data into CAS:
- loadtable - Loads a table into CAS and returns a CASResults object.
- load_path - Convenience method. Similar to loadtable, load_path loads a table into CAS and returns a reference to that CAS table in one step.
loadtable
End of explanation
"""
# Load the table into CAS and create a reference to that table in one step.
##castbl = conn.load_path(path="cars.sashdat", caslib="casuser",
## casout={"caslib":"casuser","name":"cars", "replace":True})
"""
Explanation: load_path
End of explanation
"""
type(castbl)
print(castbl)
"""
Explanation: b. Local vs CAS Data
A CASTable object is a reference to data in the CAS server. Actions or methods run on a CASTable object are processed in CAS.
End of explanation
"""
castbl.head()
"""
Explanation: View the first 5 rows of the in-memory table using the head method. The head method is not a CAS action, so it will not return a CASResults object. The head method uses the SWAT API to CAS, which contains many of the pandas methods you are familiar with. These methods process the data in CAS and can return a variety of different objects locally.
SWAT API Reference
End of explanation
"""
type(castbl.head())
"""
Explanation: The head method returns a SASDataFrame. SASDataFrame objects are stored locally.
End of explanation
"""
castbl.fetch(to=5)
"""
Explanation: You can use the fetch CAS action to return similar results. The processing of the fetch CAS action occurs in CAS and returns a CASResults object to your local machine. When using a CAS action a CASResults object is always returned.
End of explanation
"""
type(castbl.fetch(to=5))
"""
Explanation: CASResults objects are local.
End of explanation
"""
type(castbl.fetch(to=5)['Fetch'])
"""
Explanation: SASDataFrame objects can be contained in the CASResults object.
End of explanation
"""
swat.options.cas.trace_actions = True
swat.options.cas.trace_ui_actions = True
"""
Explanation: Turn on tracing.
End of explanation
"""
castbl.shape
"""
Explanation: <a id='6'>6. Exploring Table Details
a. View the Number of Rows and Columns in the In-Memory Table.
Use shape to return a tuple of the CAS data.
End of explanation
"""
castbl.numRows()
"""
Explanation: Use the numRows CAS action to show the number of rows in a CAS table.
End of explanation
"""
castbl.tableInfo()
"""
Explanation: Use the tableInfo CAS action to show information about a CAS table.
End of explanation
"""
def details(tbl):
sasdf = tbl.tableInfo()["TableInfo"].set_index("Name").loc[:,["Rows","Columns"]]
return sasdf
details(castbl)
"""
Explanation: Create a function to return the in-memory table name, number of rows and columns.
End of explanation
"""
castbl.columnInfo()
castbl.dtypes
"""
Explanation: b. View the Column Information
End of explanation
"""
castbl.summary()
"""
Explanation: <a id='7'>7. Data Exploration
a. Summary Statistics
Using the summary CAS action to generate descriptive statistics of numeric variables.
End of explanation
"""
castbl.describe()
"""
Explanation: Using the describe method.
End of explanation
"""
swat.options.cas.trace_actions = False
swat.options.cas.trace_ui_actions = False
"""
Explanation: Turn off tracing.
End of explanation
"""
castbl.distinct()
"""
Explanation: b. Distinct Values
Use the distinct CAS action to calculate the number of distinct values in the cars table.
End of explanation
"""
castbl.distinct()['Distinct'] \
.set_index("Column") \
.loc[:,['NMiss']] \
.plot(kind='bar')
"""
Explanation: Plot the number of missing values for each column.
End of explanation
"""
castbl.distinct(inputs=["Origin","Type","Make"])
"""
Explanation: Use the distinct CAS action to calculate the number of distinct values in the Origin, Type, and Make columns.
End of explanation
"""
castbl.distinct(inputs=["Origin","Type","Make"],
casout={"caslib":"casuser", ## Create a new CAS table in casuser
"name":"castblDistinct", ## Name the table castblDistinct
"replace":True}) ## Replace if exists
"""
Explanation: Create a new CAS table named castblDistinct with the number of distinct values for the specified inputs.
End of explanation
"""
conn.tableInfo()
"""
Explanation: View the available in-memory tables.
End of explanation
"""
castbl.Cylinders.nunique()
castbl.Cylinders.isnull().sum()
"""
Explanation: Using Pandas methods.
End of explanation
"""
castbl.freq(inputs=["Origin"])
"""
Explanation: c. Frequency
View the frequency of the Origin column using the freq CAS action.
End of explanation
"""
## Perform the processing in CAS and store the summary in the originFreq object.
originFreq = castbl.freq(inputs=["Origin"])['Frequency']
## Graph the summarized local data.
originFreq.loc[:,["CharVar","Frequency"]] \
.sort_values(by="Frequency", ascending=False) \
.set_index("CharVar") \
.plot(kind="bar")
"""
Explanation: Plot the results of the freq CAS action in a bar chart.
End of explanation
"""
castbl['Origin'].value_counts().plot(kind='bar')
"""
Explanation: Use the value_counts method. The value_counts method will process in CAS and return the summary locally. The plot method will create the graph locally.
End of explanation
"""
castbl.freq(inputs=["Origin","Make","Type","DriveTrain"])
"""
Explanation: Perform a frequency analysis on multiple columns. The final CASResults object will contain a SASDataFrame with the frequencies of all specified columns in one table.
End of explanation
"""
distinctCars = castbl.distinct()['Distinct']
distinctCars.loc[distinctCars["NDistinct"]<=20,:]
"""
Explanation: D. Create a Frequency Table of all Columns with Less Than 20 Distinct Values.
Use the distinct CAS action to find the number of distinct values for each column and filter for all columns with less than 20 distinct values.
End of explanation
"""
distinctCars = distinctCars.loc[distinctCars["NDistinct"]<=20,:]
"""
Explanation: Create a variable named distinctCars that holds the SASDataFrame from the results above.
End of explanation
"""
listCars = distinctCars.Column.unique().tolist()
print(listCars)
"""
Explanation: Create a list of column names that have less than 20 distinct values named listCars.
End of explanation
"""
castbl.freq(inputs=listCars)
"""
Explanation: Use the list from above to create a frequency table of columns with less than 20 distinct values.
End of explanation
"""
castbl[castbl["Make"]=="Toyota"].head()
castbl[(castbl["Make"]=="Toyota") & (castbl["Type"]=="Hybrid")].head()
"""
Explanation: <a id='8'>8. Filtering Data
a. Subset Using Pandas Indexing Expressions.
End of explanation
"""
castbl.query("Make='Toyota'").head()
castbl.query("Make='Toyota' and Type='Hybrid'").head()
"""
Explanation: b. Subset Using the Query Method.
End of explanation
"""
castbl["avgMPG"] = (castbl["MPG_City"] + castbl["MPG_Highway"])/2
castbl
castbl.head()
"""
Explanation: <a id='9'>9. Data Preparation
Create a new column that calculates the average of MPG_City and MPG_Highway. Processing done in CAS.
End of explanation
"""
cols = ['Make', 'Type', 'Origin', 'DriveTrain','Invoice',
'EngineSize', 'Cylinders', 'Horsepower', 'MPG_City',
'MPG_Highway', 'Weight', 'Wheelbase', 'Length', 'avgMPG']
castbl = castbl[cols]
castbl
castbl.head()
"""
Explanation: Remove the Model and MSRP columns.
End of explanation
"""
conn.actionSetInfo(all=True)['setinfo']
"""
Explanation: <a id='10'>10. SQL
a. Load the fedSQL CAS Action Set
View all available (not just loaded) CAS action sets by using the all=True parameter.
End of explanation
"""
actionSets = conn.actionSetInfo(all=True)['setinfo']
actionSets.loc[actionSets['actionset'].str.upper().str.contains("SQL")]
"""
Explanation: Search the actionset column for any CAS action set that contains the string sql.
End of explanation
"""
conn.loadActionSet(actionSet="fedSQL")
conn.actionSetInfo()
conn.help(actionSet="fedSQL")
"""
Explanation: Load the fedSQL action set using the loadActionSet action.
End of explanation
"""
conn.execdirect("""select *
from cars
limit 10""")
"""
Explanation: b. Run SQL Queries in CAS
Run a query to view the first 10 rows of the cars table.
End of explanation
"""
conn.execdirect("""select Make, round(avg(MSRP)) as avgMSRP
from cars
group by Make""")
"""
Explanation: Find the average MSRP of each car make.
End of explanation
"""
conn.execdirect("""create table make_avg as
select Make, round(avg(MSRP)) as test
from cars
group by Make""")
conn.tableInfo(caslib="casuser")
"""
Explanation: Create a table named make_avg that contains the average MSRP of each car make.
End of explanation
"""
castbl.head()
"""
Explanation: <a id='11'>11. Analyzing Data
Preview the table.
End of explanation
"""
castbl.correlation(inputs=["MSRP","EngineSize","HorsePower","MPG_City"], simple=False)
"""
Explanation: a. Correlation with a Heat Map
Use the correlation action and remove the simple statistics. Processing will be done in CAS and the summary table will be returned locally.
End of explanation
"""
dfCorr = castbl.correlation(inputs=["MSRP","EngineSize","HorsePower","MPG_City"], simple=False)['Correlation']
dfCorr
"""
Explanation: Store the SASDataFrame object in the dfCorr variable. A SASDataFrame object is local.
End of explanation
"""
dfCorr.set_index("Variable", inplace=True)
dfCorr
"""
Explanation: Replace the default index with the Variable column
End of explanation
"""
fig, ax = plt.subplots(figsize=(12,8))
ax = sns.heatmap(dfCorr, cmap="YlGnBu", annot=True)
ax.set_ylim(len(dfCorr),-.05) ## Truncation with defaults. Need to adjust limits. Fixed in a newer version of matplotlib.
"""
Explanation: Use seaborn to produce a heatmap.
End of explanation
"""
castbl.histogram(inputs=["avgMPG"])
"""
Explanation: b. Histogram
Run the histogram action to return a summary of the midpoints and percents. Processing occurs in CAS.
End of explanation
"""
mpgHist = castbl.histogram(inputs="avgMPG")['BinDetails']
"""
Explanation: Store the BinDetails in the variable mpgHist.
End of explanation
"""
mpgHist['Percent'] = mpgHist['Percent'].round(1)
mpgHist['MidPoint'] = mpgHist['MidPoint'].round(1)
mpgHist[["MidPoint","Percent"]].head()
"""
Explanation: Round the columns Percent and MidPoint.
End of explanation
"""
fig, ax = plt.subplots(figsize=(12,8))
ax = sns.barplot(x="MidPoint", y="Percent", data=mpgHist)
ax.set_title("Histogram of MPG")
"""
Explanation: Plot the histogram.
End of explanation
"""
castbl.histogram(inputs=["avgMPG", "HorsePower"])
"""
Explanation: Specify multiple columns in the histogram action.
End of explanation
"""
carsHist = castbl.histogram(inputs=["avgMPG", "HorsePower"])['BinDetails']
"""
Explanation: Store the results from the histogram CAS action in the carsHist variable.
End of explanation
"""
list(carsHist.Variable.unique())
"""
Explanation: Find the unique values in the carsHist SASDataFrame.
End of explanation
"""
for i in list(carsHist.Variable.unique()):
carsHist['Percent'] = carsHist['Percent'].round(1)
carsHist['MidPoint'] = carsHist['MidPoint'].round(1)
df = carsHist[carsHist["Variable"]==i]
df.plot.bar(x='MidPoint', y='Percent')
"""
Explanation: Run a loop through the list of unique values and plot a histogram for each.
End of explanation
"""
castbl.head()
castbl
"""
Explanation: <a id='12'>12. Promote the Table to use in SAS Visual Analytics
End of explanation
"""
castbl.save(name="updatedCars.sashdat", caslib="casuser")
"""
Explanation: Two Options:
Save the castbl object as a physical file
Create a new in-memory table from the castbl object.
a. Save the castbl Object as a Physical File.
Use the save CAS action to save the castbl object as a physical file. Here we will save it as a sashdat file.
End of explanation
"""
conn.fileInfo(caslib="casuser")
"""
Explanation: View the available files in the casuser caslib. Notice the updatedCars.sashdat file is available.
End of explanation
"""
castbl.partition(casout={"caslib":"casuser","name":"cars_update"})
"""
Explanation: b. Create a New In-Memory Table From the castbl Object.
The partition CAS action has a variety of options, but if we leave the defaults we can take the castbl object (reference to the cars table with a few columns dropped and the new avgMPG column) and create a new in-memory table without saving a physical file.
Here, a new in-memory table called cars_update will be created in the casuser caslib from the castbl object.
End of explanation
"""
conn.tableInfo(caslib="casuser")
"""
Explanation: View the new in-memory table cars_update.
End of explanation
"""
conn.fileInfo(caslib="casuser")
"""
Explanation: View the files in the casuser caslib. Notice no new files were created.
End of explanation
"""
conn.tableInfo(caslib="casuser")['TableInfo'][['Name','Rows','Columns','Global']]
"""
Explanation: c. Promote a Table to Global Scope.
View all the tables in the casuser caslib. Focus on the specified columns. Notice that no table has global scope.
End of explanation
"""
conn.promote(name="cars_update", caslib="casuser")
"""
Explanation: Use the promote CAS action to promote a table to global scope. Global scope allows other users and software like SAS Visual Analytics to use the in-memory table. Currently, all the in-memory tables are session scope. That is, only this account on this connection to CAS can see the in-memory tables.
In this example, the cars_update table is promoted to global scope in the casuser caslib. This only allows the current account (student) to access this table since it is promoted in the casuser caslib. If a table is promoted to global scope in a shared caslib, other users can see that table.
DEMO: Go to SAS Visual Analytics and see that cars_update does not exist outside of this session.
Promote the cars_update in-memory table to global scope
End of explanation
"""
conn.tableInfo(caslib="casuser")['TableInfo'][['Name','Rows','Columns','Global']]
"""
Explanation: Notice only the cars_update table is global.
End of explanation
"""
|
stephenbeckr/convex-optimization-class | Homeworks/APPM5630_HW8_helper.ipynb | mit | import numpy as np
def mySimpleSolver(f,x0,maxIters=13):
x = np.asarray(x0,dtype='float64').copy()
for k in range(maxIters):
fx = f(x)
x -= .001*x # some weird update rule, just to make something interesting happen
return x
# Let's solve this in 1D
f = lambda x : x**2
x = mySimpleSolver( f, 1 )
print(x)
# ... of course this isn't a real solver, so it won't converge to the right answer
# But anyhow, how can we see the history of function values?
"""
Explanation: <a href="https://colab.research.google.com/github/stephenbeckr/convex-optimization-class/blob/master/Homeworks/APPM5630_HW8_helper.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
How to get the history of function values?
For HW 8, you'll make a function and give it to a scipy or other standard solver, and then you want to plot the history of all function evaluations. Sometimes solvers save this for you, sometimes they don't.
Here's one way to save that information in case the solver doesn't save it for you. We'll assume that mySimpleSolver is some builtin solver to a package, and that you cannot modify it easily. So we need to find another way to save the information.
The trick is to essentially use a global variable, but we can make it a bit nicer to at least hiding that variable inside a class.
End of explanation
"""
f = lambda x : x**2
class fcn:
def __init__(self):
self.history = []
def evaluate(self,x):
# Whatever objective function you're implementing
# This also sees objective in the parent workspace,
# so you can just call those
fx = f(x)
self.history.append(fx)
return fx
def reset(self):
self.history = []
objective = fcn()
F = lambda x : objective.evaluate(x) # alternatively, have your class return a function
x = mySimpleSolver( F, 1 )
print(x)
"""
Explanation: One solution
Below is one way to use a class to do this
I'm not a software developer in python, so there may be much nicer ways, but this ought to be good enough. There are many variants and extensions you can do, and of course, if you wanted to record some error function (like if you knew the true answer $x^\star$ and wanted to record $\|x-x^\star\|$ every iteration), you could easily incorporate that too.
End of explanation
"""
objective.history
"""
Explanation: ... so we see that mySimpleSolver didn't have to do anything. Now to get the information, we just ask for the value of objective.history as follows:
End of explanation
"""
|
mne-tools/mne-tools.github.io | dev/_downloads/b99fcf919e5d2f612fcfee22adcfc330/40_autogenerate_metadata.ipynb | bsd-3-clause | from pathlib import Path
import matplotlib.pyplot as plt
import mne
data_dir = Path(mne.datasets.erp_core.data_path())
infile = data_dir / 'ERP-CORE_Subject-001_Task-Flankers_eeg.fif'
raw = mne.io.read_raw(infile, preload=True)
raw.filter(l_freq=0.1, h_freq=40)
raw.plot(start=60)
# extract events
all_events, all_event_id = mne.events_from_annotations(raw)
"""
Explanation: Auto-generating Epochs metadata
This tutorial shows how to auto-generate metadata for ~mne.Epochs, based on
events via mne.epochs.make_metadata.
We are going to use data from the erp-core-dataset (derived from
:footcite:Kappenman2021). This is EEG data from a single participant
performing an active visual task (Eriksen flanker task).
<div class="alert alert-info"><h4>Note</h4><p>If you wish to skip the introductory parts of this tutorial, you may jump
straight to `tut-autogenerate-metadata-ern` after completing the data
import and event creation in the
`tut-autogenerate-metadata-preparation` section.</p></div>
This tutorial is loosely divided into two parts:
We will first focus on producing ERP time-locked to the visual
stimulation, conditional on response correctness and response time in
order to familiarize ourselves with the ~mne.epochs.make_metadata
function.
After that, we will calculate ERPs time-locked to the responses – again,
conditional on response correctness – to visualize the error-related
negativity (ERN), i.e. the ERP component associated with incorrect
behavioral responses.
Preparation
Let's start by reading, filtering, and producing a simple visualization of the
raw data. The data is pretty clean and contains very few blinks, so there's no
need to apply sophisticated preprocessing and data cleaning procedures.
We will also convert the ~mne.Annotations contained in this dataset to events
by calling mne.events_from_annotations.
End of explanation
"""
# metadata for each epoch shall include events from the range: [0.0, 1.5] s,
# i.e. starting with stimulus onset and expanding beyond the end of the epoch
metadata_tmin, metadata_tmax = 0.0, 1.5
# auto-create metadata
# this also returns a new events array and an event_id dictionary. we'll see
# later why this is important
metadata, events, event_id = mne.epochs.make_metadata(
events=all_events, event_id=all_event_id,
tmin=metadata_tmin, tmax=metadata_tmax, sfreq=raw.info['sfreq'])
# let's look at what we got!
metadata
"""
Explanation: Creating metadata from events
The basics of make_metadata
Now it's time to think about the time windows to use for epoching and
metadata generation. It is important to understand that these time windows
need not be the same! That is, the automatically generated metadata might
include information about events from only a fraction of the epochs duration;
or it might include events that occurred well outside a given epoch.
Let us look at a concrete example. In the Flankers task of the ERP CORE
dataset, participants were required to respond to visual stimuli by pressing
a button. We're interested in looking at the visual evoked responses (ERPs)
of trials with correct responses. Assume that based on literature
studies, we decide that responses later than 1500 ms after stimulus onset are
to be considered invalid, because they don't capture the neuronal processes
of interest here. We can approach this in the following way with the help of
mne.epochs.make_metadata:
End of explanation
"""
row_events = ['stimulus/compatible/target_left',
'stimulus/compatible/target_right',
'stimulus/incompatible/target_left',
'stimulus/incompatible/target_right']
metadata, events, event_id = mne.epochs.make_metadata(
events=all_events, event_id=all_event_id,
tmin=metadata_tmin, tmax=metadata_tmax, sfreq=raw.info['sfreq'],
row_events=row_events)
metadata
"""
Explanation: Specifying time-locked events
We can see that the generated table has 802 rows, each one corresponding to
an individual event in all_events. The first column, event_name,
contains the name of the respective event around which the metadata of that
specific row was generated – we'll call that the "time-locked event",
because we'll assign it time point zero.
The names of the remaining columns correspond to the event names specified in
the all_event_id dictionary. These columns contain floats; the values
represent the latency of that specific event in seconds, relative to
the time-locked event (the one mentioned in the event_name column).
For events that didn't occur within the given time window, you'll see
a value of NaN, simply indicating that no event latency could be
extracted.
Now, there's a problem here. We want to investigate the visual ERPs only,
conditional on responses. But the metadata that was just created contains
one row for every event, including responses. While we could create
epochs for all events, allowing us to pass those metadata, and later subset
the created events, there's a more elegant way to handle things:
~mne.epochs.make_metadata has a row_events parameter that
allows us to specify for which events to create metadata rows, while
still creating columns for all events in the event_id dictionary.
Because the metadata, then, only pertains to a subset of our original events,
it's important to keep the returned events and event_id around for
later use when we're actually going to create our epochs, to ensure that
metadata, events, and event descriptions stay in sync.
End of explanation
"""
keep_first = 'response'
metadata, events, event_id = mne.epochs.make_metadata(
events=all_events, event_id=all_event_id,
tmin=metadata_tmin, tmax=metadata_tmax, sfreq=raw.info['sfreq'],
row_events=row_events,
keep_first=keep_first)
# visualize response times regardless of side
metadata['response'].plot.hist(bins=50, title='Response Times')
# the "first_response" column contains only "left" and "right" entries, derived
# from the initial event named "response/left" and "response/right"
print(metadata['first_response'])
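# Added summary (not in the original notebook): describe() on the 'response'
# column (latencies in seconds relative to stimulus onset) gives a quick
# overview of the reaction-time distribution before any cut-offs are applied.
print(metadata['response'].describe())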
"""
Explanation: Keeping only the first events of a group
The metadata now contains 400 rows – one per stimulation – and the same
number of columns as before. Great!
We have two types of responses in our data: response/left and
response/right. We would like to map those to "correct" and "incorrect".
To make this easier, we can ask ~mne.epochs.make_metadata to generate an
entirely new column that refers to the first response observed during the
given time interval. This works by passing a subset of the
:term:hierarchical event descriptors (HEDs, inspired by
:footcite:BigdelyShamloEtAl2013) used to name events via the keep_first
parameter. For example, in the case of the HEDs response/left and
response/right, we could pass keep_first='response' to generate a new
column, response, containing the latency of the respective event. This
value pertains only to the first (or, in this specific example: the only)
response, regardless of side (left or right). To indicate which event
type (here: response side) was matched, a second column is added:
first_response. The values in this column are the event types without the
string used for matching, as it is already encoded as the column name, i.e.
in our example, we expect it to only contain 'left' and 'right'.
End of explanation
"""
metadata.loc[metadata['stimulus/compatible/target_left'].notna() &
metadata['stimulus/compatible/target_right'].notna(),
:]
"""
Explanation: We're facing a similar issue with the stimulus events, and now there are not
only two, but four different types: stimulus/compatible/target_left,
stimulus/compatible/target_right, stimulus/incompatible/target_left,
and stimulus/incompatible/target_right. What's more, because in the present
paradigm stimuli were presented in rapid succession, sometimes multiple
stimulus events occurred within the 1.5 second time window we're using to
generate our metadata. See for example:
End of explanation
"""
keep_first = ['stimulus', 'response']
metadata, events, event_id = mne.epochs.make_metadata(
events=all_events, event_id=all_event_id,
tmin=metadata_tmin, tmax=metadata_tmax, sfreq=raw.info['sfreq'],
row_events=row_events,
keep_first=keep_first)
# all times of the time-locked events should be zero
assert all(metadata['stimulus'] == 0)
# the values in the new "first_stimulus" and "first_response" columns indicate
# which events were selected via "keep_first"
metadata[['first_stimulus', 'first_response']]
"""
Explanation: This can easily lead to confusion during later stages of processing, so let's
create a column for the first stimulus – which will always be the time-locked
stimulus, as our time interval starts at 0 seconds. We can pass a list of
strings to keep_first.
End of explanation
"""
# left-side stimulation
metadata.loc[metadata['first_stimulus'].isin(['compatible/target_left',
'incompatible/target_left']),
'stimulus_side'] = 'left'
# right-side stimulation
metadata.loc[metadata['first_stimulus'].isin(['compatible/target_right',
'incompatible/target_right']),
'stimulus_side'] = 'right'
# first assume all responses were incorrect, then mark those as correct where
# the stimulation side matches the response side
metadata['response_correct'] = False
metadata.loc[metadata['stimulus_side'] == metadata['first_response'],
'response_correct'] = True
correct_response_count = metadata['response_correct'].sum()
print(f'Correct responses: {correct_response_count}\n'
f'Incorrect responses: {len(metadata) - correct_response_count}')
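# Optional cross-check (added; not in the original tutorial): a contingency
# table of stimulation side vs. response side shows where 'response_correct'
# comes from – the diagonal cells are the matching (correct) trials.
# pandas is assumed to be importable here, since metadata is a DataFrame.
import pandas as pd
print(pd.crosstab(metadata['stimulus_side'], metadata['first_response'],
                  dropna=False))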
"""
Explanation: Adding new columns to describe stimulation side and response correctness
Perfect! Now it's time to define which responses were correct and incorrect.
We first add a column encoding the side of stimulation, and then simply
check whether the response matches the stimulation side, and add this result
to another column.
End of explanation
"""
epochs_tmin, epochs_tmax = -0.1, 0.4 # epochs range: [-0.1, 0.4] s
reject = {'eeg': 250e-6} # exclude epochs with strong artifacts
epochs = mne.Epochs(raw=raw, tmin=epochs_tmin, tmax=epochs_tmax,
events=events, event_id=event_id, metadata=metadata,
reject=reject, preload=True)
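# Sanity check (added): MNE keeps the metadata aligned with the epochs, so
# even after artifact rejection both must have the same length.
assert len(epochs) == len(epochs.metadata)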
"""
Explanation: Creating Epochs with metadata, and visualizing ERPs
It's finally time to create our epochs! We set the metadata directly on
instantiation via the metadata parameter. It is also important to
remember to pass events and event_id as returned from
~mne.epochs.make_metadata, as we only created metadata for a subset of
our original events by passing row_events. Otherwise, the length
of the metadata and the number of epochs would not match and MNE-Python
would raise an error.
End of explanation
"""
vis_erp = epochs['response_correct'].average()
# select slow (reaction time > 0.5 s) correct-response trials, matching the
# figure titles below and the description in the following text cell
vis_erp_slow = epochs['response_correct & '
                      '(response > 0.5)'].average()
fig, ax = plt.subplots(2, figsize=(6, 6))
vis_erp.plot(gfp=True, spatial_colors=True, axes=ax[0])
vis_erp_slow.plot(gfp=True, spatial_colors=True, axes=ax[1])
ax[0].set_title('Visual ERPs – All Correct Responses')
ax[1].set_title('Visual ERPs – Slow Correct Responses')
fig.tight_layout()
fig
"""
Explanation: Lastly, let's visualize the ERPs evoked by the visual stimulation, once for
all trials with correct responses, and once for all trials with correct
responses and a response time greater than 0.5 seconds
(i.e., slow responses).
End of explanation
"""
metadata_tmin, metadata_tmax = -1.5, 0
row_events = ['response/left', 'response/right']
keep_last = ['stimulus', 'response']
metadata, events, event_id = mne.epochs.make_metadata(
events=all_events, event_id=all_event_id,
tmin=metadata_tmin, tmax=metadata_tmax, sfreq=raw.info['sfreq'],
row_events=row_events,
keep_last=keep_last)
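# Quick peek (added; not in the original tutorial): keep_last stores the
# matched event types in columns prefixed with 'last_', analogous to the
# 'first_' columns produced by keep_first.
print(metadata[['last_stimulus', 'last_response']].head())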
"""
Explanation: Aside from the fact that the data for the (much fewer) slow responses looks
noisier – which is entirely to be expected – not much of an ERP difference
can be seen.
Applying the knowledge: visualizing the ERN component
In the following analysis, we will use the same dataset as above, but
we'll time-lock our epochs to the response events, not to the stimulus
onset. Comparing ERPs associated with correct and incorrect behavioral
responses, we should be able to see the error-related negativity (ERN) in
the difference wave.
Since we want to time-lock our analysis to responses, for the automated
metadata generation we'll consider events occurring up to 1500 ms before
the response trigger.
We only wish to consider the last stimulus and response in each time
window: Remember that we're dealing with rapid stimulus presentations in
this paradigm; taking the last response – at time point zero – and the last
stimulus – the one closest to the response – ensures we actually create
the right stimulus-response pairings. We can achieve this by passing the
keep_last parameter, which works exactly like the keep_first parameter
introduced above, except that it keeps the last occurrences of the specified
events and stores them in columns whose names start with last_.
End of explanation
"""
# left-side stimulation
metadata.loc[metadata['last_stimulus'].isin(['compatible/target_left',
'incompatible/target_left']),
'stimulus_side'] = 'left'
# right-side stimulation
metadata.loc[metadata['last_stimulus'].isin(['compatible/target_right',
'incompatible/target_right']),
'stimulus_side'] = 'right'
# first assume all responses were incorrect, then mark those as correct where
# the stimulation side matches the response side
metadata['response_correct'] = False
metadata.loc[metadata['stimulus_side'] == metadata['last_response'],
'response_correct'] = True
metadata
"""
Explanation: Exactly like in the previous example, create new columns stimulus_side
and response_correct.
End of explanation
"""
epochs_tmin, epochs_tmax = -0.6, 0.4
baseline = (-0.4, -0.2)
reject = {'eeg': 250e-6}
epochs = mne.Epochs(raw=raw, tmin=epochs_tmin, tmax=epochs_tmax,
baseline=baseline, reject=reject,
events=events, event_id=event_id, metadata=metadata,
preload=True)
"""
Explanation: Now it's time to epoch the data! When deciding upon the epoch
duration for this specific analysis, we need to ensure we see quite a bit of
signal from before and after the motor response. We also must be aware of
the fact that motor-/muscle-related signals will most likely be present
before the response button trigger pulse appears in our data, so the time
period close to the response event should not be used for baseline
correction. But at the same time, we don't want to use a baseline
period that extends too far away from the button event. The following values
seem to work quite well.
End of explanation
"""
epochs.metadata.loc[epochs.metadata['last_stimulus'].isna(), :]
"""
Explanation: Let's do a final sanity check: we want to make sure that in every row, we
actually have a stimulus. We use epochs.metadata (and not metadata)
because when creating the epochs, we passed the reject parameter, and
MNE-Python always ensures that epochs.metadata stays in sync with the
available epochs.
End of explanation
"""
epochs = epochs['last_stimulus.notna()']
"""
Explanation: Bummer! It seems the very first two responses were recorded before the
first stimulus appeared: the values in the last_stimulus column are missing
(NaN). There is a very simple way to select only those epochs that do have a
preceding stimulus (i.e., where last_stimulus is not missing):
End of explanation
"""
resp_erp_correct = epochs['response_correct'].average()
resp_erp_incorrect = epochs['not response_correct'].average()
mne.viz.plot_compare_evokeds({'Correct Response': resp_erp_correct,
'Incorrect Response': resp_erp_incorrect},
picks='FCz', show_sensors=True,
title='ERPs at FCz, time-locked to response')
# topoplot of average field from time 0.0-0.1 s
resp_erp_incorrect.plot_topomap(times=0.05, average=0.05, size=3,
title='Avg. topography 0–100 ms after '
'incorrect responses')
"""
Explanation: Time to calculate the ERPs for correct and incorrect responses.
For visualization, we'll only look at sensor FCz, which is known to show
the ERN nicely in the given paradigm. We'll also create a topoplot to get an
impression of the average scalp potentials measured in the first 100 ms after
an incorrect response.
End of explanation
"""
# difference wave: incorrect minus correct responses
resp_erp_diff = mne.combine_evoked([resp_erp_incorrect, resp_erp_correct],
weights=[1, -1])
fig, ax = plt.subplots()
resp_erp_diff.plot(picks='FCz', axes=ax, selectable=False, show=False)
# make ERP trace bolder
ax.lines[0].set_linewidth(1.5)
# add lines through origin
ax.axhline(0, ls='dotted', lw=0.75, color='gray')
ax.axvline(0, ls=(0, (10, 10)), lw=0.75, color='gray',
label='response trigger')
# mark trough
trough_time_idx = resp_erp_diff.copy().pick('FCz').data.argmin()
trough_time = resp_erp_diff.times[trough_time_idx]
ax.axvline(trough_time, ls=(0, (10, 10)), lw=0.75, color='red',
label='max. negativity')
# legend, axis labels, title
ax.legend(loc='lower left')
ax.set_xlabel('Time (s)', fontweight='bold')
ax.set_ylabel('Amplitude (µV)', fontweight='bold')
ax.set_title('Channel: FCz')
fig.suptitle('ERN (Difference Wave)', fontweight='bold')
fig
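# Possible extension (added as a sketch, not part of the original analysis):
# extract the ERN trough explicitly for reporting. Evoked.get_peak() with
# mode='neg' returns channel, latency and (optionally) amplitude; the exact
# signature may differ slightly between MNE versions.
ch, lat, amp = resp_erp_diff.copy().pick('FCz').get_peak(
    mode='neg', return_amplitude=True)
print(f'ERN trough on {ch} at {lat:.3f} s, amplitude {amp * 1e6:.1f} µV')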
"""
Explanation: We can see a strong negative deflection immediately after incorrect
responses, compared to correct responses. The topoplot, too, leaves no doubt:
what we're looking at is, in fact, the ERN.
Some researchers suggest constructing the difference wave between ERPs for
correct and incorrect responses, as it more clearly reveals signal
differences, while ideally also improving the signal-to-noise ratio (under
the assumption that the noise level in "correct" and "incorrect" trials is
similar). Let's do just that and put it into a publication-ready
visualization.
End of explanation
"""
|
kubeflow/code-intelligence | Issue_Embeddings/notebooks/01_AcquireData.ipynb | mit | from mdparse.parser import transform_pre_rules, compose
import pandas as pd
from tqdm import tqdm_notebook
from fastai.text.transform import defaults
"""
Explanation: Running This Notebook
This notebook should be run using the github/mdtok container on DockerHub. The Dockerfile that defines this container is located at the root of this repository named: cpu.Dockerfile
This will ensure that you are able to run this notebook properly as many of the dependencies in this project are rapidly changing. To run this notebook using this container, the commands are:
Get the container: docker pull github/mdtok
Run the container: docker run -it --net=host -v <host_dir>:/ds github/mdtok bash
End of explanation
"""
df = pd.read_csv(f'https://storage.googleapis.com/issue_label_bot/language_model_data/000000000000.csv.gz').sample(5)
df.head(1)
"""
Explanation: Source of Data
The GHArchive project ingests large amounts of data from GitHub repositories. This data is stored in BigQuery for public consumption.
For this project, we gathered over 18 million GitHub issues by executing this query. This query attempts to remove duplicate issues where the content of the issue is roughly the same.
The results of this query are split into 100 csv files, available for free download from the following Google Cloud Storage bucket:
https://storage.googleapis.com/issue_label_bot/language_model_data/0000000000{00-99}.csv.gz, each file contains approximately 180,000 issues and is 55MB compressed.
Preview Data
Download Sample
The below dataframe illustrates what the format of the raw data looks like:
End of explanation
"""
pd.set_option('max_colwidth', 1000)
df['clean_body'] = ''
for i, b in tqdm_notebook(enumerate(df.body), total=len(df)):
try:
df['clean_body'].iloc[i] = compose(transform_pre_rules+defaults.text_pre_rules)(b)
except:
print(f'error at: {i}')
break
df[['body', 'clean_body']]
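# Minimal illustration (added; not in the original notebook): run the same
# transformation pipeline on a single, hypothetical markdown string to see
# what the pre-processing does to raw issue text.
sample_md = "## Bug report\n\nSteps to reproduce:\n```python\nprint('hi')\n```"
print(compose(transform_pre_rules + defaults.text_pre_rules)(sample_md))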
"""
Explanation: Illustrate Markdown Parsing Using mdparse
mdparse is a library that parses markdown text and annotates it with meta-data fields useful for deep learning. Below is an illustration of mdparse at work; the parsed and annotated text can be seen in the clean_body field:
The changes are often subtle, but can make a big difference with regard to feature extraction for language modeling.
End of explanation
"""
from fastai.text.transform import ProcessPoolExecutor, partition_by_cores
import numpy as np
from fastai.core import parallel
from itertools import chain
transforms = transform_pre_rules + defaults.text_pre_rules
def process_dict(dfdict, _):
"""process the data, but allow failure."""
t = compose(transforms)
title = dfdict['title']
body = dfdict['body']
try:
text = 'xxxfldtitle '+ t(title) + ' xxxfldbody ' + t(body)
except:
return None
return {'url': dfdict['url'], 'text':text}
def download_data(i, _):
"""Since the data is in 100 chunks already, just do the processing by chunk."""
fn = f'https://storage.googleapis.com/issue_label_bot/language_model_data/{str(i).zfill(12)}.csv.gz'
    dicts = [process_dict(d, 0) for d in pd.read_csv(fn).to_dict(orient='records')]
df = pd.DataFrame([d for d in dicts if d])
df.to_csv(f'/ds/IssuesLanguageModel/data/1_processed_csv/processed_part{str(i).zfill(4)}.csv', index=False)
return df
"""
Explanation: Download And Pre-Process Data
We download the data from GCP and pre-process this data before saving to disk.
End of explanation
"""
dfs = parallel(download_data, list(range(100)), max_workers=31)
dfs_rows = sum([x.shape[0] for x in dfs])
print(f'number of rows in pre-processed data: {dfs_rows:,}')
del dfs
"""
Explanation: Note: The below procedure took over 30 hours on a p3.8xlarge instance on AWS with 32 Cores and 64GB of Memory. You may have to change the number of workers based on your memory and compute constraints.
End of explanation
"""
from pathlib import Path
from random import shuffle
# shuffle the files
p = Path('/ds/IssuesLanguageModel/data/1_processed_csv/')
files = p.ls()
shuffle(files)
# show a preview of files
files[:5]
valid_df = pd.concat([pd.read_csv(f) for f in files[:10]]).dropna().drop_duplicates()
train_df = pd.concat([pd.read_csv(f) for f in files[10:]]).dropna().drop_duplicates()
print(f'rows in train_df: {train_df.shape[0]:,}')
print(f'rows in valid_df: {valid_df.shape[0]:,}')
# to_hdf requires a key naming the object inside the HDF5 store; the key
# names used here are arbitrary choices.
valid_df.to_hdf('/ds/IssuesLanguageModel/data/2_partitioned_df/valid_df.hdf', key='valid_df')
train_df.to_hdf('/ds/IssuesLanguageModel/data/2_partitioned_df/train_df.hdf', key='train_df')
"""
Explanation: Cached pre-processed data
Since ~19M GitHub issues take a long time to pre-process, the pre-processed files are available here:
https://storage.googleapis.com/issue_label_bot/pre_processed_data/1_processed_csv/processed_part00{00-99}.csv
Partition Data Into Train/Validation Set
Set aside a random 10 files (out of 100) as the validation set
End of explanation
"""
|
tarashor/vibrations | py/notebooks/.ipynb_checkpoints/MatricesForPlaneCorrugatedShells1-checkpoint.ipynb | mit | from sympy import *
from geom_util import *
from sympy.vector import CoordSys3D
import matplotlib.pyplot as plt
import sys
sys.path.append("../")
%matplotlib inline
%reload_ext autoreload
%autoreload 2
%aimport geom_util
# Any tweaks that normally go in .matplotlibrc, etc., should explicitly go here
%config InlineBackend.figure_format='retina'
plt.rcParams['figure.figsize'] = (12, 12)
plt.rc('text', usetex=True)
plt.rc('font', family='serif')
# SMALL_SIZE = 42
# MEDIUM_SIZE = 42
# BIGGER_SIZE = 42
# plt.rc('font', size=SMALL_SIZE) # controls default text sizes
# plt.rc('axes', titlesize=SMALL_SIZE) # fontsize of the axes title
# plt.rc('axes', labelsize=MEDIUM_SIZE) # fontsize of the x and y labels
# plt.rc('xtick', labelsize=SMALL_SIZE) # fontsize of the tick labels
# plt.rc('ytick', labelsize=SMALL_SIZE) # fontsize of the tick labels
# plt.rc('legend', fontsize=SMALL_SIZE) # legend fontsize
# plt.rc('figure', titlesize=BIGGER_SIZE) # fontsize of the figure title
init_printing()
N = CoordSys3D('N')
alpha1, alpha2, alpha3 = symbols("alpha_1 alpha_2 alpha_3", real = True, positive=True)
"""
Explanation: Matrix generation
Init symbols for sympy
End of explanation
"""
R, L, ga, gv = symbols("R L g_a g_v", real = True, positive=True)
a1 = pi / 2 + (L / 2 - alpha1)/R
a2 = 2 * pi * alpha1 / L
x1 = (R + ga * cos(gv * a1)) * cos(a1)
x2 = alpha2
x3 = (R + ga * cos(gv * a1)) * sin(a1)
r = x1*N.i + x2*N.j + x3*N.k
z = ga/R*gv*sin(gv*a1)
w = 1 + ga/R*cos(gv*a1)
dr1x=(z*cos(a1) + w*sin(a1))
dr1z=(z*sin(a1) - w*cos(a1))
r1 = dr1x*N.i + dr1z*N.k
r2 =N.j
mag=sqrt((w)**2+(z)**2)
nx = -dr1z/mag
nz = dr1x/mag
n = nx*N.i+nz*N.k
dnx=nx.diff(alpha1)
dnz=nz.diff(alpha1)
dn= dnx*N.i+dnz*N.k
Ralpha = r+alpha3*n
R1=r1+alpha3*dn
R2=Ralpha.diff(alpha2)
R3=n
R1
R2
R3
"""
Explanation: Cylindrical coordinates
End of explanation
"""
import plot
%aimport plot
x1 = Ralpha.dot(N.i)
x3 = Ralpha.dot(N.k)
alpha1_x = lambdify([R, L, ga, gv, alpha1, alpha3], x1, "numpy")
alpha3_z = lambdify([R, L, ga, gv, alpha1, alpha3], x3, "numpy")
R_num = 1/0.8
L_num = 2
h_num = 0.1
ga_num = h_num/3
gv_num = 20
x1_start = 0
x1_end = L_num
x3_start = -h_num/2
x3_end = h_num/2
def alpha_to_x(a1, a2, a3):
x=alpha1_x(R_num, L_num, ga_num, gv_num, a1, a3)
z=alpha3_z(R_num, L_num, ga_num, gv_num, a1, a3)
return x, 0, z
plot.plot_init_geometry_2(x1_start, x1_end, x3_start, x3_end, alpha_to_x)
%aimport plot
R3_1=R3.dot(N.i)
R3_3=R3.dot(N.k)
R3_1_x = lambdify([R, L, ga, gv, alpha1, alpha3], R3_1, "numpy")
R3_3_z = lambdify([R, L, ga, gv, alpha1, alpha3], R3_3, "numpy")
def R3_to_x(a1, a2, a3):
x=R3_1_x(R_num, L_num, ga_num, gv_num, a1, a3)
z=R3_3_z(R_num, L_num, ga_num, gv_num, a1, a3)
return x, 0, z
plot.plot_vectors(x1_start, x1_end, 0, alpha_to_x, R3_to_x)
%aimport plot
R1_1=R1.dot(N.i)
R1_3=R1.dot(N.k)
R1_1_x = lambdify([R, L, ga, gv, alpha1, alpha3], R1_1, "numpy")
R1_3_z = lambdify([R, L, ga, gv, alpha1, alpha3], R1_3, "numpy")
def R1_to_x(a1, a2, a3):
x=R1_1_x(R_num, L_num, ga_num, gv_num, a1, a3)
z=R1_3_z(R_num, L_num, ga_num, gv_num, a1, a3)
return x, 0, z
plot.plot_vectors(x1_start, x1_end, h_num/2, alpha_to_x, R1_to_x)
"""
Explanation: Draw
End of explanation
"""
H1 = sqrt((alpha3*((-(1 + ga*cos(gv*(pi/2 + (L/2 - alpha1)/R))/R)*sin((L/2 - alpha1)/R) - ga*gv*sin(gv*(pi/2 + (L/2 - alpha1)/R))*cos((L/2 - alpha1)/R)/R)*(-ga*gv*(1 + ga*cos(gv*(pi/2 + (L/2 - alpha1)/R))/R)*sin(gv*(pi/2 + (L/2 - alpha1)/R))/R**2 + ga**2*gv**3*sin(gv*(pi/2 + (L/2 - alpha1)/R))*cos(gv*(pi/2 + (L/2 - alpha1)/R))/R**3)/((1 + ga*cos(gv*(pi/2 + (L/2 - alpha1)/R))/R)**2 + ga**2*gv**2*sin(gv*(pi/2 + (L/2 - alpha1)/R))**2/R**2)**(3/2) + ((1 + ga*cos(gv*(pi/2 + (L/2 - alpha1)/R))/R)*cos((L/2 - alpha1)/R)/R + ga*gv**2*cos((L/2 - alpha1)/R)*cos(gv*(pi/2 + (L/2 - alpha1)/R))/R**2 - 2*ga*gv*sin((L/2 - alpha1)/R)*sin(gv*(pi/2 + (L/2 - alpha1)/R))/R**2)/sqrt((1 + ga*cos(gv*(pi/2 + (L/2 - alpha1)/R))/R)**2 + ga**2*gv**2*sin(gv*(pi/2 + (L/2 - alpha1)/R))**2/R**2)) + (1 + ga*cos(gv*(pi/2 + (L/2 - alpha1)/R))/R)*cos((L/2 - alpha1)/R) - ga*gv*sin((L/2 - alpha1)/R)*sin(gv*(pi/2 + (L/2 - alpha1)/R))/R)**2 + (alpha3*(((1 + ga*cos(gv*(pi/2 + (L/2 - alpha1)/R))/R)*cos((L/2 - alpha1)/R) - ga*gv*sin((L/2 - alpha1)/R)*sin(gv*(pi/2 + (L/2 - alpha1)/R))/R)*(-ga*gv*(1 + ga*cos(gv*(pi/2 + (L/2 - alpha1)/R))/R)*sin(gv*(pi/2 + (L/2 - alpha1)/R))/R**2 + ga**2*gv**3*sin(gv*(pi/2 + (L/2 - alpha1)/R))*cos(gv*(pi/2 + (L/2 - alpha1)/R))/R**3)/((1 + ga*cos(gv*(pi/2 + (L/2 - alpha1)/R))/R)**2 + ga**2*gv**2*sin(gv*(pi/2 + (L/2 - alpha1)/R))**2/R**2)**(3/2) + ((1 + ga*cos(gv*(pi/2 + (L/2 - alpha1)/R))/R)*sin((L/2 - alpha1)/R)/R + ga*gv**2*sin((L/2 - alpha1)/R)*cos(gv*(pi/2 + (L/2 - alpha1)/R))/R**2 + 2*ga*gv*sin(gv*(pi/2 + (L/2 - alpha1)/R))*cos((L/2 - alpha1)/R)/R**2)/sqrt((1 + ga*cos(gv*(pi/2 + (L/2 - alpha1)/R))/R)**2 + ga**2*gv**2*sin(gv*(pi/2 + (L/2 - alpha1)/R))**2/R**2)) + (1 + ga*cos(gv*(pi/2 + (L/2 - alpha1)/R))/R)*sin((L/2 - alpha1)/R) + ga*gv*sin(gv*(pi/2 + (L/2 - alpha1)/R))*cos((L/2 - alpha1)/R)/R)**2)
H2=S(1)
H3=S(1)
H=[H1, H2, H3]
DIM=3
dH = zeros(DIM,DIM)
for i in range(DIM):
dH[i,0]=H[i].diff(alpha1)
dH[i,1]=H[i].diff(alpha2)
dH[i,2]=H[i].diff(alpha3)
trigsimp(H1)
"""
Explanation: Lame params
End of explanation
"""
%aimport geom_util
G_up = getMetricTensorUpLame(H1, H2, H3)
"""
Explanation: Metric tensor
${\displaystyle \hat{G}=\sum_{i,j} g^{ij}\vec{R}_i\vec{R}_j}$
End of explanation
"""
G_down = getMetricTensorDownLame(H1, H2, H3)
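# Consistency check (added; assumes the geom_util helpers return 3x3 diagonal
# sympy matrices of Lame coefficients): the co- and contravariant metric
# components are mutual inverses, so their product should simplify to the
# identity matrix.
simplify(G_up * G_down)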
"""
Explanation: ${\displaystyle \hat{G}=\sum_{i,j} g_{ij}\vec{R}^i\vec{R}^j}$
End of explanation
"""
DIM=3
G_down_diff = MutableDenseNDimArray.zeros(DIM, DIM, DIM)
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
G_down_diff[i,i,k]=2*H[i]*dH[i,k]
GK = getChristoffelSymbols2(G_up, G_down_diff, (alpha1, alpha2, alpha3))
"""
Explanation: Christoffel symbols
End of explanation
"""
def row_index_to_i_j_grad(i_row):
return i_row // 3, i_row % 3
B = zeros(9, 12)
B[0,1] = S(1)
B[1,2] = S(1)
B[2,3] = S(1)
B[3,5] = S(1)
B[4,6] = S(1)
B[5,7] = S(1)
B[6,9] = S(1)
B[7,10] = S(1)
B[8,11] = S(1)
for row_index in range(9):
i,j=row_index_to_i_j_grad(row_index)
B[row_index, 0] = -GK[i,j,0]
B[row_index, 4] = -GK[i,j,1]
B[row_index, 8] = -GK[i,j,2]
"""
Explanation: Gradient of vector
$
\left(
\begin{array}{c}
\nabla_1 u_1 \ \nabla_2 u_1 \ \nabla_3 u_1 \
\nabla_1 u_2 \ \nabla_2 u_2 \ \nabla_3 u_2 \
\nabla_1 u_3 \ \nabla_2 u_3 \ \nabla_3 u_3 \
\end{array}
\right)
=
B \cdot
\left(
\begin{array}{c}
u_1 \
\frac { \partial u_1 } { \partial \alpha_1} \
\frac { \partial u_1 } { \partial \alpha_2} \
\frac { \partial u_1 } { \partial \alpha_3} \
u_2 \
\frac { \partial u_2 } { \partial \alpha_1} \
\frac { \partial u_2 } { \partial \alpha_2} \
\frac { \partial u_2 } { \partial \alpha_3} \
u_3 \
\frac { \partial u_3 } { \partial \alpha_1} \
\frac { \partial u_3 } { \partial \alpha_2} \
\frac { \partial u_3 } { \partial \alpha_3} \
\end{array}
\right)
= B \cdot D \cdot
\left(
\begin{array}{c}
u^1 \
\frac { \partial u^1 } { \partial \alpha_1} \
\frac { \partial u^1 } { \partial \alpha_2} \
\frac { \partial u^1 } { \partial \alpha_3} \
u^2 \
\frac { \partial u^2 } { \partial \alpha_1} \
\frac { \partial u^2 } { \partial \alpha_2} \
\frac { \partial u^2 } { \partial \alpha_3} \
u^3 \
\frac { \partial u^3 } { \partial \alpha_1} \
\frac { \partial u^3 } { \partial \alpha_2} \
\frac { \partial u^3 } { \partial \alpha_3} \
\end{array}
\right)
$
End of explanation
"""
E=zeros(6,9)
E[0,0]=1
E[1,4]=1
E[2,8]=1
E[3,1]=1
E[3,3]=1
E[4,2]=1
E[4,6]=1
E[5,5]=1
E[5,7]=1
E
def E_NonLinear(grad_u):
N = 3
du = zeros(N, N)
# print("===Deformations===")
for i in range(N):
for j in range(N):
index = i*N+j
du[j,i] = grad_u[index]
# print("========")
I = eye(3)
a_values = S(1)/S(2) * du * G_up
E_NL = zeros(6,9)
E_NL[0,0] = a_values[0,0]
E_NL[0,3] = a_values[0,1]
E_NL[0,6] = a_values[0,2]
E_NL[1,1] = a_values[1,0]
E_NL[1,4] = a_values[1,1]
E_NL[1,7] = a_values[1,2]
E_NL[2,2] = a_values[2,0]
E_NL[2,5] = a_values[2,1]
E_NL[2,8] = a_values[2,2]
E_NL[3,1] = 2*a_values[0,0]
E_NL[3,4] = 2*a_values[0,1]
E_NL[3,7] = 2*a_values[0,2]
E_NL[4,0] = 2*a_values[2,0]
E_NL[4,3] = 2*a_values[2,1]
E_NL[4,6] = 2*a_values[2,2]
E_NL[5,2] = 2*a_values[1,0]
E_NL[5,5] = 2*a_values[1,1]
E_NL[5,8] = 2*a_values[1,2]
return E_NL
%aimport geom_util
u=getUHat3DPlane(alpha1, alpha2, alpha3)
# u=getUHatU3Main(alpha1, alpha2, alpha3)
gradu=B*u
E_NL = E_NonLinear(gradu)*B
"""
Explanation: Strain tensor
$
\left(
\begin{array}{c}
\varepsilon_{11} \
\varepsilon_{22} \
\varepsilon_{33} \
2\varepsilon_{12} \
2\varepsilon_{13} \
2\varepsilon_{23} \
\end{array}
\right)
=
\left(E + E_{NL} \left( \nabla \vec{u} \right) \right) \cdot
\left(
\begin{array}{c}
\nabla_1 u_1 \ \nabla_2 u_1 \ \nabla_3 u_1 \
\nabla_1 u_2 \ \nabla_2 u_2 \ \nabla_3 u_2 \
\nabla_1 u_3 \ \nabla_2 u_3 \ \nabla_3 u_3 \
\end{array}
\right)$
End of explanation
"""
P=zeros(12,12)
P[0,0]=H[0]
P[1,0]=dH[0,0]
P[1,1]=H[0]
P[2,0]=dH[0,1]
P[2,2]=H[0]
P[3,0]=dH[0,2]
P[3,3]=H[0]
P[4,4]=H[1]
P[5,4]=dH[1,0]
P[5,5]=H[1]
P[6,4]=dH[1,1]
P[6,6]=H[1]
P[7,4]=dH[1,2]
P[7,7]=H[1]
P[8,8]=H[2]
P[9,8]=dH[2,0]
P[9,9]=H[2]
P[10,8]=dH[2,1]
P[10,10]=H[2]
P[11,8]=dH[2,2]
P[11,11]=H[2]
P=simplify(P)
P
B_P = zeros(9,9)
for i in range(3):
for j in range(3):
row_index = i*3+j
B_P[row_index, row_index] = 1/(H[i]*H[j])
Grad_U_P = simplify(B_P*B*P)
Grad_U_P
StrainL=simplify(E*Grad_U_P)
StrainL
%aimport geom_util
u=getUHatU3Main(alpha1, alpha2, alpha3)
gradup=Grad_U_P*u
E_NLp = E_NonLinear(gradup)*Grad_U_P
simplify(E_NLp)
"""
Explanation: Physical coordinates
$u_i=u_{[i]} H_i$
End of explanation
"""
T=zeros(12,6)
T[0,0]=1
T[0,2]=alpha3
T[1,1]=1
T[1,3]=alpha3
T[3,2]=1
T[8,4]=1
T[9,5]=1
T
D_p_T = StrainL*T
simplify(D_p_T)
u = Function("u")
t = Function("theta")
w = Function("w")
u1=u(alpha1)+alpha3*t(alpha1)
u3=w(alpha1)
gu = zeros(12,1)
gu[0] = u1
gu[1] = u1.diff(alpha1)
gu[3] = u1.diff(alpha3)
gu[8] = u3
gu[9] = u3.diff(alpha1)
gradup=Grad_U_P*gu
# o20=(K*u(alpha1)-w(alpha1).diff(alpha1)+t(alpha1))/2
# o21=K*t(alpha1)
# O=1/2*o20*o20+alpha3*o20*o21-alpha3*K/2*o20*o20
# O=expand(O)
# O=collect(O,alpha3)
# simplify(O)
StrainNL = E_NonLinear(gradup)*gradup
simplify(StrainNL)
"""
Explanation: Tymoshenko theory
$u_1 \left( \alpha_1, \alpha_2, \alpha_3 \right)=u\left( \alpha_1 \right)+\alpha_3\gamma \left( \alpha_1 \right) $
$u_2 \left( \alpha_1, \alpha_2, \alpha_3 \right)=0 $
$u_3 \left( \alpha_1, \alpha_2, \alpha_3 \right)=w\left( \alpha_1 \right) $
$ \left(
\begin{array}{c}
u_1 \
\frac { \partial u_1 } { \partial \alpha_1} \
\frac { \partial u_1 } { \partial \alpha_2} \
\frac { \partial u_1 } { \partial \alpha_3} \
u_2 \
\frac { \partial u_2 } { \partial \alpha_1} \
\frac { \partial u_2 } { \partial \alpha_2} \
\frac { \partial u_2 } { \partial \alpha_3} \
u_3 \
\frac { \partial u_3 } { \partial \alpha_1} \
\frac { \partial u_3 } { \partial \alpha_2} \
\frac { \partial u_3 } { \partial \alpha_3} \
\end{array}
\right) = T \cdot
\left(
\begin{array}{c}
u \
\frac { \partial u } { \partial \alpha_1} \
\gamma \
\frac { \partial \gamma } { \partial \alpha_1} \
w \
\frac { \partial w } { \partial \alpha_1} \
\end{array}
\right) $
End of explanation
"""
L=zeros(12,12)
h=Symbol('h')
p0=1/2-alpha3/h
p1=1/2+alpha3/h
p2=1-(2*alpha3/h)**2
L[0,0]=p0
L[0,2]=p1
L[0,4]=p2
L[1,1]=p0
L[1,3]=p1
L[1,5]=p2
L[3,0]=p0.diff(alpha3)
L[3,2]=p1.diff(alpha3)
L[3,4]=p2.diff(alpha3)
L[8,6]=p0
L[8,8]=p1
L[8,10]=p2
L[9,7]=p0
L[9,9]=p1
L[9,11]=p2
L[11,6]=p0.diff(alpha3)
L[11,8]=p1.diff(alpha3)
L[11,10]=p2.diff(alpha3)
L
D_p_L = StrainL*L
simplify(D_p_L)
h = 0.5
exp=(0.5-alpha3/h)*(1-(2*alpha3/h)**2)#/(1+alpha3*0.8)
p02=integrate(exp, (alpha3, -h/2, h/2))
integral = expand(simplify(p02))
integral
"""
Explanation: Square theory
$u^1 \left( \alpha_1, \alpha_2, \alpha_3 \right)=u_{10}\left( \alpha_1 \right)p_0\left( \alpha_3 \right)+u_{11}\left( \alpha_1 \right)p_1\left( \alpha_3 \right)+u_{12}\left( \alpha_1 \right)p_2\left( \alpha_3 \right) $
$u^2 \left( \alpha_1, \alpha_2, \alpha_3 \right)=0 $
$u^3 \left( \alpha_1, \alpha_2, \alpha_3 \right)=u_{30}\left( \alpha_1 \right)p_0\left( \alpha_3 \right)+u_{31}\left( \alpha_1 \right)p_1\left( \alpha_3 \right)+u_{32}\left( \alpha_1 \right)p_2\left( \alpha_3 \right) $
$ \left(
\begin{array}{c}
u^1 \
\frac { \partial u^1 } { \partial \alpha_1} \
\frac { \partial u^1 } { \partial \alpha_2} \
\frac { \partial u^1 } { \partial \alpha_3} \
u^2 \
\frac { \partial u^2 } { \partial \alpha_1} \
\frac { \partial u^2 } { \partial \alpha_2} \
\frac { \partial u^2 } { \partial \alpha_3} \
u^3 \
\frac { \partial u^3 } { \partial \alpha_1} \
\frac { \partial u^3 } { \partial \alpha_2} \
\frac { \partial u^3 } { \partial \alpha_3} \
\end{array}
\right) = L \cdot
\left(
\begin{array}{c}
u_{10} \
\frac { \partial u_{10} } { \partial \alpha_1} \
u_{11} \
\frac { \partial u_{11} } { \partial \alpha_1} \
u_{12} \
\frac { \partial u_{12} } { \partial \alpha_1} \
u_{30} \
\frac { \partial u_{30} } { \partial \alpha_1} \
u_{31} \
\frac { \partial u_{31} } { \partial \alpha_1} \
u_{32} \
\frac { \partial u_{32} } { \partial \alpha_1} \
\end{array}
\right) $
End of explanation
"""
rho=Symbol('rho')
B_h=zeros(3,12)
B_h[0,0]=1
B_h[1,4]=1
B_h[2,8]=1
M=simplify(rho*P.T*B_h.T*G_up*B_h*P)
M
M_p = L.T*M*L*(1+alpha3/R)
mass_matr = simplify(integrate(M_p, (alpha3, -h/2, h/2)))
mass_matr
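# Consistency check (added, not in the original notebook): the mass matrix is
# built as a congruence transform of a symmetric matrix, so the difference
# below should simplify to the zero matrix.
simplify(mass_matr - mass_matr.T)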
"""
Explanation: Mass matrix
End of explanation
"""
|
computational-class/cjc | code/pytorch.ipynb | mit | import torch
"""
Explanation: Install
conda install pytorch torchvision -c soumith
Import
End of explanation
"""
x = torch.Tensor(5, 3)
print(x)
"""
Explanation: Tutorial
http://pytorch.org/tutorials/beginner/blitz/tensor_tutorial.html
http://pytorch.org/tutorials/
End of explanation
"""
import torch
from torch.autograd import Variable
# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
N, D_in, H, D_out = 64, 1000, 100, 10
# Create random Tensors to hold inputs and outputs, and wrap them in Variables.
x = Variable(torch.randn(N, D_in))
y = Variable(torch.randn(N, D_out), requires_grad=False)
# Use the nn package to define our model as a sequence of layers. nn.Sequential
# is a Module which contains other Modules, and applies them in sequence to
# produce its output. Each Linear Module computes output from input using a
# linear function, and holds internal Variables for its weight and bias.
model = torch.nn.Sequential(
torch.nn.Linear(D_in, H),
torch.nn.ReLU(),
torch.nn.Linear(H, D_out),
)
# The nn package also contains definitions of popular loss functions; in this
# case we will use Mean Squared Error (MSE) as our loss function.
loss_fn = torch.nn.MSELoss(size_average=False)
learning_rate = 1e-4
for t in range(500):
# Forward pass: compute predicted y by passing x to the model. Module objects
# override the __call__ operator so you can call them like functions. When
# doing so you pass a Variable of input data to the Module and it produces
# a Variable of output data.
y_pred = model(x)
# Compute and print loss. We pass Variables containing the predicted and true
# values of y, and the loss function returns a Variable containing the
# loss.
loss = loss_fn(y_pred, y)
if t%50 == 0:
print(t, loss.data[0])
# Zero the gradients before running the backward pass.
model.zero_grad()
# Backward pass: compute gradient of the loss with respect to all the learnable
# parameters of the model. Internally, the parameters of each Module are stored
# in Variables with requires_grad=True, so this call will compute gradients for
# all learnable parameters in the model.
loss.backward()
# Update the weights using gradient descent. Each parameter is a Variable, so
# we can access its data and gradients like we did before.
for param in model.parameters():
param.data -= learning_rate * param.grad.data
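# A common variant (added for illustration, not part of the original tutorial):
# let torch.optim handle the update step instead of the manual in-place
# parameter update above. Shown with the same old-style API used in the rest
# of this notebook.
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
for t in range(500):
    y_pred = model(x)
    loss = loss_fn(y_pred, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()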
"""
Explanation: nn module
http://pytorch.org/tutorials/beginner/pytorch_with_examples.html#nn-module
End of explanation
"""
|