| markdown | code | path | repo_name | license |
|---|---|---|---|---|
| stringlengths 0–37k | stringlengths 1–33.3k | stringlengths 8–215 | stringlengths 6–77 | stringclasses, 15 values |
(1b) Pluralize and test
Let's use a map() transformation to add the letter 's' to each string in the base RDD we just created. We'll define a Python function that returns the word with an 's' at the end of the word. Please replace <FILL IN> with your solution. If you have trouble, the next cell has the solution. After you have defined makePlural you can run the third cell which contains a test. If your implementation is correct it will print 1 test passed.
This is the general form that exercises will take, except that no example solution will be provided. Exercises will include an explanation of what is expected, followed by code cells where one cell will have one or more <FILL IN> sections. The cell that needs to be modified will have # TODO: Replace <FILL IN> with appropriate code on its first line. Once the <FILL IN> sections are updated and the code is run, the test cell can then be run to verify the correctness of your solution. The last code cell before the next markdown section will contain the tests. | # TODO: Replace <FILL IN> with appropriate code
def makePlural(word):
"""Adds an 's' to `word`.
Note:
This is a simple function that only adds an 's'. No attempt is made to follow proper
pluralization rules.
Args:
word (str): A string.
Returns:
str: A string with 's' added to it.
"""
return word + 's'
print makePlural('cat')
# One way of completing the function
def makePlural(word):
return word + 's'
print makePlural('cat')
# Load in the testing code and check to see if your answer is correct
# If incorrect it will report back '1 test failed' for each failed test
# Make sure to rerun any cell you change before trying the test again
from test_helper import Test
# TEST Pluralize and test (1b)
Test.assertEquals(makePlural('rat'), 'rats', 'incorrect result: makePlural does not add an s') | Week 2 - Introduction to Apache Spark/lab1_word_count_student.ipynb | dipanjanS/BerkeleyX-CS100.1x-Big-Data-with-Apache-Spark | mit |
(1c) Apply makePlural to the base RDD
Now pass each item in the base RDD into a map() transformation that applies the makePlural() function to each element. And then call the collect() action to see the transformed RDD. | # TODO: Replace <FILL IN> with appropriate code
pluralRDD = wordsRDD.map(makePlural)
print pluralRDD.collect()
# TEST Apply makePlural to the base RDD(1c)
Test.assertEquals(pluralRDD.collect(), ['cats', 'elephants', 'rats', 'rats', 'cats'],
'incorrect values for pluralRDD') | Week 2 - Introduction to Apache Spark/lab1_word_count_student.ipynb | dipanjanS/BerkeleyX-CS100.1x-Big-Data-with-Apache-Spark | mit |
(1d) Pass a lambda function to map
Let's create the same RDD using a lambda function. | # TODO: Replace <FILL IN> with appropriate code
pluralLambdaRDD = wordsRDD.map(lambda word: word + 's')
print pluralLambdaRDD.collect()
# TEST Pass a lambda function to map (1d)
Test.assertEquals(pluralLambdaRDD.collect(), ['cats', 'elephants', 'rats', 'rats', 'cats'],
'incorrect values for pluralLambdaRDD (1d)') | Week 2 - Introduction to Apache Spark/lab1_word_count_student.ipynb | dipanjanS/BerkeleyX-CS100.1x-Big-Data-with-Apache-Spark | mit |
(1e) Length of each word
Now use map() and a lambda function to return the number of characters in each word. We'll collect this result directly into a variable. | # TODO: Replace <FILL IN> with appropriate code
pluralLengths = (pluralRDD
.map(lambda word: len(word))
.collect())
print pluralLengths
# TEST Length of each word (1e)
Test.assertEquals(pluralLengths, [4, 9, 4, 4, 4],
'incorrect values for pluralLengths') | Week 2 - Introduction to Apache Spark/lab1_word_count_student.ipynb | dipanjanS/BerkeleyX-CS100.1x-Big-Data-with-Apache-Spark | mit |
(1f) Pair RDDs
The next step in writing our word counting program is to create a new type of RDD, called a pair RDD. A pair RDD is an RDD where each element is a pair tuple (k, v) where k is the key and v is the value. In this example, we will create a pair consisting of ('<word>', 1) for each word element in the RDD.
We can create the pair RDD using the map() transformation with a lambda() function to create a new RDD. | # TODO: Replace <FILL IN> with appropriate code
wordPairs = wordsRDD.map(lambda word: (word, 1))
print wordPairs.collect()
# TEST Pair RDDs (1f)
Test.assertEquals(wordPairs.collect(),
[('cat', 1), ('elephant', 1), ('rat', 1), ('rat', 1), ('cat', 1)],
'incorrect value for wordPairs') | Week 2 - Introduction to Apache Spark/lab1_word_count_student.ipynb | dipanjanS/BerkeleyX-CS100.1x-Big-Data-with-Apache-Spark | mit |
Part 2: Counting with pair RDDs
Now, let's count the number of times a particular word appears in the RDD. There are multiple ways to perform the counting, but some are much less efficient than others.
A naive approach would be to collect() all of the elements and count them in the driver program. While this approach could work for small datasets, we want an approach that will work for any size dataset including terabyte- or petabyte-sized datasets. In addition, performing all of the work in the driver program is slower than performing it in parallel in the workers. For these reasons, we will use data parallel operations.
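For contrast, here is a minimal sketch of that naive driver-side approach (assuming the wordsRDD defined in the earlier cells); it only works here because this toy RDD is tiny.
# Naive approach: collect everything to the driver and count with plain Python.
# Shown only for contrast with the data-parallel approaches that follow.
naiveCounts = {}
for word in wordsRDD.collect():
    naiveCounts[word] = naiveCounts.get(word, 0) + 1
print naiveCounts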
(2a) groupByKey() approach
An approach you might first consider (we'll see shortly that there are better ways) is based on using the groupByKey() transformation. As the name implies, the groupByKey() transformation groups all the elements of the RDD with the same key into a single list in one of the partitions. There are two problems with using groupByKey():
The operation requires a lot of data movement to move all the values into the appropriate partitions.
The lists can be very large. Consider a word count of English Wikipedia: the lists for common words (e.g., the, a, etc.) would be huge and could exhaust the available memory in a worker.
Use groupByKey() to generate a pair RDD of type ('word', iterator). | # TODO: Replace <FILL IN> with appropriate code
# Note that groupByKey requires no parameters
wordsGrouped = wordPairs.groupByKey()
for key, value in wordsGrouped.collect():
print '{0}: {1}'.format(key, list(value))
# TEST groupByKey() approach (2a)
Test.assertEquals(sorted(wordsGrouped.mapValues(lambda x: list(x)).collect()),
[('cat', [1, 1]), ('elephant', [1]), ('rat', [1, 1])],
'incorrect value for wordsGrouped') | Week 2 - Introduction to Apache Spark/lab1_word_count_student.ipynb | dipanjanS/BerkeleyX-CS100.1x-Big-Data-with-Apache-Spark | mit |
(2b) Use groupByKey() to obtain the counts
Using the groupByKey() transformation creates an RDD containing 3 elements, each of which is a pair of a word and a Python iterator.
Now sum the iterator using a map() transformation. The result should be a pair RDD consisting of (word, count) pairs. | # TODO: Replace <FILL IN> with appropriate code
wordCountsGrouped = wordsGrouped.map(lambda (k,v): (k, sum(v)))
print wordCountsGrouped.collect()
# TEST Use groupByKey() to obtain the counts (2b)
Test.assertEquals(sorted(wordCountsGrouped.collect()),
[('cat', 2), ('elephant', 1), ('rat', 2)],
'incorrect value for wordCountsGrouped') | Week 2 - Introduction to Apache Spark/lab1_word_count_student.ipynb | dipanjanS/BerkeleyX-CS100.1x-Big-Data-with-Apache-Spark | mit |
(2c) Counting using reduceByKey
A better approach is to start from the pair RDD and then use the reduceByKey() transformation to create a new pair RDD. The reduceByKey() transformation gathers together pairs that have the same key and applies the function provided to two values at a time, iteratively reducing all of the values to a single value. reduceByKey() operates by applying the function first within each partition on a per-key basis and then across the partitions, allowing it to scale efficiently to large datasets. | # TODO: Replace <FILL IN> with appropriate code
# Note that reduceByKey takes in a function that accepts two values and returns a single value
wordCounts = wordPairs.reduceByKey(lambda a,b: a+b)
print wordCounts.collect()
# TEST Counting using reduceByKey (2c)
Test.assertEquals(sorted(wordCounts.collect()), [('cat', 2), ('elephant', 1), ('rat', 2)],
'incorrect value for wordCounts') | Week 2 - Introduction to Apache Spark/lab1_word_count_student.ipynb | dipanjanS/BerkeleyX-CS100.1x-Big-Data-with-Apache-Spark | mit |
(2d) All together
The expert version of the code performs the map() to pair RDD, reduceByKey() transformation, and collect in one statement. | # TODO: Replace <FILL IN> with appropriate code
wordCountsCollected = (wordsRDD
.map(lambda word: (word, 1))
.reduceByKey(lambda a,b: a+b)
.collect())
print wordCountsCollected
# TEST All together (2d)
Test.assertEquals(sorted(wordCountsCollected), [('cat', 2), ('elephant', 1), ('rat', 2)],
'incorrect value for wordCountsCollected') | Week 2 - Introduction to Apache Spark/lab1_word_count_student.ipynb | dipanjanS/BerkeleyX-CS100.1x-Big-Data-with-Apache-Spark | mit |
Part 3: Finding unique words and a mean value
(3a) Unique words
Calculate the number of unique words in wordsRDD. You can use other RDDs that you have already created to make this easier. | # TODO: Replace <FILL IN> with appropriate code
uniqueWords = wordsRDD.map(lambda word: (word, 1)).distinct().count()
print uniqueWords
# TEST Unique words (3a)
Test.assertEquals(uniqueWords, 3, 'incorrect count of uniqueWords') | Week 2 - Introduction to Apache Spark/lab1_word_count_student.ipynb | dipanjanS/BerkeleyX-CS100.1x-Big-Data-with-Apache-Spark | mit |
(3b) Mean using reduce
Find the mean number of words per unique word in wordCounts.
Use a reduce() action to sum the counts in wordCounts and then divide by the number of unique words. First map() the pair RDD wordCounts, which consists of (key, value) pairs, to an RDD of values. | # TODO: Replace <FILL IN> with appropriate code
from operator import add
totalCount = (wordCounts
.map(lambda (a,b): b)
.reduce(add))
average = totalCount / float(wordCounts.distinct().count())
print totalCount
print round(average, 2)
# TEST Mean using reduce (3b)
Test.assertEquals(round(average, 2), 1.67, 'incorrect value of average') | Week 2 - Introduction to Apache Spark/lab1_word_count_student.ipynb | dipanjanS/BerkeleyX-CS100.1x-Big-Data-with-Apache-Spark | mit |
Part 4: Apply word count to a file
In this section we will finish developing our word count application. We'll have to build the wordCount function, deal with real world problems like capitalization and punctuation, load in our data source, and compute the word count on the new data.
(4a) wordCount function
First, define a function for word counting. You should reuse the techniques that have been covered in earlier parts of this lab. This function should take in an RDD that is a list of words like wordsRDD and return a pair RDD that has all of the words and their associated counts. | # TODO: Replace <FILL IN> with appropriate code
def wordCount(wordListRDD):
"""Creates a pair RDD with word counts from an RDD of words.
Args:
wordListRDD (RDD of str): An RDD consisting of words.
Returns:
RDD of (str, int): An RDD consisting of (word, count) tuples.
"""
return (wordListRDD
.map(lambda a : (a,1))
.reduceByKey(lambda a,b: a+b))
print wordCount(wordsRDD).collect()
# TEST wordCount function (4a)
Test.assertEquals(sorted(wordCount(wordsRDD).collect()),
[('cat', 2), ('elephant', 1), ('rat', 2)],
'incorrect definition for wordCount function') | Week 2 - Introduction to Apache Spark/lab1_word_count_student.ipynb | dipanjanS/BerkeleyX-CS100.1x-Big-Data-with-Apache-Spark | mit |
(4b) Capitalization and punctuation
Real world files are more complicated than the data we have been using in this lab. Some of the issues we have to address are:
Words should be counted independent of their capitalization (e.g., Spark and spark should be counted as the same word).
All punctuation should be removed.
Any leading or trailing spaces on a line should be removed.
Define the function removePunctuation that converts all text to lower case, removes any punctuation, and removes leading and trailing spaces. Use the Python re module to remove any text that is not a letter, number, or space. Reading help(re.sub) might be useful. | # TODO: Replace <FILL IN> with appropriate code
import re
def removePunctuation(text):
"""Removes punctuation, changes to lower case, and strips leading and trailing spaces.
Note:
Only spaces, letters, and numbers should be retained. Other characters should be
eliminated (e.g. it's becomes its). Leading and trailing spaces should be removed after
punctuation is removed.
Args:
text (str): A string.
Returns:
str: The cleaned up string.
"""
return re.sub("[^a-zA-Z0-9 ]", "", text.strip(" ").lower())
print removePunctuation('Hi, you!')
print removePunctuation(' No under_score!')
# TEST Capitalization and punctuation (4b)
Test.assertEquals(removePunctuation(" The Elephant's 4 cats. "),
'the elephants 4 cats',
'incorrect definition for removePunctuation function') | Week 2 - Introduction to Apache Spark/lab1_word_count_student.ipynb | dipanjanS/BerkeleyX-CS100.1x-Big-Data-with-Apache-Spark | mit |
(4c) Load a text file
For the next part of this lab, we will use the Complete Works of William Shakespeare from Project Gutenberg. To convert a text file into an RDD, we use the SparkContext.textFile() method. We also apply the recently defined removePunctuation() function using a map() transformation to strip out the punctuation and change all text to lowercase. Since the file is large we use take(15), so that we only print 15 lines. | # Just run this code
import os.path
baseDir = os.path.join('data')
inputPath = os.path.join('cs100', 'lab1', 'shakespeare.txt')
fileName = os.path.join(baseDir, inputPath)
shakespeareRDD = (sc
.textFile(fileName, 8)
.map(removePunctuation))
print '\n'.join(shakespeareRDD
.zipWithIndex() # to (line, lineNum)
.map(lambda (l, num): '{0}: {1}'.format(num, l)) # to 'lineNum: line'
.take(15)) | Week 2 - Introduction to Apache Spark/lab1_word_count_student.ipynb | dipanjanS/BerkeleyX-CS100.1x-Big-Data-with-Apache-Spark | mit |
(4d) Words from lines
Before we can use the wordCount() function, we have to address two issues with the format of the RDD:
The first issue is that we need to split each line by its spaces.
The second issue is we need to filter out empty lines.
Apply a transformation that will split each element of the RDD by its spaces. For each element of the RDD, you should apply Python's string split() function. You might think that a map() transformation is the way to do this, but think about what the result of the split() function will be. | # TODO: Replace <FILL IN> with appropriate code
shakespeareWordsRDD = shakespeareRDD.flatMap(lambda a: a.split(" "))
shakespeareWordCount = shakespeareWordsRDD.count()
print shakespeareWordsRDD.top(5)
print shakespeareWordCount
# TEST Words from lines (4d)
# This test allows for leading spaces to be removed either before or after
# punctuation is removed.
Test.assertTrue(shakespeareWordCount == 927631 or shakespeareWordCount == 928908,
'incorrect value for shakespeareWordCount')
Test.assertEquals(shakespeareWordsRDD.top(5),
[u'zwaggerd', u'zounds', u'zounds', u'zounds', u'zounds'],
'incorrect value for shakespeareWordsRDD') | Week 2 - Introduction to Apache Spark/lab1_word_count_student.ipynb | dipanjanS/BerkeleyX-CS100.1x-Big-Data-with-Apache-Spark | mit |
(4e) Remove empty elements
The next step is to filter out the empty elements. Remove all entries where the word is ''. | # TODO: Replace <FILL IN> with appropriate code
shakeWordsRDD = shakespeareWordsRDD.filter(lambda word: len(word) > 0)
shakeWordCount = shakeWordsRDD.count()
print shakeWordCount
# TEST Remove empty elements (4e)
Test.assertEquals(shakeWordCount, 882996, 'incorrect value for shakeWordCount') | Week 2 - Introduction to Apache Spark/lab1_word_count_student.ipynb | dipanjanS/BerkeleyX-CS100.1x-Big-Data-with-Apache-Spark | mit |
(4f) Count the words
We now have an RDD that is only words. Next, let's apply the wordCount() function to produce a list of word counts. We can view the top 15 words by using the takeOrdered() action; however, since the elements of the RDD are pairs, we need a custom sort function that sorts using the value part of the pair.
You'll notice that many of the words are common English words. These are called stopwords. In a later lab, we will see how to eliminate them from the results.
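As a rough sketch (the stopword list below is a made-up illustration, not the one used in the later lab), such words could be removed with a filter() transformation before counting:
# Hypothetical stopword list for illustration only
stopwords = set(['the', 'and', 'i', 'to', 'of', 'a', 'you', 'my', 'in', 'that'])
shakeNonStopWordsRDD = shakeWordsRDD.filter(lambda word: word not in stopwords)
print wordCount(shakeNonStopWordsRDD).takeOrdered(15, lambda (w, c): -c)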
Use the wordCount() function and takeOrdered() to obtain the fifteen most common words and their counts. | # TODO: Replace <FILL IN> with appropriate code
top15WordsAndCounts = wordCount(shakeWordsRDD).takeOrdered(15, lambda (a,b): -b)
print '\n'.join(map(lambda (w, c): '{0}: {1}'.format(w, c), top15WordsAndCounts))
# TEST Count the words (4f)
Test.assertEquals(top15WordsAndCounts,
[(u'the', 27361), (u'and', 26028), (u'i', 20681), (u'to', 19150), (u'of', 17463),
(u'a', 14593), (u'you', 13615), (u'my', 12481), (u'in', 10956), (u'that', 10890),
(u'is', 9134), (u'not', 8497), (u'with', 7771), (u'me', 7769), (u'it', 7678)],
'incorrect value for top15WordsAndCounts') | Week 2 - Introduction to Apache Spark/lab1_word_count_student.ipynb | dipanjanS/BerkeleyX-CS100.1x-Big-Data-with-Apache-Spark | mit |
Let's start by downloading the data: | # Note: Linux bash commands start with a "!" inside those "ipython notebook" cells
DATA_PATH = "data/"
!pwd && ls
os.chdir(DATA_PATH)
!pwd && ls
!python download_dataset.py
!pwd && ls
os.chdir("..")
!pwd && ls
DATASET_PATH = DATA_PATH + "UCI HAR Dataset/"
print("\n" + "Dataset is now located at: " + DATASET_PATH)
| LSTM.ipynb | guillaume-chevalier/LSTM-Human-Activity-Recognition | mit |
Preparing dataset: | TRAIN = "train/"
TEST = "test/"
# Load "X" (the neural network's training and testing inputs)
def load_X(X_signals_paths):
X_signals = []
for signal_type_path in X_signals_paths:
file = open(signal_type_path, 'r')
# Read dataset from disk, dealing with text files' syntax
X_signals.append(
[np.array(serie, dtype=np.float32) for serie in [
row.replace(' ', ' ').strip().split(' ') for row in file
]]
)
file.close()
return np.transpose(np.array(X_signals), (1, 2, 0))
X_train_signals_paths = [
DATASET_PATH + TRAIN + "Inertial Signals/" + signal + "train.txt" for signal in INPUT_SIGNAL_TYPES
]
X_test_signals_paths = [
DATASET_PATH + TEST + "Inertial Signals/" + signal + "test.txt" for signal in INPUT_SIGNAL_TYPES
]
X_train = load_X(X_train_signals_paths)
X_test = load_X(X_test_signals_paths)
# Load "y" (the neural network's training and testing outputs)
def load_y(y_path):
file = open(y_path, 'r')
# Read dataset from disk, dealing with text file's syntax
y_ = np.array(
[elem for elem in [
row.replace(' ', ' ').strip().split(' ') for row in file
]],
dtype=np.int32
)
file.close()
# Substract 1 to each output class for friendly 0-based indexing
return y_ - 1
y_train_path = DATASET_PATH + TRAIN + "y_train.txt"
y_test_path = DATASET_PATH + TEST + "y_test.txt"
y_train = load_y(y_train_path)
y_test = load_y(y_test_path)
| LSTM.ipynb | guillaume-chevalier/LSTM-Human-Activity-Recognition | mit |
Additional Parameters:
Here are some core parameter definitions for the training.
For example, the whole neural network's structure could be summarised by enumerating those parameters and the fact that two LSTMs are stacked one on top of the other (output-to-input) as hidden layers through the time steps. | # Input Data
training_data_count = len(X_train)  # 7352 training series (with 50% overlap between each series)
test_data_count = len(X_test) # 2947 testing series
n_steps = len(X_train[0]) # 128 timesteps per series
n_input = len(X_train[0][0]) # 9 input parameters per timestep
# LSTM Neural Network's internal structure
n_hidden = 32 # Hidden layer num of features
n_classes = 6 # Total classes (there are 6 different activities)
# Training
learning_rate = 0.0025
lambda_loss_amount = 0.0015
training_iters = training_data_count * 300 # Loop 300 times on the dataset
batch_size = 1500
display_iter = 30000 # To show test set accuracy during training
# Some debugging info
print("Some useful info to get an insight on dataset's shape and normalisation:")
print("(X shape, y shape, every X's mean, every X's standard deviation)")
print(X_test.shape, y_test.shape, np.mean(X_test), np.std(X_test))
print("The dataset is therefore properly normalised, as expected, but not yet one-hot encoded.")
| LSTM.ipynb | guillaume-chevalier/LSTM-Human-Activity-Recognition | mit |
Utility functions for training: | def LSTM_RNN(_X, _weights, _biases):
# Function returns a tensorflow LSTM (RNN) artificial neural network from given parameters.
# Moreover, two LSTM cells are stacked which adds deepness to the neural network.
# Note: some code in this notebook is inspired by a slightly different
# RNN architecture used on another dataset; some of the credit goes to
# "aymericdamien" under the MIT license.
# (NOTE: This step could be greatly optimised by shaping the dataset once.)
# input shape: (batch_size, n_steps, n_input)
_X = tf.transpose(_X, [1, 0, 2]) # permute n_steps and batch_size
# Reshape to prepare input to hidden activation
_X = tf.reshape(_X, [-1, n_input])
# new shape: (n_steps*batch_size, n_input)
# ReLU activation, thanks to Yu Zhao for adding this improvement here:
_X = tf.nn.relu(tf.matmul(_X, _weights['hidden']) + _biases['hidden'])
# Split data because rnn cell needs a list of inputs for the RNN inner loop
_X = tf.split(_X, n_steps, 0)
# new shape: n_steps * (batch_size, n_hidden)
# Define two stacked LSTM cells (two recurrent layers deep) with tensorflow
lstm_cell_1 = tf.contrib.rnn.BasicLSTMCell(n_hidden, forget_bias=1.0, state_is_tuple=True)
lstm_cell_2 = tf.contrib.rnn.BasicLSTMCell(n_hidden, forget_bias=1.0, state_is_tuple=True)
lstm_cells = tf.contrib.rnn.MultiRNNCell([lstm_cell_1, lstm_cell_2], state_is_tuple=True)
# Get LSTM cell output
outputs, states = tf.contrib.rnn.static_rnn(lstm_cells, _X, dtype=tf.float32)
# Get last time step's output feature for a "many-to-one" style classifier,
# as in the image describing RNNs at the top of this page
lstm_last_output = outputs[-1]
# Linear activation
return tf.matmul(lstm_last_output, _weights['out']) + _biases['out']
def extract_batch_size(_train, step, batch_size):
# Function to fetch a "batch_size" amount of data from "(X|y)_train" data.
shape = list(_train.shape)
shape[0] = batch_size
batch_s = np.empty(shape)
for i in range(batch_size):
# Loop index
index = ((step-1)*batch_size + i) % len(_train)
batch_s[i] = _train[index]
return batch_s
def one_hot(y_, n_classes=n_classes):
# Function to encode neural one-hot output labels from number indexes
# e.g.:
# one_hot(y_=[[5], [0], [3]], n_classes=6):
# return [[0, 0, 0, 0, 0, 1], [1, 0, 0, 0, 0, 0], [0, 0, 0, 1, 0, 0]]
y_ = y_.reshape(len(y_))
return np.eye(n_classes)[np.array(y_, dtype=np.int32)] # Returns FLOATS
| LSTM.ipynb | guillaume-chevalier/LSTM-Human-Activity-Recognition | mit |
Let's get serious and build the neural network: |
# Graph input/output
x = tf.placeholder(tf.float32, [None, n_steps, n_input])
y = tf.placeholder(tf.float32, [None, n_classes])
# Graph weights
weights = {
'hidden': tf.Variable(tf.random_normal([n_input, n_hidden])), # Hidden layer weights
'out': tf.Variable(tf.random_normal([n_hidden, n_classes], mean=1.0))
}
biases = {
'hidden': tf.Variable(tf.random_normal([n_hidden])),
'out': tf.Variable(tf.random_normal([n_classes]))
}
pred = LSTM_RNN(x, weights, biases)
# Loss, optimizer and evaluation
l2 = lambda_loss_amount * sum(
tf.nn.l2_loss(tf_var) for tf_var in tf.trainable_variables()
) # L2 loss prevents this overkill neural network from overfitting the data
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=pred)) + l2 # Softmax loss
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost) # Adam Optimizer
correct_pred = tf.equal(tf.argmax(pred,1), tf.argmax(y,1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
| LSTM.ipynb | guillaume-chevalier/LSTM-Human-Activity-Recognition | mit |
Hooray, now train the neural network: | # To keep track of training's performance
test_losses = []
test_accuracies = []
train_losses = []
train_accuracies = []
# Launch the graph
sess = tf.InteractiveSession(config=tf.ConfigProto(log_device_placement=True))
init = tf.global_variables_initializer()
sess.run(init)
# Perform Training steps with "batch_size" amount of example data at each loop
step = 1
while step * batch_size <= training_iters:
batch_xs = extract_batch_size(X_train, step, batch_size)
batch_ys = one_hot(extract_batch_size(y_train, step, batch_size))
# Fit training using batch data
_, loss, acc = sess.run(
[optimizer, cost, accuracy],
feed_dict={
x: batch_xs,
y: batch_ys
}
)
train_losses.append(loss)
train_accuracies.append(acc)
# Evaluate network only at some steps for faster training:
if (step*batch_size % display_iter == 0) or (step == 1) or (step * batch_size > training_iters):
# To not spam console, show training accuracy/loss in this "if"
print("Training iter #" + str(step*batch_size) + \
": Batch Loss = " + "{:.6f}".format(loss) + \
", Accuracy = {}".format(acc))
# Evaluation on the test set (no learning made here - just evaluation for diagnosis)
loss, acc = sess.run(
[cost, accuracy],
feed_dict={
x: X_test,
y: one_hot(y_test)
}
)
test_losses.append(loss)
test_accuracies.append(acc)
print("PERFORMANCE ON TEST SET: " + \
"Batch Loss = {}".format(loss) + \
", Accuracy = {}".format(acc))
step += 1
print("Optimization Finished!")
# Accuracy for test data
one_hot_predictions, accuracy, final_loss = sess.run(
[pred, accuracy, cost],
feed_dict={
x: X_test,
y: one_hot(y_test)
}
)
test_losses.append(final_loss)
test_accuracies.append(accuracy)
print("FINAL RESULT: " + \
"Batch Loss = {}".format(final_loss) + \
", Accuracy = {}".format(accuracy))
| LSTM.ipynb | guillaume-chevalier/LSTM-Human-Activity-Recognition | mit |
Training is good, but having visual insight is even better:
Okay, let's plot this simply in the notebook for now. | # (Inline plots: )
%matplotlib inline
font = {
'family' : 'Bitstream Vera Sans',
'weight' : 'bold',
'size' : 18
}
matplotlib.rc('font', **font)
width = 12
height = 12
plt.figure(figsize=(width, height))
indep_train_axis = np.array(range(batch_size, (len(train_losses)+1)*batch_size, batch_size))
plt.plot(indep_train_axis, np.array(train_losses), "b--", label="Train losses")
plt.plot(indep_train_axis, np.array(train_accuracies), "g--", label="Train accuracies")
indep_test_axis = np.append(
np.array(range(batch_size, len(test_losses)*display_iter, display_iter)[:-1]),
[training_iters]
)
plt.plot(indep_test_axis, np.array(test_losses), "b-", label="Test losses")
plt.plot(indep_test_axis, np.array(test_accuracies), "g-", label="Test accuracies")
plt.title("Training session's progress over iterations")
plt.legend(loc='upper right', shadow=True)
plt.ylabel('Training Progress (Loss or Accuracy values)')
plt.xlabel('Training iteration')
plt.show() | LSTM.ipynb | guillaume-chevalier/LSTM-Human-Activity-Recognition | mit |
And finally, the multi-class confusion matrix and metrics! | # Results
predictions = one_hot_predictions.argmax(1)
print("Testing Accuracy: {}%".format(100*accuracy))
print("")
print("Precision: {}%".format(100*metrics.precision_score(y_test, predictions, average="weighted")))
print("Recall: {}%".format(100*metrics.recall_score(y_test, predictions, average="weighted")))
print("f1_score: {}%".format(100*metrics.f1_score(y_test, predictions, average="weighted")))
print("")
print("Confusion Matrix:")
confusion_matrix = metrics.confusion_matrix(y_test, predictions)
print(confusion_matrix)
normalised_confusion_matrix = np.array(confusion_matrix, dtype=np.float32)/np.sum(confusion_matrix)*100
print("")
print("Confusion matrix (normalised to % of total test data):")
print(normalised_confusion_matrix)
print("Note: training and testing data is not equally distributed amongst classes, ")
print("so it is normal that more than a 6th of the data is correctly classifier in the last category.")
# Plot Results:
width = 12
height = 12
plt.figure(figsize=(width, height))
plt.imshow(
normalised_confusion_matrix,
interpolation='nearest',
cmap=plt.cm.rainbow
)
plt.title("Confusion matrix \n(normalised to % of total test data)")
plt.colorbar()
tick_marks = np.arange(n_classes)
plt.xticks(tick_marks, LABELS, rotation=90)
plt.yticks(tick_marks, LABELS)
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.show()
sess.close() | LSTM.ipynb | guillaume-chevalier/LSTM-Human-Activity-Recognition | mit |
Conclusion
Outstandingly, the final accuracy is 91%! And it can peak to values such as 93.25% at some lucky moments during training, depending on how the neural network's weights were randomly initialized at the start of training.
This means that the neural network is almost always able to correctly identify the movement type! Remember, the phone is attached at the waist and each series to classify has just a 128-sample window from two internal sensors (i.e., 2.56 seconds at 50 Hz), so it amazes me how accurate those predictions are given this small window of context and raw data. I've validated and re-validated that there is no important bug, and the community has used and tried this code a lot. (Note: be sure to report something in the issue tab if you find bugs, otherwise Quora, StackOverflow, and other StackExchange sites are the places for asking questions.)
I especially did not expect such good results for distinguishing between the labels "SITTING" and "STANDING". Those are seemingly almost the same thing from the point of view of a device placed at waist level, according to how the dataset was originally gathered. Though, it is still possible to see a little cluster on the matrix between those classes, which drifts away just a bit from the identity. This is great.
It is also possible to see that there was a slight difficulty in distinguishing between "WALKING", "WALKING_UPSTAIRS" and "WALKING_DOWNSTAIRS". Obviously, those activities are quite similar in terms of movements.
I also tried my code without the gyroscope, using only the 3D accelerometer's 6 features (and not changing the training hyperparameters), and got an accuracy of 87%. In general, gyroscopes consume more power than accelerometers, so it is preferable to turn them off.
Improvements
In another open-source repository of mine, the accuracy is pushed up to nearly 94% using a special deep LSTM architecture which combines the concepts of bidirectional RNNs, residual connections, and stacked cells. This architecture is also tested on another similar activity dataset. It resembles the nice architecture used in "Google’s Neural Machine Translation System: Bridging the Gap between Human and Machine Translation", without an attention mechanism, and with just the encoder part - as a "many to one" architecture instead of a "many to many" to be adapted to the Human Activity Recognition (HAR) problem. I also worked more on the problem and came up with the LARNN, however it's complicated for just a little gain. Thus the current, original activity recognition project is simply better to use for its outstanding simplicity.
If you want to learn more about deep learning, I have also built a list of the learning resources for deep learning which have proven to be the most useful to me, here.
References
The dataset can be found on the UCI Machine Learning Repository:
Davide Anguita, Alessandro Ghio, Luca Oneto, Xavier Parra and Jorge L. Reyes-Ortiz. A Public Domain Dataset for Human Activity Recognition Using Smartphones. 21st European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, ESANN 2013. Bruges, Belgium 24-26 April 2013.
Citation
Copyright (c) 2016 Guillaume Chevalier. To cite my code, you can point to the URL of the GitHub repository, for example:
Guillaume Chevalier, LSTMs for Human Activity Recognition, 2016,
https://github.com/guillaume-chevalier/LSTM-Human-Activity-Recognition
My code is available for free, even for private usage, for anyone under the MIT License; however, I ask that you cite me if you use the code.
Here is the BibTeX citation code:
@misc{chevalier2016lstms,
title={LSTMs for human activity recognition},
author={Chevalier, Guillaume},
year={2016}
}
Extra links
Connect with me
LinkedIn
Twitter
GitHub
Quora
YouTube
Dev/Consulting
Liked this project? Did it help you? Leave a star, fork and share the love!
This activity recognition project has been seen in:
Hacker News 1st page
Awesome TensorFlow
TensorFlow World
And more. | # Let's convert this notebook to a README automatically for the GitHub project's title page:
!jupyter nbconvert --to markdown LSTM.ipynb
!mv LSTM.md README.md | LSTM.ipynb | guillaume-chevalier/LSTM-Human-Activity-Recognition | mit |
Gap robust allan deviation comparison
Compute the GRADEV of a white phase noise. Compares two different
scenarios. 1) The original data and 2) ADEV estimate with gap robust ADEV. | def example1():
"""
Compute the GRADEV of a white phase noise. Compares two different
scenarios. 1) The original data and 2) ADEV estimate with gap robust ADEV.
"""
N = 1000
f = 1
y = np.random.randn(1,N)[0,:]
x = np.linspace(1,len(y),len(y))
x_ax, y_ax, err_l,err_h, ns = allan.gradev(y,f,x)
plt.errorbar(x_ax, y_ax,yerr=[err_l,err_h],label='GRADEV, no gaps')
y[int(np.floor(0.4*N)):int(np.floor(0.6*N))] = np.NaN # Simulate missing data
x_ax, y_ax, err_l,err_h, ns = allan.gradev(y,f,x)
plt.errorbar(x_ax, y_ax,yerr=[err_l,err_h], label='GRADEV, with gaps')
plt.xscale('log')
plt.yscale('log')
plt.grid()
plt.legend()
plt.xlabel('Tau / s')
plt.ylabel('Overlapping Allan deviation')
plt.show()
example1() | examples/gradev-demo.ipynb | telegraphic/allantools | gpl-3.0 |
White phase noise
Compute the GRADEV of a nonstationary white phase noise. | def example2():
"""
Compute the GRADEV of a nonstationary white phase noise.
"""
N=1000 # number of samples
f = 1 # data samples per second
s=1+5/N*np.arange(0,N)
y=s*np.random.randn(1,N)[0,:]
x = np.linspace(1,len(y),len(y))
x_ax, y_ax, err_l, err_h, ns = allan.gradev(y,f,x)
plt.loglog(x_ax, y_ax,'b.',label="No gaps")
y[int(0.4*N):int(0.6*N)] = np.NaN # Simulate missing data
x_ax, y_ax, err_l, err, ns = allan.gradev(y,f,x)
plt.loglog(x_ax, y_ax,'g.',label="With gaps")
plt.grid()
plt.legend()
plt.xlabel('Tau / s')
plt.ylabel('Overlapping Allan deviation')
plt.show()
example2() | examples/gradev-demo.ipynb | telegraphic/allantools | gpl-3.0 |
Partial Dependence Plot
In the talk, Youtube: PyData - Random Forests Best Practices for the Business World, one of the best practices the speaker mentioned when using tree-based models is to check for directional relationships. When using non-linear machine learning algorithms, such as the popular tree-based models random forest and gradient boosted trees, it can be hard to understand the relationship between predictors and the model outcome, as they do not give us handy coefficients like linear models do. For example, with a random forest, all we get is the feature importance. Although that tells us which features significantly influence the outcome, it does not tell us in which direction a predictor influences it. In this notebook, we'll be exploring the Partial dependence plot (PDP), a model-agnostic technique that gives us an approximate directional influence for a given feature that was used in the model. Note that much of the explanation is "borrowed" from the blog post at the following link, Blog: Introducing PDPbox; this documentation aims to improve upon it by giving a cleaner implementation.
Partial dependence plot (PDP) aims to visualize the marginal effect of a given predictor towards the model outcome by plotting out the average model outcome in terms of different values of the predictor. Let's first gain some intuition of how it works with a made up example. Assume we have a data set that only contains three data points and three features (A, B, C) as shown below.
<img src="img/pd1.png" width="30%" height="30%">
If we wish to see how feature A is influencing the prediction Y, what PDP does is to generate a new data set as follow. (here we assume that feature A only has three unique values: A1, A2, A3)
<img src="img/pd2.png" width="30%" height="30%">
We then perform the prediction as usual with this new set of data. As we can imagine, PDP would generate num_rows * num_grid_points predictions (here, the number of grid points equals the number of unique values of the target feature; more on this later) and average them for each unique value of Feature A.
<img src="img/pd3.png" width="30%" height="30%">
In the end, PDP would only plot out the average predictions for each unique value of our target feature.
<img src="img/pd4.png" width="30%" height="30%">
Let's now formalize this idea with some notation. The partial dependence function is defined as:
$$
\begin{align}
\hat{f}_{x_S}(x_S) = E_{x_C} \left[ f(x_S, x_C) \right]
\end{align}
$$
The term $x_S$ denotes the set of features for which the partial dependence function should be plotted, and $x_C$ are all the other features that were used in the machine learning model $f$. In other words, if there were $p$ predictors, $S$ is a subset of our $p$ predictors, $S \subset \left\{ x_1, x_2, \ldots, x_p \right\}$, and $C$ complements $S$ such that $S \cup C = \left\{ x_1, x_2, \ldots, x_p \right\}$. The function above is then estimated by calculating averages in the training data, which is also known as the Monte Carlo method:
$$
\begin{align}
\hat{f}_{x_S}(x_S) = \frac{1}{n} \sum_{i=1}^n f(x_S, x_{Ci})
\end{align}
$$
Where $\left\{ x_{C1}, x_{C2}, \ldots, x_{CN} \right\}$ are the values of $X_C$ occurring over all observations in the training data. In other words, in order to calculate the partial dependence of a given variable (or variables), the entire training set must be utilized for every set of joint values. For classification, where the machine learning model outputs probabilities, the partial dependence function displays the probability for a certain class given different values for features $x_S$; a straightforward way to handle multi-class problems is to plot one line per class.
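To make the estimator above concrete, here is a minimal sketch of the Monte Carlo estimate for a classifier that outputs probabilities; the names model, X, feature and grid are hypothetical placeholders, assuming a fitted scikit-learn style estimator and a pandas DataFrame.
# Monte Carlo estimate of the partial dependence function for one feature
import numpy as np

def partial_dependence(model, X, feature, grid, target_class=1):
    pd_values = []
    for value in grid:
        X_modified = X.copy()
        X_modified[feature] = value  # fix x_S to a single grid value for every row
        prob = model.predict_proba(X_modified)[:, target_class]
        pd_values.append(prob.mean())  # average over the empirical distribution of x_C
    return np.array(pd_values)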
Individual Conditional Expectation (ICE) Plot
As an extension of a PDP, ICE plot visualizes the relationship between a feature and the predicted responses for each observation. While a PDP visualizes the averaged relationship between features and predicted responses, a set of ICE plots disaggregates the averaged information and visualizes an individual dependence for each observation. Hence, instead of only plotting out the average predictions, ICEbox displays all individual lines. (three lines in total in this case)
<img src="img/pd5.png" width="30%" height="30%">
The authors of the Paper: A. Goldstein, A. Kapelner, J. Bleich, E. Pitkin, Peeking Inside the Black Box: Visualizing Statistical Learning with Plots of Individual Conditional Expectation, claim that with everything displayed in its raw state, interesting discoveries would not be obscured by the averaging inherent in PDP. A vivid example from the paper is shown below:
<img src="img/pd6.png" width="50%" height="50%">
In this example, if we only look at the PDP in Figure b, we would think that on average, the feature X2 is not meaningfully associated with our target response variable Y. However, judging from the scatter plot shown in Figure a, this conclusion is plainly wrong. Now if we were to plot out the individual estimated conditional expectation curves, everything becomes more obvious.
<img src="img/pd7.png" width="30%" height="30%">
After gaining an understanding of the procedures behind PDP and ICE plots, we can observe that:
PDP is a global method, it takes into account all instances and makes a statement about the global relationship of a feature with the predicted outcome.
One of the main advantages of PDP is that it can be used to interpret the result of any "black box" learning method.
PDP can be quite computationally expensive when the data set becomes large.
Owing to the limitations of computer graphics, and human perception, the size of the subsets $x_S$ must be small (l ≈ 1,2,3). There are of course a large number of such subsets, but only those chosen from among the usually much smaller set of highly relevant predictors are likely to be informative.
PDP can obfuscate relationships that come from interactions. PDPs show us what the average relationship between feature $x_S$ and $\hat{y}$ looks like. This works well only in cases where the interactions between $x_S$ and the remaining features $x_C$ are weak. In cases where interactions do exist, the ICE plot may give a lot more insight into the underlying relationship.
Implementation
We'll be using the Titanic dataset (details of the dataset are listed in the link) to test our implementation. | # we download the training data and store it
# under the `data` directory
data_dir = Path('data')
data_path = data_dir / 'train.csv'
data = pd.read_csv(data_path)
print('dimension: ', data.shape)
print('features: ', data.columns)
data.head()
# some naive feature engineering
data['Age'] = data['Age'].fillna(data['Age'].median())
data['Embarked'] = data['Embarked'].fillna('S')
data['Sex'] = data['Sex'].apply(lambda x: 1 if x == 'male' else 0)
data = pd.get_dummies(data, columns = ['Embarked'])
# features/columns that are used
label = data['Survived']
features = [
'Pclass', 'Sex',
'Age', 'SibSp',
'Parch', 'Fare',
'Embarked_C', 'Embarked_Q', 'Embarked_S']
data = data[features]
X_train, X_test, y_train, y_test = train_test_split(
data, label, test_size = 0.2, random_state = 1234, stratify = label)
# fit a baseline random forest model and show its top 2 most important features
rf = RandomForestClassifier(n_estimators = 50, random_state = 1234)
rf.fit(X_train, y_train)
print('top 2 important features:')
imp_index = np.argsort(rf.feature_importances_)
print(features[imp_index[-1]])
print(features[imp_index[-2]]) | model_selection/partial_dependence/partial_dependence.ipynb | ethen8181/machine-learning | mit |
As mentioned above, tree-based models list the top important features, but it is not clear whether they have a positive or negative impact on the result. This is where tools such as partial dependence plots can help us communicate the results better to others. | from partial_dependence import PartialDependenceExplainer
plt.rcParams['figure.figsize'] = 16, 9
# we specify the feature name and its type to fit the partial dependence
# result, after fitting the result, we can call .plot to visualize it
# since this is a binary classification model, when we call the plot
# method, we tell it which class are we targeting, in this case 1 means
# the passenger did indeed survive (more on centered argument later)
pd_explainer = PartialDependenceExplainer(estimator = rf, verbose = 0)
pd_explainer.fit(data, feature_name = 'Sex', feature_type = 'cat')
pd_explainer.plot(centered = False, target_class = 1)
plt.show() | model_selection/partial_dependence/partial_dependence.ipynb | ethen8181/machine-learning | mit |
Hopefully, we can agree that the partial dependence plot makes intuitive sense: for the categorical feature Sex, 1 indicates that the passenger was male. And we know that during the Titanic accident, the majority of the survivors were female passengers, so the plot is telling us that male passengers will on average have around a 40% lower chance of surviving compared with female passengers. Also, instead of only plotting the "partial dependence" line, the plot fills in the standard deviation range around it. This essentially borrows the idea from the ICE plot that plotting only the average may obfuscate the relationship.
Centered plot can be useful when we are not interested in seeing the absolute change of a predicted value, but rather the difference in prediction compared to a fixed point of the feature range. | # centered = True is actually the default
pd_explainer.plot(centered = True, target_class = 1)
plt.show() | model_selection/partial_dependence/partial_dependence.ipynb | ethen8181/machine-learning | mit |
We can perform the same process for numerical features such as Fare. We know that more people from the upper class survived, and people from the upper class generally had to pay a higher Fare to board the Titanic. The partial dependence plot below also depicts this trend. | pd_explainer.fit(data, feature_name = 'Fare', feature_type = 'num')
pd_explainer.plot(target_class = 1)
plt.show() | model_selection/partial_dependence/partial_dependence.ipynb | ethen8181/machine-learning | mit |
Solution.
Clearly, we need the model $y=\theta_0 + \theta_1 x + \theta_2 x^2 + \theta_3 x^3$. | n = 9 # Sample size
k = 4 # Number of parameters | statistics/hw-13/hw-13.3.ipynb | eshlykov/mipt-day-after-day | unlicense |
Consider the response. | Y = numpy.array([3.9, 5.0, 5.7, 6.5, 7.1, 7.6, 7.8, 8.1, 8.4]).reshape(n, 1)
print(Y) | statistics/hw-13/hw-13.3.ipynb | eshlykov/mipt-day-after-day | unlicense |
Consider the regressor. | x = numpy.array([4.0, 5.2, 6.1, 7.0, 7.9, 8.6, 8.9, 9.5, 9.9])
X = numpy.ones((n, k))
X[:, 1] = x
X[:, 2] = x ** 2
X[:, 3] = x ** 3
print(X) | statistics/hw-13/hw-13.3.ipynb | eshlykov/mipt-day-after-day | unlicense |
Let's use the classical formula to obtain the estimate. | Theta = inv(X.T @ X) @ X.T @ Y
print(Theta) | statistics/hw-13/hw-13.3.ipynb | eshlykov/mipt-day-after-day | unlicense |
Let's plot the resulting function and overlay the sample points. | x = numpy.linspace(3.5, 10.4, 1000)
y = Theta[0] + x * Theta[1] + x ** 2 * Theta[2] + x ** 3 * Theta[3]
matplotlib.pyplot.figure(figsize=(20, 8))
matplotlib.pyplot.plot(x, y, color='turquoise', label='Предсказание', linewidth=2.5)
matplotlib.pyplot.scatter(X[:, 1], Y, s=40.0, label='Выборка', color='blue', alpha=0.5)
matplotlib.pyplot.legend()
matplotlib.pyplot.title('Функция $f(x)$')
matplotlib.pyplot.grid()
matplotlib.pyplot.show() | statistics/hw-13/hw-13.3.ipynb | eshlykov/mipt-day-after-day | unlicense |
<a id='loa'></a>
1. Loading and Inspection
Load the demo data | dp = hs.load('./data/02/polymorphic_nanowire.hdf5')
dp | doc/demos/02 GaAs Nanowire - Phase Mapping - Orientation Mapping.ipynb | pycrystem/pycrystem | gpl-3.0 |
Set data type, scale intensity range and set calibration | dp.data = dp.data.astype('float64')
dp.data *= 1 / dp.data.max() | doc/demos/02 GaAs Nanowire - Phase Mapping - Orientation Mapping.ipynb | pycrystem/pycrystem | gpl-3.0 |
Inspect metadata | dp.metadata | doc/demos/02 GaAs Nanowire - Phase Mapping - Orientation Mapping.ipynb | pycrystem/pycrystem | gpl-3.0 |
Plot an interactive virtual image to inspect data | roi = hs.roi.CircleROI(cx=72, cy=72, r_inner=0, r=2)
dp.plot_integrated_intensity(roi=roi, cmap='viridis') | doc/demos/02 GaAs Nanowire - Phase Mapping - Orientation Mapping.ipynb | pycrystem/pycrystem | gpl-3.0 |
<a id='pre'></a>
2. Pre-processing
Apply affine transformation to correct for off axis camera geometry | scale_x = 0.995
scale_y = 1.031
offset_x = 0.631
offset_y = -0.351
dp.apply_affine_transformation(np.array([[scale_x, 0, offset_x],
[0, scale_y, offset_y],
[0, 0, 1]])) | doc/demos/02 GaAs Nanowire - Phase Mapping - Orientation Mapping.ipynb | pycrystem/pycrystem | gpl-3.0 |
Perform difference of gaussian background subtraction with various parameters on one selected diffraction pattern and plot to identify good parameters | from pyxem.utils.expt_utils import investigate_dog_background_removal_interactive
dp_test_area = dp.inav[0, 0]
gauss_stddev_maxs = np.arange(2, 12, 0.2) # min, max, step
gauss_stddev_mins = np.arange(1, 4, 0.2) # min, max, step
investigate_dog_background_removal_interactive(dp_test_area,
gauss_stddev_maxs,
gauss_stddev_mins) | doc/demos/02 GaAs Nanowire - Phase Mapping - Orientation Mapping.ipynb | pycrystem/pycrystem | gpl-3.0 |
Remove background using difference of gaussians method with parameters identified above | dp = dp.subtract_diffraction_background('difference of gaussians',
min_sigma=2, max_sigma=8,
lazy_result=False) | doc/demos/02 GaAs Nanowire - Phase Mapping - Orientation Mapping.ipynb | pycrystem/pycrystem | gpl-3.0 |
Perform further adjustments to the data ranges | dp.data -= dp.data.min()
dp.data *= 1 / dp.data.max() | doc/demos/02 GaAs Nanowire - Phase Mapping - Orientation Mapping.ipynb | pycrystem/pycrystem | gpl-3.0 |
Set diffraction calibration and scan calibration | dp = pxm.signals.ElectronDiffraction2D(dp) #this is needed because of a bug in the code
dp.set_diffraction_calibration(diffraction_calibration)
dp.set_scan_calibration(10) | doc/demos/02 GaAs Nanowire - Phase Mapping - Orientation Mapping.ipynb | pycrystem/pycrystem | gpl-3.0 |
<a id='tem'></a>
3. Pattern Matching
Pattern matching generates a database of simulated diffraction patterns and then compares all simulated patterns against each experimental pattern to find the best match
Import generators required for simulation and indexation | from diffsims.libraries.structure_library import StructureLibrary
from diffsims.generators.diffraction_generator import DiffractionGenerator
from diffsims.generators.library_generator import DiffractionLibraryGenerator
from diffsims.generators.zap_map_generator import get_rotation_from_z_to_direction
from diffsims.generators.rotation_list_generators import get_grid_around_beam_direction
from pyxem.generators.indexation_generator import TemplateIndexationGenerator | doc/demos/02 GaAs Nanowire - Phase Mapping - Orientation Mapping.ipynb | pycrystem/pycrystem | gpl-3.0 |
3.1. Define Library of Structures & Orientations
Define the crystal phases to be included in the simulated library | structure_zb = diffpy.structure.loadStructure('./data/02/GaAs_mp-2534_conventional_standard.cif')
structure_wz = diffpy.structure.loadStructure('./data/02/GaAs_mp-8883_conventional_standard.cif') | doc/demos/02 GaAs Nanowire - Phase Mapping - Orientation Mapping.ipynb | pycrystem/pycrystem | gpl-3.0 |
Create a basic rotations list. | za110c = get_rotation_from_z_to_direction(structure_zb, [1,1,0])
rot_list_cubic = get_grid_around_beam_direction(beam_rotation=za110c, resolution=1, angular_range=(0,180))
za110h = get_rotation_from_z_to_direction(structure_wz, [1,1,0])
rot_list_hex = get_grid_around_beam_direction(beam_rotation=za110h, resolution=1, angular_range=(0,180)) | doc/demos/02 GaAs Nanowire - Phase Mapping - Orientation Mapping.ipynb | pycrystem/pycrystem | gpl-3.0 |
Construct a StructureLibrary defining crystal structures and orientations for which diffraction will be simulated | struc_lib = StructureLibrary(['ZB','WZ'],
[structure_zb,structure_wz],
[rot_list_cubic,rot_list_hex]) | doc/demos/02 GaAs Nanowire - Phase Mapping - Orientation Mapping.ipynb | pycrystem/pycrystem | gpl-3.0 |
<a id='temb'></a>
3.2. Simulate Diffraction for all Structures & Orientations
Define a diffsims DiffractionGenerator with diffraction simulation parameters | diff_gen = DiffractionGenerator(accelerating_voltage=accelarating_voltage) | doc/demos/02 GaAs Nanowire - Phase Mapping - Orientation Mapping.ipynb | pycrystem/pycrystem | gpl-3.0 |
Initialize a diffsims DiffractionLibraryGenerator | lib_gen = DiffractionLibraryGenerator(diff_gen) | doc/demos/02 GaAs Nanowire - Phase Mapping - Orientation Mapping.ipynb | pycrystem/pycrystem | gpl-3.0 |
Calculate library of diffraction patterns for all phases and unique orientations | target_pattern_dimension_pixels = dp.axes_manager.signal_shape[0]
half_size = target_pattern_dimension_pixels // 2
reciprocal_radius = diffraction_calibration*(half_size - 1)
diff_lib = lib_gen.get_diffraction_library(struc_lib,
calibration=diffraction_calibration,
reciprocal_radius=reciprocal_radius,
half_shape=(half_size, half_size),
max_excitation_error=1/10,
with_direct_beam=False) | doc/demos/02 GaAs Nanowire - Phase Mapping - Orientation Mapping.ipynb | pycrystem/pycrystem | gpl-3.0 |
Optionally, save the library for later use. | #diff_lib.pickle_library('./GaAs_cubic_hex.pickle') | doc/demos/02 GaAs Nanowire - Phase Mapping - Orientation Mapping.ipynb | pycrystem/pycrystem | gpl-3.0 |
If saved, the library can be loaded as follows | #from diffsims.libraries.diffraction_library import load_DiffractionLibrary
#diff_lib = load_DiffractionLibrary('./GaAs_cubic_hex.pickle', safety=True) | doc/demos/02 GaAs Nanowire - Phase Mapping - Orientation Mapping.ipynb | pycrystem/pycrystem | gpl-3.0 |
<a id='temb'></a>
3.3. Pattern Matching Indexation
Initialize TemplateIndexationGenerator with the experimental data and diffraction library and perform correlation, returning the n_largest matches with highest correlation.
<div class="alert alert-block alert-warning"><b>Note:</b> This workflow has been changed from previous version, make sure you have pyxem 0.13.0 or later installed</div> | indexer = TemplateIndexationGenerator(dp, diff_lib)
indexation_results = indexer.correlate(n_largest=3) | doc/demos/02 GaAs Nanowire - Phase Mapping - Orientation Mapping.ipynb | pycrystem/pycrystem | gpl-3.0 |
Check the solutions via a plotting (can be slow, so we don't run by default) | if False:
indexation_results.plot_best_matching_results_on_signal(dp, diff_lib) | doc/demos/02 GaAs Nanowire - Phase Mapping - Orientation Mapping.ipynb | pycrystem/pycrystem | gpl-3.0 |
Get crystallographic map from indexation results | crystal_map = indexation_results.to_crystal_map() | doc/demos/02 GaAs Nanowire - Phase Mapping - Orientation Mapping.ipynb | pycrystem/pycrystem | gpl-3.0 |
crystal_map is now a CrystalMap object, which comes from orix, see their documentation for details. Below we lift their code to plot a phase map | from matplotlib import pyplot as plt
from orix import plot
fig, ax = plt.subplots(subplot_kw=dict(projection="plot_map"))
im = ax.plot_map(crystal_map) | doc/demos/02 GaAs Nanowire - Phase Mapping - Orientation Mapping.ipynb | pycrystem/pycrystem | gpl-3.0 |
<a id='vec'></a>
4. Vector Matching
<div class="alert alert-block alert-danger"><b>Note:</b> This workflow is less well developed than the template matching one, and may well be broken</div>
Vector matching generates a database of vector pairs (magnitues and inter-vector angles) and then compares all theoretical values against each measured diffraction vector pair to find the best match
Import generators required for simulation and indexation | from diffsims.generators.library_generator import VectorLibraryGenerator
from diffsims.libraries.structure_library import StructureLibrary
from diffsims.libraries.vector_library import load_VectorLibrary
from pyxem.generators.indexation_generator import VectorIndexationGenerator
from pyxem.generators.subpixelrefinement_generator import SubpixelrefinementGenerator
from pyxem.signals.diffraction_vectors import DiffractionVectors | doc/demos/02 GaAs Nanowire - Phase Mapping - Orientation Mapping.ipynb | pycrystem/pycrystem | gpl-3.0 |
<a id='veca'></a>
4.1. Define Library of Structures
Define crystal structure for which to determine theoretical vector pairs | structure_zb = diffpy.structure.loadStructure('./data/02/GaAs_mp-2534_conventional_standard.cif')
structure_wz = diffpy.structure.loadStructure('./data/02/GaAs_mp-8883_conventional_standard.cif')
structure_library = StructureLibrary(['ZB', 'WZ'],
[structure_zb, structure_wz],
[[], []]) | doc/demos/02 GaAs Nanowire - Phase Mapping - Orientation Mapping.ipynb | pycrystem/pycrystem | gpl-3.0 |
Initialize VectorLibraryGenerator with structures to be considered | vlib_gen = VectorLibraryGenerator(structure_library) | doc/demos/02 GaAs Nanowire - Phase Mapping - Orientation Mapping.ipynb | pycrystem/pycrystem | gpl-3.0 |
Determine VectorLibrary with all vectors within given reciprocal radius | reciprocal_radius = diffraction_calibration*(half_size - 1)/2
reciprocal_radius
vec_lib = vlib_gen.get_vector_library(reciprocal_radius) | doc/demos/02 GaAs Nanowire - Phase Mapping - Orientation Mapping.ipynb | pycrystem/pycrystem | gpl-3.0 |
Optionally, save the library for later use | #vec_lib.pickle_library('./GaAs_cubic_hex_vectors.pickle')
#vec_lib = load_VectorLibrary('./GaAs_cubic_hex_vectors.pickle',safety=True) | doc/demos/02 GaAs Nanowire - Phase Mapping - Orientation Mapping.ipynb | pycrystem/pycrystem | gpl-3.0 |
4.2. Find Diffraction Peaks
Tune peak finding parameters interactively | dp.find_peaks(interactive=False) | doc/demos/02 GaAs Nanowire - Phase Mapping - Orientation Mapping.ipynb | pycrystem/pycrystem | gpl-3.0 |
Perform peak finding on the data with parameters from above | peaks = dp.find_peaks(method='difference_of_gaussian',
min_sigma=0.005,
max_sigma=5.0,
sigma_ratio=2.0,
threshold=0.06,
overlap=0.8,
interactive=False) | doc/demos/02 GaAs Nanowire - Phase Mapping - Orientation Mapping.ipynb | pycrystem/pycrystem | gpl-3.0 |
Convert the peaks back into a DiffractionVectors object | peaks = DiffractionVectors(peaks).T | doc/demos/02 GaAs Nanowire - Phase Mapping - Orientation Mapping.ipynb | pycrystem/pycrystem | gpl-3.0 |
peaks now contains the 2D positions of the diffraction spots on the detector. The vector matching method works in 3D coordinates, which are found by projecting the detector positions back onto the Ewald sphere. Because the methods that follow are slow, we constrain ourselves to a smaller subset of the data. | peaks = peaks.inav[:2,:2]
peaks.calculate_cartesian_coordinates?
peaks.calculate_cartesian_coordinates(accelerating_voltage=accelarating_voltage,
camera_length=camera_length) | doc/demos/02 GaAs Nanowire - Phase Mapping - Orientation Mapping.ipynb | pycrystem/pycrystem | gpl-3.0 |
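The projection mentioned above can be sketched without pyxem. For an incident wave number k = 1/λ, a detector-plane vector (gx, gy) gains an out-of-plane component so that the full scattering vector lies on the Ewald sphere. This is a simplified flat-detector geometry for illustration only, not pyxem's exact implementation; the wavelength below is a hypothetical value (roughly that of 200 kV electrons).

import numpy as np

def project_to_ewald_sphere(gx, gy, wavelength):
    # Ewald sphere of radius k = 1/wavelength passing through the reciprocal-space origin:
    # gx**2 + gy**2 + (gz + k)**2 = k**2  ->  gz = sqrt(k**2 - gx**2 - gy**2) - k
    k = 1.0 / wavelength
    gz = np.sqrt(k**2 - gx**2 - gy**2) - k   # small and negative for small scattering angles
    return np.array([gx, gy, gz])

print(project_to_ewald_sphere(0.25, 0.0, 0.0251))   # hypothetical values in 1/Angstrom and Angstrom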
<a id='vecb'></a>
4.3. Vector Matching Indexation
Initialize VectorIndexationGenerator with the experimental data and the vector library, then perform indexation using n_peaks_to_index peaks and return the n_best indexation results.
<div class="alert alert-block alert-danger"><b>Alert: This code no longer works on this example, and may even be completely broken. Caution is advised.</b> </div> | #indexation_generator = VectorIndexationGenerator(peaks, vec_lib)
#indexation_results = indexation_generator.index_vectors(mag_tol=3*diffraction_calibration,
# angle_tol=4, # degree
# index_error_tol=0.2,
# n_peaks_to_index=7,
# n_best=5,
# show_progressbar=True)
#indexation_results.data | doc/demos/02 GaAs Nanowire - Phase Mapping - Orientation Mapping.ipynb | pycrystem/pycrystem | gpl-3.0 |
Refine all crystal orientations for improved phase reliability and orientation reliability maps. | #refined_results = indexation_generator.refine_n_best_orientations(indexation_results,
# accelarating_voltage=accelarating_voltage,
# camera_length=camera_length,
# index_error_tol=0.2,
# vary_angles=True,
# vary_scale=True,
#                                                                 method="leastsq") | doc/demos/02 GaAs Nanowire - Phase Mapping - Orientation Mapping.ipynb | pycrystem/pycrystem | gpl-3.0 |
Get crystallographic map from optimized indexation results. | #crystal_map = refined_results.get_crystallographic_map() | doc/demos/02 GaAs Nanowire - Phase Mapping - Orientation Mapping.ipynb | pycrystem/pycrystem | gpl-3.0 |
See the object's documentation for further details | #crystal_map? | doc/demos/02 GaAs Nanowire - Phase Mapping - Orientation Mapping.ipynb | pycrystem/pycrystem | gpl-3.0 |
Exercise 1
Define a Python function called "get_cheby_matrix(nx)" that initializes the Chebyshev derivative matrix $D_{ij}$ | # Function for setting up the Chebyshev derivative matrix
def get_cheby_matrix(nx):
cx = np.zeros(nx+1)
x = np.zeros(nx+1)
for ix in range(0,nx+1):
x[ix] = np.cos(np.pi * ix / nx)
cx[0] = 2.
cx[nx] = 2.
cx[1:nx] = 1.
D = np.zeros((nx+1,nx+1))
for i in range(0, nx+1):
for j in range(0, nx+1):
if i==j and i!=0 and i!=nx:
D[i,i]=-x[i]/(2.0*(1.0-x[i]*x[i]))
            elif i != j:
                # Off-diagonal entries; the two corner diagonal entries are set explicitly below
                D[i,j]=(cx[i]*(-1)**(i+j))/(cx[j]*(x[i]-x[j]))
D[0,0] = (2.*nx**2+1.)/6.
D[nx,nx] = -D[0,0]
return D | 05_pseudospectral/cheby_derivative_solution.ipynb | davofis/computational_seismology | gpl-3.0 |
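For reference, the entries implemented above, on the collocation points $x_i = \cos(\pi i / N)$ with $c_0 = c_N = 2$ and $c_i = 1$ otherwise, are

$$ D_{ij} = \frac{c_i}{c_j}\,\frac{(-1)^{i+j}}{x_i - x_j} \quad (i \neq j), \qquad D_{ii} = -\frac{x_i}{2\,(1 - x_i^2)} \quad (0 < i < N), \qquad D_{00} = \frac{2N^2 + 1}{6} = -D_{NN}. $$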
Exercise 2
Calculate the numerical derivative by applying the differentiation matrix $D_{ij}$. Define an arbitrary function (e.g. a Gaussian) and initialize its analytical derivative on the Chebyshev collocation points. Calculate the numerical derivative and the difference to the analytical solution. Vary the wavenumber content of the analytical function. Does it make a difference? Why is the numerical result not entirely exact? | # Initialize arbitrary test function on Chebyshev collocation points
nx = 200 # Number of grid points
x = np.zeros(nx+1)
for ix in range(0,nx+1):
x[ix] = np.cos(ix * np.pi / nx)
dxmin = min(abs(np.diff(x)))
dxmax = max(abs(np.diff(x)))
# Function example: Gaussian
# Width of Gaussian
s = .2
# Gaussian function (modify!)
f = np.exp(-1/s**2 * x**2)
# Initialize differentiation matrix
D = get_cheby_matrix(nx)
# Analytical derivative
df_ana = -2/s**2 * x * np.exp(-1/s**2 * x**2)
# Calculate numerical derivative using differentiation matrix
df_num = D @ f
# To make the error visible, it is multiplied by 10^12
df_err = 1e12*(df_ana - df_num)
# Calculate error between analytical and numerical solution
err = np.sum((df_num - df_ana)**2) / np.sum(df_ana**2) * 100
print('Error: %s' %err) | 05_pseudospectral/cheby_derivative_solution.ipynb | davofis/computational_seismology | gpl-3.0 |
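To address the question about wavenumber content, here is a minimal sketch reusing x, D and the error measure from the cell above: narrower Gaussians contain higher wavenumbers, and the relative error grows once the function is no longer well resolved on the grid.

# Sweep the Gaussian width: smaller s means higher wavenumber content
for s_test in [0.5, 0.2, 0.1, 0.05]:
    f_test = np.exp(-1/s_test**2 * x**2)
    df_ana_test = -2/s_test**2 * x * f_test
    df_num_test = D @ f_test
    rel_err = np.sum((df_num_test - df_ana_test)**2) / np.sum(df_ana_test**2) * 100
    print('s = %5.2f  ->  relative error = %g %%' % (s_test, rel_err))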
Exercise 3
Now that the numerical derivative is available, we can visually inspect our results. Make a plot of both the analytical and numerical derivatives, together with the difference error. | # Plot analytical and numerical derivatives
# ---------------------------------------------------------------
plt.subplot(2,1,1)
plt.plot(x, f, "g", lw = 1.5, label='Gaussian')
plt.legend(loc='upper right', shadow=True)
plt.xlabel('$x$')
plt.ylabel('$f(x)$')
plt.subplot(2,1,2)
plt.plot(x, df_ana, "b", lw = 1.5, label='Analytical')
plt.plot(x, df_num, 'k--', lw = 1.5, label='Numerical')
plt.plot(x, df_err, "r", lw = 1.5, label='Difference')
plt.legend(loc='upper right', shadow=True)
plt.xlabel('$x$')
plt.ylabel('$\partial_x f(x)$')
plt.show() | 05_pseudospectral/cheby_derivative_solution.ipynb | davofis/computational_seismology | gpl-3.0 |
Document Authors
Set document authors | # Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s) | notebooks/cccr-iitm/cmip6/models/sandbox-1/ocean.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
Document Contributors
Specify document contributors | # Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s) | notebooks/cccr-iitm/cmip6/models/sandbox-1/ocean.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
Document Publication
Specify document publication status | # Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0) | notebooks/cccr-iitm/cmip6/models/sandbox-1/ocean.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean model. | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
| notebooks/cccr-iitm/cmip6/models/sandbox-1/ocean.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean model code (NEMO 3.6, MOM 5.0,...) | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
| notebooks/cccr-iitm/cmip6/models/sandbox-1/ocean.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean model. | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OGCM"
# "slab ocean"
# "mixed layer ocean"
# "Other: [Please specify]"
# TODO - please enter value(s)
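# Illustration only (hypothetical choice, not a statement about this model):
# the fill-in pattern is to pass one of the valid choices listed above, e.g.
#     DOC.set_value("OGCM")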
| notebooks/cccr-iitm/cmip6/models/sandbox-1/ocean.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the ocean. | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Primitive equations"
# "Non-hydrostatic"
# "Boussinesq"
# "Other: [Please specify]"
# TODO - please enter value(s)
| notebooks/cccr-iitm/cmip6/models/sandbox-1/ocean.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
1.5. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the ocean component. | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# "Salinity"
# "U-velocity"
# "V-velocity"
# "W-velocity"
# "SSH"
# "Other: [Please specify]"
# TODO - please enter value(s)
| notebooks/cccr-iitm/cmip6/models/sandbox-1/ocean.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EOS for sea water | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Wright, 1997"
# "Mc Dougall et al."
# "Jackett et al. 2006"
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
| notebooks/cccr-iitm/cmip6/models/sandbox-1/ocean.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
2.2. Eos Functional Temp
Is Required: TRUE Type: ENUM Cardinality: 1.1
Temperature used in EOS for sea water | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# TODO - please enter value(s)
| notebooks/cccr-iitm/cmip6/models/sandbox-1/ocean.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
2.3. Eos Functional Salt
Is Required: TRUE Type: ENUM Cardinality: 1.1
Salinity used in EOS for sea water | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Practical salinity Sp"
# "Absolute salinity Sa"
# TODO - please enter value(s)
| notebooks/cccr-iitm/cmip6/models/sandbox-1/ocean.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
2.4. Eos Functional Depth
Is Required: TRUE Type: ENUM Cardinality: 1.1
Depth or pressure used in EOS for sea water ? | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pressure (dbars)"
# "Depth (meters)"
# TODO - please enter value(s)
| notebooks/cccr-iitm/cmip6/models/sandbox-1/ocean.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
2.5. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
| notebooks/cccr-iitm/cmip6/models/sandbox-1/ocean.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
2.6. Ocean Specific Heat
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Specific heat in ocean (cpocean) in J/(kg K) | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
| notebooks/cccr-iitm/cmip6/models/sandbox-1/ocean.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
2.7. Ocean Reference Density
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Boussinesq reference density (rhozero) in kg / m3 | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
| notebooks/cccr-iitm/cmip6/models/sandbox-1/ocean.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Reference date of bathymetry | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Present day"
# "21000 years BP"
# "6000 years BP"
# "LGM"
# "Pliocene"
# "Other: [Please specify]"
# TODO - please enter value(s)
| notebooks/cccr-iitm/cmip6/models/sandbox-1/ocean.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
3.2. Type
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the bathymetry fixed in time in the ocean ? | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
| notebooks/cccr-iitm/cmip6/models/sandbox-1/ocean.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
3.3. Ocean Smoothing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any smoothing or hand editing of bathymetry in ocean | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
| notebooks/cccr-iitm/cmip6/models/sandbox-1/ocean.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
3.4. Source
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe source of bathymetry in ocean | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.source')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
| notebooks/cccr-iitm/cmip6/models/sandbox-1/ocean.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
4. Key Properties --> Nonoceanic Waters
Non-oceanic waters treatment in ocean
4.1. Isolated Seas
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how isolated seas is performed | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
| notebooks/cccr-iitm/cmip6/models/sandbox-1/ocean.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
4.2. River Mouth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how river mouth mixing or estuaries specific treatment is performed | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
| notebooks/cccr-iitm/cmip6/models/sandbox-1/ocean.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component. | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
| notebooks/cccr-iitm/cmip6/models/sandbox-1/ocean.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
5.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier. | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
| notebooks/cccr-iitm/cmip6/models/sandbox-1/ocean.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
5.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s). | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
| notebooks/cccr-iitm/cmip6/models/sandbox-1/ocean.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc. | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
| notebooks/cccr-iitm/cmip6/models/sandbox-1/ocean.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |