Unnamed: 0 (int64, 0–16k) | text_prompt (stringlengths 149–45.7k) | code_prompt (stringlengths 67–140k)
---|---|---
15,900 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
CSAL4243
Step1: Feature Scaling and Mean Normalization
Step2: Initialize Hyper Parameters
Step3: Model/Hypothesis Function
Step5: Cost Function
Step7: Gradient Descent Function
Step8: Run Gradient Descent
Step9: Plot Convergence
Step10: Predict output using trained model | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pandas as pd
import numpy as np
from sklearn import linear_model
import matplotlib.pyplot as plt
import matplotlib as mpl
# read data in pandas frame
dataframe = pd.read_csv('datasets/house_dataset2.csv', encoding='utf-8')
# check data by printing first few rows
dataframe.head()
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
fig.set_size_inches(12.5, 7.5)
ax = fig.add_subplot(111, projection='3d')
ax.scatter(xs=dataframe['size'], ys=dataframe['bedrooms'], zs=dataframe['price'])
ax.set_ylabel('bedrooms'); ax.set_xlabel('size'); ax.set_zlabel('price')
# ax.view_init(10, -45)
plt.show()
Explanation: CSAL4243: Introduction to Machine Learning
Muhammad Mudassir Khan (mudasssir.khan@ucp.edu.pk)
Lecture 5: Multivariate Regression
Overview
Machine Learning pipeline
Linear Regression with one variable
Model Representation
Vectorize Model
Linear Regression with multiple variables
Cost Function
Gradient Descent
Speed up gradient descent
Feature Scaling
Mean Normalization
Combining Both
Learning Rate $\alpha$
Automatic Convergence Test
Linear Regression with Multiple Variables Example
Read data
Feature Scaling and Mean Normalization
Initialize Hyper Parameters
Model/Hypothesis Function
Cost Function
Gradient Descent
Run Gradient Descent
Plot Convergence
Predict output using trained model
Resources
Credits
<br>
<br>
Classification vs Regression
<img style="float: left;" src="images/05_05.jpg" width=300> <img style="float: center;" src="images/05_04.png" width=400>
<br>
Machine Learning pipeline
<img style="float: left;" src="images/model.png">
x is called input variables or input features.
y is called output or target variable. Also sometimes known as label.
h is called hypothesis or model.
pair (x<sup>(i)</sup>,y<sup>(i)</sup>) is called a sample or training example
dataset of all training examples is called training set.
m is the number of samples in a dataset.
n is the number of features in a dataset excluding label.
<img style="float: left;" src="images/02_02.png", width=400>
<br>
<br>
Linear Regression with one variable
Model Representation
Model is represented by h<sub>$\theta$</sub>(x) or simply h(x)
For Linear regression with one input variable h(x) = $\theta$<sub>0</sub> + $\theta$<sub>1</sub>x
<img style="float: left;" src="images/02_01.png">
$\theta$<sub>0</sub> and $\theta$<sub>1</sub> are called weights or parameters.
We need to find $\theta$<sub>0</sub> and $\theta$<sub>1</sub> that maximize the performance of the model.
<br>
Vectorize Model
<img style="float: right;" src="images/02_02.png" width=300>
Write model in form of matrix multiplication
$h(x)$ = $X \times \theta$
$X$ and $\theta$ are both matrices
$X = \left[ \begin{array}{c}
x_1 \\
x_2 \\
x_3 \\
\vdots \\
x_{m}
\end{array} \right]$
$h(x) = \theta_0 + \theta_1 x = X \times \theta = \left[ \begin{array}{cc}
1 & x_i
\end{array} \right] \times \left[ \begin{array}{c}
\theta_0 \\
\theta_1
\end{array} \right]$
$h(x) = \left[ \begin{array}{c}
\theta_0 + \theta_1 x_1 \\
\theta_0 + \theta_1 x_2 \\
\theta_0 + \theta_1 x_3 \\
\vdots \\
\theta_0 + \theta_1 x_{m}
\end{array} \right] = \left[ \begin{array}{cc}
1 & x_1 \\
1 & x_2 \\
1 & x_3 \\
\vdots & \vdots \\
1 & x_{m}
\end{array} \right] \times \left[ \begin{array}{c}
\theta_0 \\
\theta_1
\end{array} \right]$
In the given dataset, $X$ has dimensions $m \times 1$ because there is only 1 input variable
$\theta$ has dimension $2\times 1$
Append a column vector of all 1's to X
New X has dimensions $m\times 2$
$h(x) = X \times \theta$ has dimensions $m\times 1$
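As a minimal NumPy sketch (illustrative variable names, not code from the lecture), the vectorized hypothesis for the one-variable case can be computed as:
import numpy as np
x = np.array([2104., 1416., 1534.])          # m = 3 example input values
X = np.column_stack([np.ones(len(x)), x])    # prepend the column of 1's -> shape (m, 2)
theta = np.array([[0.5], [0.1]])             # shape (2, 1)
h = X @ theta                                # h(x) = X * theta, shape (m, 1)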
<br>
Linear Regression with multiple variables
<img style="float: right;" src="images/02_03.png" width=300>
Model $h(x) = \theta_0 + \theta_1 x_1 + \theta_2 x_2 .... + \theta_n x_n$
The dimensions of $X$ are $m\times n$
$X = \left[ \begin{array}{cccc}
x_1^1 & x_1^2 & \cdots & x_1^{n} \\
x_2^1 & x_2^2 & \cdots & x_2^{n} \\
x_3^1 & x_3^2 & \cdots & x_3^{n} \\
\vdots & \vdots & \ddots & \vdots \\
x_{m}^1 & x_{m}^2 & \cdots & x_{m}^{n}
\end{array} \right]$
$\theta$ has dimension $(n+1)\times 1$
$\theta = \left[ \begin{array}{c}
\theta_0 \\
\theta_1 \\
\theta_2 \\
\vdots \\
\theta_{n}
\end{array} \right]$
<br>
- Append a column vector of all 1's to X
- Now X has dimensions $m\times (n+1)$
$X = \left[ \begin{array}{ccccc}
1 & x_1^1 & x_1^2 & \cdots & x_1^{n} \\
1 & x_2^1 & x_2^2 & \cdots & x_2^{n} \\
1 & x_3^1 & x_3^2 & \cdots & x_3^{n} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
1 & x_{m}^1 & x_{m}^2 & \cdots & x_{m}^{n}
\end{array} \right]$
where $x_i$ is $i^{th}$ sample, e.g.
$x_2 = [ \begin{array}{cc} 4.9 & 3.0 & 1.4 & 0.2 \end{array}]$
and $x_i^{j}$ is the value of feature $j$ in the $i^{th}$ training example, e.g. $x_2^3=1.4$
$h(x) = X \times \theta$ has dimensions $m\times 1$
<br>
<br>
<br>
Cost Function
Cost function = J($\theta$) = $\frac{1}{2m}\sum_{i=1}^{m} (h(x^i) - y^i)^2$
where $h(x) = \theta_0 + \theta_1 x_1 + \theta_2 x_2 .... + \theta_n x_n$
<img style="float: center;" src="images/03_02.png", width=300>
<br>
<br>
Gradient Descent
Cost function:
J($\theta$) = $\frac{1}{2m}\sum_{i=1}^{m} (h(x^i) - y^i)^2$
Gradient descent equation:
$\theta_j := \theta_j - \alpha \frac{\partial}{\partial \theta_j} J(\theta_0, \theta_1)$
<br>
Replacing J($\theta$) for each j
$\begin{align} & \text{repeat until convergence:} \; \lbrace \newline \; & \theta_0 := \theta_0 - \alpha \frac{1}{m} \sum\limits_{i=1}^{m} (h_\theta(x_{i}) - y_{i}) \cdot x^0_{i}\newline \; & \theta_1 := \theta_1 - \alpha \frac{1}{m} \sum\limits_{i=1}^{m} (h_\theta(x_{i}) - y_{i}) \cdot x^1_{i} \newline \; & \theta_2 := \theta_2 - \alpha \frac{1}{m} \sum\limits_{i=1}^{m} (h_\theta(x_{i}) - y_{i}) \cdot x^2_{i} \newline & \cdots \newline \rbrace \end{align}$
<br>
or more generally
$\begin{align}& \text{repeat until convergence:} \; \lbrace \newline \; & \theta_j := \theta_j - \alpha \frac{1}{m} \sum\limits_{i=1}^{m} (h_\theta(x_{i}) - y_{i}) \cdot x^j_{i} \; & \text{for j := 0...n}\newline \rbrace\end{align}$
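A minimal vectorized sketch of this update rule (assuming X of shape (m, n+1) including the 1's column, y of shape (m, 1), theta of shape (n+1, 1), and alpha, iterations, m defined as above):
for it in range(iterations):
    grad = (1.0/m) * X.T @ (X @ theta - y)   # gradient of J(theta), shape (n+1, 1)
    theta = theta - alpha * grad             # simultaneous update of every theta_j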
<br>
<img style="float: left;" src="images/03_04.gif">
Speed up gradient descent
Tricks to make gradient descent converge faster to optimal value
Keep each of our input values in roughly the same range.
$\theta$ will descend quickly on small ranges and slowly on large ranges.
$\theta$ will oscillate inefficiently down to the optimum when the variables are very uneven.
Aim is to have:
$-1 \le x^i \le 1$
or
$-0.5 \le x^i \le 0.5$
<br>
Feature Scaling
Divide the values of a feature by its range
$x^i = \frac{x^i}{\max(x^i) - \min(x^i)}$
<img style="float: center;" src="images/05_06.png">
Mean Normalization
Bring mean of each feature to zero
$x^i = x^i - \mu^i$
where $\mu^i$ is the mean of feature $i$
Combining both
$x^i = \frac{x^i - \mu^i}{\max(x^i) - \min(x^i)}$
or
$x^i = \frac{x^i - \mu^i}{\rho^i}$
where $\rho^i$ is standard deviation of feature $i$
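A short NumPy sketch of the combined scaling (X_raw is an illustrative m x n matrix of un-scaled features):
mu = X_raw.mean(axis=0)
sigma = X_raw.std(axis=0)    # or X_raw.max(axis=0) - X_raw.min(axis=0) for range scaling
X_scaled = (X_raw - mu) / sigma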
<br>
Learning Rate $\alpha$
Appropriate $\alpha$ value will speed up gradient descent.
If $\alpha$ is too small: slow convergence.
If $\alpha$ is too large: may not decrease on every iteration and thus may not converge.
<img src="images/05_01.png">
For implementation purposes, try out different values of $\alpha$ e.g. 0.001, 0.003, 0.01, 0.03, 0.1 and plot $J(\theta)$ with respect to iterations.
Choose the one that makes gradient descent converge quickly.
<br>
Automatic Convergence Test
Plot $J(\theta)$ vs iterations.
$J(\theta)$ should decrease on each iteration.
If $J(\theta)$ decreases by a very small value in an iteration, you may have reached the optimal value.
<img style="float: left;" src="images/05_02.png">
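A tiny sketch of such a check (j_history is the list of cost values collected during gradient descent, as in the code below; the threshold is an illustrative choice):
tol = 1e-3
if len(j_history) > 1 and abs(j_history[-2] - j_history[-1]) < tol:
    print('gradient descent has (approximately) converged')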
<br>
<br>
Linear Regression with Multiple Variables Example
Read data
End of explanation
dataframe.describe()
#Quick visualize data
plt.grid(True)
plt.xlim([-1,5000])
dummy = plt.hist(dataframe["size"],label = 'Size')
dummy = plt.hist(dataframe["bedrooms"],label = 'Bedrooms')
plt.title('Clearly we need feature normalization.')
plt.xlabel('Column Value')
plt.ylabel('Counts')
dummy = plt.legend()
mean_size = dataframe["size"].mean()
std_size = dataframe["size"].std()
mean_bed = dataframe["bedrooms"].mean()
std_bed = dataframe["bedrooms"].std()
dataframe["size"] = (dataframe["size"] - mean_size)/std_size
dataframe["bedrooms"] = (dataframe["bedrooms"] - mean_bed)/std_bed
dataframe.describe()
# reassign X
# assign X
X = np.array(dataframe[['size','bedrooms']])
X = np.insert(X,0,1,axis=1)
#Quick visualize data
plt.grid(True)
plt.xlim([-5,5])
dummy = plt.hist(dataframe["size"],label = 'Size')
dummy = plt.hist(dataframe["bedrooms"],label = 'Bedrooms')
plt.title('Features scaled and normalized.')
plt.xlabel('Column Value')
plt.ylabel('Counts')
dummy = plt.legend()
# assign X and y
X = np.array(dataframe[['size','bedrooms']])
y = np.array(dataframe[['price']])
m = y.size # number of training examples
# insert all 1's column for theta_0
X = np.insert(X,0,1,axis=1)
# initialize theta
# initial_theta = np.zeros((X.shape[1],1))
initial_theta = np.random.rand(X.shape[1],1)
initial_theta
X.shape
initial_theta.shape
Explanation: Feature Scaling and Mean Normalization
End of explanation
iterations = 1500
alpha = 0.1
Explanation: Initialize Hyper Parameters
End of explanation
def h(X, theta): #Linear hypothesis function
hx = np.dot(X,theta)
return hx
Explanation: Model/Hypothesis Function
End of explanation
def computeCost(theta,X,y): #Cost function
    '''
    theta is an n-dimensional vector of theta values
    X is a matrix with n columns and m rows
    y is a matrix with m rows and 1 column
    '''
#note to self: *.shape is (rows, columns)
return float((1./(2*m)) * np.dot((h(X,theta)-y).T,(h(X,theta)-y)))
#Test that running computeCost with 0's as theta returns 65591548106.45744:
initial_theta = np.zeros((X.shape[1],1)) #(theta is a vector with n rows and 1 columns (if X has n features) )
print (computeCost(initial_theta,X,y))
Explanation: Cost Function
End of explanation
#Actual gradient descent minimizing routine
def gradientDescent(X, theta_start = np.zeros(2)):
    '''
    theta_start is an n-dimensional vector of the initial theta guess
    X is a matrix with n columns and m rows
    '''
theta = theta_start
j_history = [] #Used to plot cost as function of iteration
theta_history = [] #Used to visualize the minimization path later on
for meaninglessvariable in range(iterations):
tmptheta = theta
# append for plotting
j_history.append(computeCost(theta,X,y))
theta_history.append(list(theta[:,0]))
#Simultaneously updating theta values
for j in range(len(tmptheta)):
tmptheta[j] = theta[j] - (alpha/m)*np.sum((h(X,theta) - y)*np.array(X[:,j]).reshape(m,1))
theta = tmptheta
return theta, theta_history, j_history
Explanation: Gradient Descent Function
End of explanation
#Actually run gradient descent to get the best-fit theta values
theta, thetahistory, j_history = gradientDescent(X,initial_theta)
theta
Explanation: Run Gradient Descent
End of explanation
plt.plot(j_history)
plt.title("Convergence of Cost Function")
plt.xlabel("Iteration number")
plt.ylabel("Cost function")
plt.show()
Explanation: Plot Convergence
End of explanation
dataframe.head()
x_test = np.array([1,0.130010,-0.22367])
print("$%0.2f" % float(h(x_test,theta)))
hx = h(X, theta)
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
fig.set_size_inches(12.5, 7.5)
ax = fig.add_subplot(111, projection='3d')
ax.scatter(xs=dataframe['size'], ys=dataframe['bedrooms'], zs=dataframe['price'])
ax.set_ylabel('bedrooms'); ax.set_xlabel('size'); ax.set_zlabel('price')
# ax.plot(xs=np.array(X[:,0],dtype=object).reshape(-1,1), ys=np.array(X[:,1],dtype=object).reshape(-1,1), zs=hx, color='green')
ax.plot(X[:,0], X[:,1], np.array(hx[:,0]), label='fitted line', color='green')
# ax.view_init(20, -165)
plt.show()
Explanation: Predict output using trained model
End of explanation
<END_TASK> |
15,901 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
chart module
This module relies on the pygal library, so the returned charts are instances of pygal.chart. See options at
pygal site
I made a JavaScript 'equivalent'
Step1: Time Series
Step2: Chart series
Step3: Chart seriesByRegion | <ASSISTANT_TASK:>
Python Code:
import ee
from geetools import ui
test_site = ee.Geometry.Point([-71, -42])
test_feat = ee.Feature(test_site, {'name': 'test feature'})
test_featcol = ee.FeatureCollection([
test_feat,
test_feat.buffer(100).set('name', 'buffer 100'),
test_feat.buffer(1000).set('name', 'buffer 1000')
])
Explanation: chart module
This module relies on the pygal library, so the returned charts are instances of pygal.chart. See options at
pygal site
I made a JavaScript 'equivalent': https://code.earthengine.google.com/b2922b860b85c1120250794fb82dfda8
End of explanation
years = ee.List([2015, 2016, 2017, 2018])
col = ee.ImageCollection('COPERNICUS/S2').filterBounds(test_site)
def make_time_series(year):
''' make a time series from year's list '''
eefilter = ee.Filter.calendarRange(year, field='year')
filtered = col.filter(eefilter)
return filtered.mean().set('system:time_start', ee.Date.fromYMD(year, 1, 1).millis())
time_series = ee.ImageCollection(years.map(make_time_series))
Explanation: Time Series
End of explanation
chart_ts = ui.chart.Image.series(**{
'imageCollection': time_series,
'region': test_site,
'scale': 10,
'bands': ['B1', 'B2', 'B3'],
# 'xProperty': 'B4', # You can use a band too!
'labels': ['band B1', 'B2 band', 'this is B3']
})
chart_ts.render_widget(width='50%')
Explanation: Chart series
End of explanation
chart_ts_region = ui.chart.Image.seriesByRegion(**{
'imageCollection': time_series,
'reducer': ee.Reducer.median(),
'regions': test_featcol,
'scale': 10,
'band': 'B11',
'seriesProperty': 'name'
})
chart_ts_region.render_widget(height=500)
Explanation: Chart seriesByRegion
End of explanation
<END_TASK> |
15,902 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Truncated potentials for long-range electrostatics
Björn Stenqvist and Mikael Lund, Div. of Theoretical Chemistry, Lund University, Sweden, 2016
In this notebook we test the validity of a variety of Wolf-inspired potentials and compare with a default Ewald summation on a simple box of water.
Install prerequisites using conda
Step2: Create initial box of water
First create a single water molecule pdb file, then use this to fill up a predefined box using the command line tool packmol. In this example, $N$ and $L$ are taken from the Gromacs spc216.gro file.
Step4: Create OpenMM System classes for a variety of long-range correction schemes
Here the idea is to create all the different setups and add them to the systemlist dictionary. Ewald summation (and PME) can be set up using the default NonbondedForce class, while CustomNonbondedForce must be used for custom pair potentials as used in Wolf, Stenqvist etc.
For more information on how to control force objects and simulation, check,
OpenMM python API.
Lennard-Jones + Ewald at http
Step5: Here we perform the actual MD simulation, incl. minimization of all defined systems
Step6: Oxygen-oxygen radial distribution function
Step7: Average potential energy
Step8: Static dielectric constant
The exact formula for $\epsilon_r$ is not given in the mdtraj documentation, but can be found in the source code on github,
https
Step9: Gromacs analysis
We now generate a topology (top/tpr) file for Gromacs in order to use their analysis tools, in particular for dipole correlations. These files are identical for all electrostatic schemes.
NOTE
Step10: Convert trajectory to XTC format and calculate dipolar correlations
See description of the -g option in gmx dipoles, relating to http | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from simtk.openmm.app import *
from simtk.openmm import *
from simtk.unit import *
from sys import stdout
import matplotlib, matplotlib.pyplot as plt
import pandas as pd
from io import StringIO
import numpy as np, os
matplotlib.rcParams.update({'font.size': 12})
# simulation parameters
nsteps = 50000 # number of MD steps each of 2 fs
nwater = 1000 # number of water molecules
cutoff = 1.2*nanometers # pair potential cutoff
platform = 'OpenCL' # 'OpenCL', 'CUDA', 'CPU'
rho = 216/18.6206**3 # density from gromacs spc216.gro
boxlen = (nwater/rho)**(1/3)
print('box length = ', boxlen*angstrom)
print('half box = ', boxlen*angstrom/2)
Explanation: Truncated potentials for long-range electrostatics
Björn Stenqvist and Mikael Lund, Div. of Theoretical Chemistry, Lund University, Sweden, 2016
In this notebook we test the validity of a variety of Wolf-inspired potentials and compare with a default Ewald summation on a simple box of water.
Install prerequisites using conda:
bash
$ conda config --add channels omnia
$ conda install -c omnia openmm mdtraj packmol
Also, Gromacs analysis tools (gmx) must be installed for parts of the analysis.
End of explanation
%%writefile hoh.pdb
CRYST1 30.000 30.000 30.000 90.00 90.00 90.00 P 1 1
ATOM 1 OW HOH A 1 27.552 11.051 7.172 1.00 0.00 O
ATOM 2 HW1 HOH A 1 27.900 10.721 8.050 1.00 0.00 H
ATOM 3 HW2 HOH A 1 26.606 11.355 7.281 1.00 0.00 H
END
# write input file for packmol
PACKMOL_INPUT = """
tolerance %f
filetype pdb
output %s
# hoh will be put in a box
# defined by the minimum coordinates x, y and z = 0. 0. 0. and maximum
# coordinates box_size box_size box_size That is, they will be put in a cube of side
# box_size (the keyword "inside cube 0. 0. 0. box_size") could be used as well.
structure %s
number %d
inside box 0. 0. 0. %f %f %f
add_box_sides 0.0
end structure
""" % (2., 'water.pdb', 'hoh.pdb', nwater, boxlen, boxlen, boxlen)
!echo '$PACKMOL_INPUT' > packmol_input.txt
!packmol < packmol_input.txt > /dev/null
Explanation: Create initial box of water
First create a single water molecule pdb file, then use this to fill up a predefined box using the command line tool packmol. In this example, $N$ and $L$ are taken from the Gromacs spc216.gro file.
End of explanation
elec_to_kJmol = (
constants.elementary_charge**2 *
AVOGADRO_CONSTANT_NA / (4*np.pi*1.0*8.854187817e-12
* constants.farad/constants.meter)).value_in_unit(kilojoule_per_mole*nanometer)
def findForce(system, forcetype, add=True):
    ''' Finds a specific force in the system force list - added if not found. '''
for force in system.getForces():
if isinstance(force, forcetype):
return force
if add==True:
system.addForce(forcetype())
return findForce(system, forcetype)
return None
def make_qpotential_system(topology, Rc=0.9*nanometers, moments=1000):
''' return a q-potential system (TO '''
ff = ForceField('spce-custom.xml') # this will create a CustomNonbondedForce
system = ff.createSystem(
topology, nonbondedMethod=CutoffPeriodic,
nonbondedCutoff=Rc, constraints=HBonds, rigidWater=True)
def qPochhammerSymbol( Rc, moments ):
if isinstance(Rc, Quantity):
Rc = Rc / nanometer # strip unit
qP = 1.0
r = np.linspace(0, Rc, 5000)
for i in range( moments ):
qP *= (1 - (r/Rc)**(i+1) )
return qP
qP = Continuous1DFunction( qPochhammerSymbol(Rc, moments), 0*nanometers, Rc)
nonbonded = findForce(system, CustomNonbondedForce)
nonbonded.addTabulatedFunction( 'qP', qP ) # 'qP(r)' can now be used in energy function
nonbonded.addGlobalParameter( 'f', elec_to_kJmol ) # convert to kJ/mol
nonbonded.setEnergyFunction(
'f * charge1 * charge2 * qP(r)/r' \
' + 4 * epsilon * ( (sigma/r)^12 - (sigma/r)^6 )' \
' ; sigma = 0.5 * ( sigma1+sigma2 ); epsilon = sqrt( epsilon1*epsilon2 )'
)
print('qpot')
print(' periodic boundaries: ', nonbonded.usesPeriodicBoundaryConditions())
print(' switching function: ', nonbonded.getUseSwitchingFunction())
print(' long-range correction:', nonbonded.getUseLongRangeCorrection())
print(' cutoff distance: ', nonbonded.getCutoffDistance())
print(' energy function: ', nonbonded.getEnergyFunction())
return system
def make_grekiss_system(topology, Rc=0.9*nanometers):
''' return a grekis system '''
ff = ForceField('spce-custom.xml') # this will create a CustomNonbondedForce
system = ff.createSystem(
topology, nonbondedMethod=CutoffPeriodic,
nonbondedCutoff=Rc, constraints=HBonds, rigidWater=True)
nonbonded = findForce(system, CustomNonbondedForce)
nonbonded.addGlobalParameter( 'Rc', Rc ) # 'Rc' can now be used in energy function
nonbonded.addGlobalParameter( 'f', elec_to_kJmol ) # 'lB' bjerrum length in nm and kJ/mol
nonbonded.setEnergyFunction(
'f * charge1 * charge2 * 1/r * ( 1 - 1.75*r/Rc + 5.25*(r/Rc)^5 - 7*(r/Rc)^6 + 2.5*(r/Rc)^7 )' \
' + 4 * epsilon * ( (sigma/r)^12 - (sigma/r)^6 )' \
' ; sigma = 0.5 * (sigma1+sigma2); epsilon = sqrt( epsilon1*epsilon2 )'
)
print('fanourgakis')
print(' periodic boundaries: ', nonbonded.usesPeriodicBoundaryConditions())
print(' switching function: ', nonbonded.getUseSwitchingFunction())
print(' long-range correction:', nonbonded.getUseLongRangeCorrection())
print(' cutoff distance: ', nonbonded.getCutoffDistance())
print(' energy function: ', nonbonded.getEnergyFunction())
return system
def make_yonezawa_system(topology, Rc=0.9*nanometers):
''' return a yonezawa system '''
ff = ForceField('spce-custom.xml') # this will create a CustomNonbondedForce
system = ff.createSystem(
topology, nonbondedMethod=CutoffPeriodic,
nonbondedCutoff=Rc, constraints=HBonds, rigidWater=True)
nonbonded = findForce(system, CustomNonbondedForce)
nonbonded.addGlobalParameter( 'Rc', Rc ) # 'Rc' can now be used in energy function
nonbonded.addGlobalParameter( 'f', elec_to_kJmol ) # 'lB' bjerrum length in nm and kJ/mol
nonbonded.setEnergyFunction(
'f * charge1 * charge2 * ( 1/r - 1/Rc + 1/Rc^2 * (r-Rc) )' \
' + 4 * epsilon * ( (sigma/r)^12 - (sigma/r)^6 )' \
' ; sigma = 0.5 * ( sigma1+sigma2 ); epsilon = sqrt( epsilon1 * epsilon2 )'
)
print('yonezawa')
print(' periodic boundaries: ', nonbonded.usesPeriodicBoundaryConditions())
print(' switching function: ', nonbonded.getUseSwitchingFunction())
print(' long-range correction:', nonbonded.getUseLongRangeCorrection())
print(' cutoff distance: ', nonbonded.getCutoffDistance())
print(' energy function: ', nonbonded.getEnergyFunction())
return system
def make_ewald_system(topology, Rc=0.9*nanometers, method=Ewald, ljcorr=False):
''' returns an Ewald system '''
ff = ForceField('spce.xml') # this will create a NonbondedForce
system = ff.createSystem(
topology, nonbondedMethod=method,
nonbondedCutoff=Rc, constraints=HBonds, rigidWater=True)
nonbonded = findForce(system, NonbondedForce)
nonbonded.setUseDispersionCorrection( ljcorr )
print('ewald')
print(' err. tolerance: ', nonbonded.getEwaldErrorTolerance())
print(' LJ switching function:', nonbonded.getUseSwitchingFunction())
print(' LJ correction: ', nonbonded.getUseDispersionCorrection())
print(' cutoff distance: ', nonbonded.getCutoffDistance())
return system
pdb = PDBFile('water.pdb')
# here we define our list of system incl. plotting properties. All files from simulation
# and analysis will be prefixed with 'qpot.', 'ewald.', etc.
systemlist = {
'yonezawa': {
'system': make_yonezawa_system( pdb.topology, Rc=cutoff ), 'npt': False,
'run': False, 'color': 'blue', 'label': 'Yonezawa'
},
'grekiss': {
'system': make_grekiss_system( pdb.topology, Rc=cutoff ), 'npt': False,
'run': False, 'color': 'green', 'label': 'Fanourgakis'
},
'qpot' : {
'system': make_qpotential_system( pdb.topology, Rc=cutoff ), 'npt': False,
'run': False, 'color': 'red', 'label': '$q$ ($n=\infty$)'
},
'qpot2' : {
'system': make_qpotential_system( pdb.topology, Rc=cutoff, moments=2 ), 'npt': False,
'run': False, 'color': 'orange', 'label': '$q$ ($n=2$)'
},
'qpot3' : {
'system': make_qpotential_system( pdb.topology, Rc=cutoff, moments=3 ), 'npt': False,
'run': False, 'color': 'magenta', 'label': '$q$ ($n=3$)'
},
'ewald': {
'system': make_ewald_system( pdb.topology, Rc=cutoff ), 'npt': False,
'run': False, 'color': 'black', 'label': 'Ewald'
}
}
Explanation: Create OpenMM System classes for a variety of long-range correction schemes
Here the idea is to create all the different setups and add them to the systemlist dictionary. Ewald summation (and PME) can be set up using the default NonbondedForce class, while CustomNonbondedForce must be used for custom pair potentials as used in Wolf, Stenqvist etc.
For more information on how to control force objects and simulation, check,
OpenMM python API.
Lennard-Jones + Ewald at http://docs.openmm.org/7.0.0/userguide/theory.html#nonbondedforce
Long range correction beyond cut-off: http://docs.openmm.org/7.0.0/userguide/theory.html#customnonbondedforce
TODO: Smoothen LJ part at $R_c$. Use switching function?
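One possible way to address this TODO, sketched here as an assumption rather than a tested setting (OpenMM's CustomNonbondedForce exposes a built-in switching function; note it would switch the whole custom expression, so the LJ term would need to be split into its own force to smooth only that part, and the 0.9 nm switching distance is an illustrative choice):
# hedged sketch: smooth the custom pair potential near the cutoff
nonbonded = findForce(system, CustomNonbondedForce)
nonbonded.setUseSwitchingFunction(True)
nonbonded.setSwitchingDistance(0.9*nanometers)   # start switching before the 1.2 nm cutoff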
End of explanation
for name, prop in systemlist.items(): # loop over systems
if prop['run']:
print(name)
if prop['npt']: # NVT -> NPT ensemble ?
print(' adding barostat.')
barostat = MonteCarloBarostat(1.0*bar, 298.15*kelvin, 25)
system.addForce(barostat)
integrator = LangevinIntegrator( 298.15*kelvin, 1.0/picoseconds, 2*femtoseconds )
integrator.setConstraintTolerance(0.00001)
_platform = Platform.getPlatformByName( platform ) # CPU, OpenCL
sim = Simulation(pdb.topology, prop['system'], integrator, _platform)
sim.context.setPositions(pdb.positions) # set particle positions
if os.path.isfile( name+'.chk' ):
with open( name+'.chk', 'rb') as f:
print(' loading restart file.')
sim.context.loadCheckpoint( f.read() )
else:
print(' minimizing energy...')
sim.reporters.clear()
%time sim.minimizeEnergy( tolerance=50*kilojoule/mole, maxIterations=1000 )
sim.context.setVelocitiesToTemperature( 298.15*kelvin ) # initial random velocities
print(' running Production...')
sim.reporters.clear()
sim.reporters.append( DCDReporter( name+'.dcd', 1000) )
sim.reporters.append( StateDataReporter(name+'.energy', 1000, step=True, potentialEnergy=True,
temperature=True, density=True) )
sim.reporters.append( StateDataReporter(stdout, 1000, step=True, potentialEnergy=True,
temperature=True, density=True, separator='\t',
progress=True,
totalSteps = nsteps) )
%time sim.step( nsteps )
with open( name+'.chk', 'wb') as f:
print(' saving restart file.')
f.write( sim.context.createCheckpoint() )
# save final configuration to PDB file
# todo: something is fishy here... water molecules are outside the box
# in pdb although the trajectory looks fine. Å->nm problem?
positions = sim.context.getState( getPositions=True ).getPositions()
PDBFile.writeFile(sim.topology, positions, open( name+'.pdb', 'w'))
print()
Explanation: Here we perform the actual MD simulation, incl. minimization of all defined systems
End of explanation
import mdtraj as md
for name, prop in systemlist.items():
print(name)
traj = md.load(name+'.dcd', top=name+'.pdb')
sel = traj.top.select('name O')
OOpairs = traj.top.select_pairs('name O', 'name O')
rdf = md.compute_rdf( traj, pairs=OOpairs, bin_width=0.005, r_range=[0.2, boxlen/2/10] )
plt.plot( rdf[0], rdf[1], label=prop['label'], color=prop['color'], lw=1)
plt.xlim(0.2, 0.8)
plt.xlabel(r'$r$/nm')
plt.ylabel(r'$g(r)$')
plt.legend(loc=0, frameon=False, fontsize='medium')
Explanation: Oxygen-oxygen radial distribution function
End of explanation
for name, prop in systemlist.items():
step, Upot, temp, density = np.loadtxt(name+'.energy', skiprows=1, delimiter=',', unpack=True)
Upot = Upot / pdb.topology.getNumResidues() # -> energy per water molecule
plt.plot( step, Upot, label=prop['label'], color=prop['color'], lw=0.5 )
plt.plot( [0,step.max()], [Upot.mean(), Upot.mean()], color=prop['color'], ls='--', lw=0.5 )
prop['energy'] = Upot.mean()
plt.xlabel(r'steps')
plt.ylabel(r'$U_{pot}/N$ (kJ/mol)')
plt.legend(loc=1, frameon=True, fontsize='small')
Explanation: Average potential energy
End of explanation
def getChargeVector( force, stripunit=True ):
''' Extract particle charge array from force object if available
Note that this function assumes that charge is located at parameter
position 0 which is default in NonbondedForce while it can be anywhere
in CustomNonbondedForce, following the order in which
parameters were added in python/xml.
'''
if isinstance(force, (NonbondedForce, CustomNonbondedForce) ):
if 'getParticleParameters' in dir(force):
chargelist = []
for i in range( force.getNumParticles() ):
charge = force.getParticleParameters( i )[ 0 ]
if stripunit:
if isinstance(charge, Quantity):
charge = charge / elementary_charge # strip unit
chargelist.append( charge )
return chargelist
return None
for name, prop in systemlist.items():
for force in prop['system'].getForces():
if isinstance(force, (NonbondedForce, CustomNonbondedForce) ):
charges = getChargeVector( force )
traj = md.load(name+'.dcd', top=name+'.pdb')
l = len(traj)
l2= int(l/2)
eps12= md.static_dielectric( traj[1:l], charges, temperature=298.15) # all
eps1 = md.static_dielectric( traj[1:l2], charges, temperature=298.15) # 1st half
eps2 = md.static_dielectric( traj[l2:l], charges, temperature=298.15) # 2nd half
prop['diel'] = eps12
print(name, ': diel. const.', eps12, eps1, eps2)
pd.DataFrame(systemlist).T
Explanation: Static dielectric constant
The exact formula for $\epsilon_r$ is not given in the mdtraj documentation, but can be found in the source code on github,
https://github.com/mdtraj/mdtraj/blob/master/mdtraj/geometry/thermodynamic_properties.py#L90
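For reference, the standard dipole-fluctuation expression that such a calculation is based on (stated here as an assumption about mdtraj's conventions - check the linked source) is
$\epsilon_r = 1 + \dfrac{\langle M^2 \rangle - \langle M \rangle^2}{3 \epsilon_0 V k_B T}$
where $M$ is the total dipole moment of the box, $V$ the volume and $T$ the temperature.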
End of explanation
%%bash
rm -fR
echo -n "6\n" | gmx -quiet -nobackup pdb2gmx -f water.pdb -o water.gro -p water.top -n water.ndx -water spce
gmx -quiet -nobackup grompp -f water.mdp -c water.gro -p water.top -o water.tpr -maxwarn 10
Explanation: Gromacs analysis
We now generate a topology (top/tpr) file for Gromacs in order to use their analysis tools, in particular for dipole correlations. These files are identical for all electrostatic schemes.
NOTE: Depending on the size of the water box, grompp may complain that the cut-off is too short. Fix this by increasing the verlet tolerance. It will not matter for the analysis, but consider making a slightly larger system.
End of explanation
for name, prop in systemlist.items():
print(name)
traj = md.load(name+'.dcd', top=name+'.pdb')
traj.save_xtc(name+'.xtc')
!echo -n "0\n" | gmx -quiet -nobackup dipoles -f $name'.xtc' -s water.tpr -temp 298.15 -g $name'.gkr.xvg'
print()
fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(nrows=2, ncols=2, figsize=(8, 7))
for name, prop in systemlist.items():
r, G, cos, hOO, gOO, energy = np.loadtxt(name+'.gkr.xvg', skiprows=27, unpack=True)
ax1.plot(r, G, label=prop['label'], color=prop['color'], lw=1 )
ax1.set_xlabel('$r$/nm')
ax1.set_ylabel('$G_k(r)$')
ax2.plot(r, cos, label=prop['label'], color=prop['color'], lw=1 )
ax2.set_xlabel('$r$/nm')
ax2.set_ylabel('cosine something...')
ax2.set_xlim(0.2, 0.8)
ax2.set_ylim(-0.1, 0.55)
ax3.plot(r, hOO, label=prop['label'], color=prop['color'], lw=1 )
ax3.set_xlabel('$r$/nm')
ax3.set_ylabel('hOO')
ax3.set_xlim(0.2, 0.8)
ax4.plot(r, gOO, label=prop['label'], color=prop['color'], lw=1 )
ax4.set_xlabel('$r$/nm')
ax4.set_ylabel('gOO')
ax4.set_xlim(0.2, 0.8)
ax2.legend(loc=0, frameon=False, fontsize='medium')
plt.tight_layout()
Explanation: Convert trajectory to XTC format and calculate dipolar correlations
See description of the -g option in gmx dipoles, relating to http://manual.gromacs.org/programs/gmx-dipoles.html (Nymand/Linse).
End of explanation
<END_TASK> |
15,903 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
A simple method for segmenting rectilinear crop plots
The following outlines the method in https
Step1: Read in a test image subset. Replace with your own if required; parameters will need to be adjusted, and, needless to say, complete segmentation is not guaranteed - it will be dependent upon your image.
Step2: The classical Canny edge detection.
An experimental GUI is below; we follow Canny's original rule of thumb, with the low threshold set to the high threshold divided by 2.
From this we can derive the parameters for the crop segmentation.
Step3: Phase congruency edge detection
See https
Step4: Segment the plots
The function houghseg segments the plots. In this case, phase congruency is the 'cleaner' edge detector, so it is used.
We are interested in the larger rectangular plots here so we use the min_area param to constrain this. | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import cv2
from geospatial_learn import raster
from geospatial_learn.utilities import do_phasecong, houghseg
from math import ceil
import matplotlib.pyplot as plt
from skimage.color import rgb2gray, label2rgb
from skimage.feature import canny
from skimage.exposure import rescale_intensity
Explanation: A simple method for segmenting rectilinear crop plots
The following outlines the method in https://www.frontiersin.org/articles/10.3389/fpls.2020.591886/full, employing simple image processing techniques to extract crop plots. This is easily implemented using scikit-image or OpenCV, but it is also available as a function within this library.
The method employs edge detection, Hough line detection and then segment reduction to arrive at the plots of interest. The merit is the simplicity, though it is somewhat inflexible, being limited to rectilinear plots.
Needless to say, complete segmentation is not guaranteed - it will be dependent upon your image.
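A minimal sketch of the edge-to-lines stage using scikit-image (an illustration of the general idea only, not the geospatial_learn implementation; parameter values are arbitrary):
from skimage.transform import probabilistic_hough_line
edges = canny(gray, sigma=2)   # or the phase congruency edge map
lines = probabilistic_hough_line(edges, threshold=10, line_length=50, line_gap=3)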
End of explanation
inRas = 'figures/weetestorig.tif'
img = raster.raster2array(inRas, bands=[1,2,3])
# for testing below
gray = rgb2gray(img)
plt.imshow(img)
plt.show()
Explanation: Read in a test image subset. Replace with your own if required; parameters will need to be adjusted, and, needless to say, complete segmentation is not guaranteed - it will be dependent upon your image.
End of explanation
def icanny(high_threshold, *args, **kwargs): #...do it
inIm = gray#.astype(np.float)
low_threshold = high_threshold / 2
edge = canny(inIm, low_threshold=low_threshold, high_threshold=high_threshold, *args, **kwargs)
# Comment the first 2 lines if you want more space
plt.figure(figsize=(15,15))
plt.subplot(121)
plt.imshow(img)
plt.subplot(122)
plt.imshow(edge)
plt.show()
# return edge
from ipywidgets import widgets
cTester = widgets.interact(icanny,
#k=widgets.IntSlider(min=3, max=100, step=2, continuous_update=False),
sigma=widgets.IntSlider(min=0, max=100, step=1, continuous_update=False),
#low_threshold=widgets.IntSlider(min=0, max=255, step=1, continuous_update=False),
high_threshold=widgets.FloatSlider(min=0, max=1, step=0.01, continuous_update=False))
Explanation: The classical Canny edge detection.
An experimental GUI is below; we follow Canny's original rule of thumb, with the low threshold set to the high threshold divided by 2.
From this we can derive the parameters for the crop segmentation.
End of explanation
def iphase(*args, **kwargs):
plt.figure(figsize=(15,15))
plt.subplot(121)
plt.imshow(img)
plt.subplot(122)
edge = do_phasecong(gray, *args, **kwargs)
plt.imshow(edge)
plt.show()
from ipywidgets import widgets
cTester = widgets.interact(iphase,
sigma=widgets.IntSlider(min=0, max=50, step=1, continuous_update=False),
low_t=widgets.IntSlider(min=0, max=256, step=1, continuous_update=False),
hi_t=widgets.IntSlider(min=0, max=256, step=1, continuous_update=False))
Explanation: Phase congruency edge detection
See https://link.springer.com/article/10.1007/s004260000024
An alternative method for edges, which may or may not prove more useful, albeit at a reduced speed with the Python version utilised here.
End of explanation
outShp = 'mytest.shp'
segments = houghseg(inRas, outShp, edge='phase', sigma=4, min_area=4)
plt.figure(figsize=(15,15))
plt.subplot(121)
plt.imshow(img)
plt.subplot(122)
plt.imshow(segments, cmap='gray')
Explanation: Segment the plots
The function houghseg segments the plots. In this case, phase congruency is the 'cleaner' edge detector, so it is used.
We are interested in the larger rectangular plots here so we use the min_area param to constrain this.
End of explanation
<END_TASK> |
15,904 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Basic notebook to look at convergence of a 2D region in an FES. It will actually call sum_hills with the stride you set in cell one, graph the FES and overlay the regions used for the convergence check.
Step1: Graph the final FES and plot the two squares on top of it
Step2: The two functions below calculate the average free energy of a region by integrating over whichever boxes you defined above. Since the FES is discrete and points are equally spaced, this is trivially taken as a summation
Step3: Below this is all testing of different read-in options
Step4: Profiling speed of different read in options | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
import glob
import os
from matplotlib.patches import Rectangle
# define all variables for convergence script
# these will pass to the bash magic below used to call plumed sum_hills
dir="MetaD_converge" #where the intermediate fes will be stored
hills="other/HILLS" #your HILLS file from the simulation
finalfes='other/fes.dat' #the final fes.dat file
stride=1000
kT=8.314e-3*300 #throughout we convert to kcal, but the HILLS are assumed to be in GROMACS units (kJ)
## here is where you set the boxes to define convergence regions
C1=[-1.5,1.0] #center of box 1
C2=[1.0,-.5]
edge1=1.0 #edge of box1
edge2=1.0
%%bash -s "$dir" "$hills" "$stride" "$kT"
# calling sum hills and output to devnul
HILLSFILE=HILLS
rm -rf $1
mkdir $1
cp $2 $1
cd $1
plumed sum_hills --hills $HILLSFILE --kt $4 --stride $3 >& /dev/null
Explanation: Basic notebook to look at convergence of a 2D region in an FES. It will actually call sum_hills with the stride you set in cell one, graph the FES and overlay the regions used for the convergence check.
End of explanation
%matplotlib inline
#read the data in from a text file
fesdata = np.genfromtxt(finalfes,comments='#');
fesdata = fesdata[:,0:3]
#what was your grid size? this calculates it
dim=int(np.sqrt(np.size(fesdata)/3))
#some post-processing to be compatible with contourf
X=np.reshape(fesdata[:,0],[dim,dim],order="F") #order F was 20% faster than A/C
Y=np.reshape(fesdata[:,1],[dim,dim],order="F")
Z=np.reshape((fesdata[:,2]-np.min(fesdata[:,2]))/4.184,[dim,dim],order="F") #convert to kcal/mol
#what spacing do you want? assume units are in kJ/mol
spacer=1 #this means 1kcal/mol spacing
lines=20
levels=np.linspace(0,lines*spacer,num=(lines+1),endpoint=True)
fig=plt.figure(figsize=(8,6))
axes = fig.add_subplot(111)
xlabel='$\Phi$'
ylabel='$\Psi$'
plt.contourf(X, Y, Z, levels, cmap=plt.cm.bone,)
plt.colorbar()
axes.set_xlabel(xlabel, fontsize=20)
axes.set_ylabel(ylabel, fontsize=20)
currentAxis = plt.gca()
currentAxis.add_patch(Rectangle((C1[0]-edge1/2, C1[1]-edge1/2), edge1, edge1,facecolor='none',edgecolor='yellow',linewidth='3'))
currentAxis.add_patch(Rectangle((C2[0]-edge2/2, C2[1]-edge2/2), edge2, edge2,facecolor='none',edgecolor='yellow',linewidth='3'))
plt.show()
Explanation: Graph the final FES and plot the two squares on top of it
End of explanation
def diffNP(file):
#read the data in from a text file
# note - this is very slow
fesdata = np.genfromtxt(file,comments='#');
A=0.0
B=0.0
dim=np.shape(fesdata)[0]
for i in range(0, dim):
x=fesdata[i][0]
y=fesdata[i][1]
z=fesdata[i][2]
if x < C1[0]+edge1/2 and x > C1[0]-edge1/2 and y > C1[1]-edge1/2 and y < C1[1]+edge1/2:
A+=np.exp(-z/kT)
if x < C2[0]+edge2/2 and x > C2[0]-edge2/2 and y > C2[1]-edge2/2 and y < C2[1]+edge2/2:
B+=np.exp(-z/kT)
A=-kT*np.log(A)
B=-kT*np.log(B)
diff=(A-B)/4.184 #output in kcal
return diff
def diff(file):
kT=8.314e-3*300
A=0.0
B=0.0
f = open(file, 'r')
for line in f:
if line[:1] != '#':
line=line.strip()
if line:
columns = line.split()
x=float(columns[0])
y=float(columns[1])
z=float(columns[2])
if x < C1[0]+edge1/2 and x > C1[0]-edge1/2 and y > C1[1]-edge1/2 and y < C1[1]+edge1/2:
A+=np.exp(-z/kT)
if x < C2[0]+edge2/2 and x > C2[0]-edge2/2 and y > C2[1]-edge2/2 and y < C2[1]+edge2/2:
B+=np.exp(-z/kT)
f.close
A=-kT*np.log(A)
B=-kT*np.log(B)
diff=(A-B)/4.184
return diff
diffvec=None
rootdir = '/Users/jpfaendt/Learning/Python/ALA2_MetaD/MetaD_converge'
i=0
diffvec=np.zeros((1,2))
#the variable func defines which function you are going to call to read in your data files fes_*.dat
#func=diffNP uses the numpy read in (SLOW)
#func=diff streams in data from a text file
#to experience the difference, uncomment the print statements and run each way
func=diff
for infile in glob.glob( os.path.join(rootdir, 'fes_?.dat') ):
if i >= i:
diffvec.resize((i+1,2))
#print "current file is: " + infile
diffvec[i][0]=i*1.0
diffvec[i][1]=func(infile)
i+=1
for infile in glob.glob( os.path.join(rootdir, 'fes_??.dat') ):
if i >= i:
diffvec.resize((i+1,2))
#print "current file is: " + infile
diffvec[i][0]=i*1.0
diffvec[i][1]=func(infile)
i+=1
for infile in glob.glob( os.path.join(rootdir, 'fes_???.dat') ):
if i >= i:
diffvec.resize((i+1,2))
#print "current file is: " + infile
diffvec[i][0]=i*1.0
diffvec[i][1]=func(infile)
i+=1
fig = plt.figure(figsize=(6,6))
axes = fig.add_subplot(111)
xlabel='time (generic)'
ylabel='diff (A-B) (kcal/mol)'
axes.plot(diffvec[:,0],diffvec[:,1])
axes.set_xlabel(xlabel, fontsize=20)
axes.set_ylabel(ylabel, fontsize=20)
plt.show()
Explanation: The two functions below calculate the average free energy of a region by integrating over whichever boxes you defined above. Since the FES is discrete and points are equally spaced, this is trivially taken as a summation:
$F_A = -k_B T \ln \sum_{A} \exp\left(-F_{Ai}/k_B T\right)$
Don't forget that this is formally a free-energy plus some trivial constant but that the constant is equal for both regions $A$ and $B$ so that you will obtain the same free-energy difference irrespective of the reference point.
On the other hand, it doesn't make much sense to just use the arbitrary numbers coming from sum_hills, which are related only to the amount of aggregate bias produced in your simulation. This is why we reference the lowest point to zero on the contour plots.
I left both functions in as a teaching tool to show how slow np.genfromtxt is
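For comparison, a vectorized sketch of the same region average (assuming the fes file has x, y, z columns as above):
def diff_vectorized(file):
    x, y, z = np.loadtxt(file, comments='#', usecols=(0, 1, 2), unpack=True)
    inA = (np.abs(x - C1[0]) < edge1/2) & (np.abs(y - C1[1]) < edge1/2)
    inB = (np.abs(x - C2[0]) < edge2/2) & (np.abs(y - C2[1]) < edge2/2)
    A = -kT*np.log(np.sum(np.exp(-z[inA]/kT)))
    B = -kT*np.log(np.sum(np.exp(-z[inB]/kT)))
    return (A - B)/4.184   # kcal/mol, as in diff() above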
End of explanation
##
#read the data in from a text file using genfrom txt
fesdata = np.genfromtxt('MetaD_converge/fes_1.dat',comments='#');
kT=8.314e-3*300
A=0.0
B=0.0
dim=np.shape(fesdata)[0]
for i in range(0, dim):
x=fesdata[i][0]
y=fesdata[i][1]
z=fesdata[i][2]
if x < C1[0]+edge1/2 and x > C1[0]-edge1/2 and y > C1[1]-edge1/2 and y < C1[1]+edge1/2:
A+=np.exp(-z/kT)
if x < C2[0]+edge2/2 and x > C2[0]-edge2/2 and y > C2[1]-edge2/2 and y < C2[1]+edge2/2:
B+=np.exp(-z/kT)
A=-kT*np.log(A)
B=-kT*np.log(B)
diff=(A-B)/4.184
diff
##
#read the data in from a text file using read in commands
kT=8.314e-3*300
A=0.0
B=0.0
f = open('MetaD_converge/fes_1.dat', 'r')
for line in f:
if line[:1] != '#':
line=line.strip()
if line:
columns = line.split()
x=float(columns[0])
y=float(columns[1])
z=float(columns[2])
if x < C1[0]+edge1/2 and x > C1[0]-edge1/2 and y > C1[1]-edge1/2 and y < C1[1]+edge1/2:
A+=np.exp(-z/kT)
if x < C2[0]+edge2/2 and x > C2[0]-edge2/2 and y > C2[1]-edge2/2 and y < C2[1]+edge2/2:
B+=np.exp(-z/kT)
f.close
A=-kT*np.log(A)
B=-kT*np.log(B)
diff=(A-B)/4.184
diff
Explanation: Below this is all testing of different read-in options:
End of explanation
file='MetaD/fes.dat'
%timeit diffNP(file)
%timeit diff(file)
Explanation: Profiling speed of different read in options:
End of explanation
<END_TASK> |
15,905 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Batch Normalization – Lesson
What is it?
What are its benefits?
How do we add it to a network?
Let's see it work!
What are you hiding?
What is Batch Normalization?<a id='theory'></a>
Batch normalization was introduced in Sergey Ioffe's and Christian Szegedy's 2015 paper Batch Normalization
Step6: Neural network classes for testing
The following class, NeuralNet, allows us to create identical neural networks with and without batch normalization. The code is heavily documented, but there is also some additional discussion later. You do not need to read through it all before going through the rest of the notebook, but the comments within the code blocks may answer some of your questions.
About the code
Step9: There are quite a few comments in the code, so those should answer most of your questions. However, let's take a look at the most important lines.
We add batch normalization to layers inside the fully_connected function. Here are some important points about that code
Step10: Comparisons between identical networks, with and without batch normalization
The next series of cells train networks with various settings to show the differences with and without batch normalization. They are meant to clearly demonstrate the effects of batch normalization. We include a deeper discussion of batch normalization later in the notebook.
The following creates two networks using a ReLU activation function, a learning rate of 0.01, and reasonable starting weights.
Step11: As expected, both networks train well and eventually reach similar test accuracies. However, notice that the model with batch normalization converges slightly faster than the other network, reaching accuracies over 90% almost immediately and nearing its max acuracy in 10 or 15 thousand iterations. The other network takes about 3 thousand iterations to reach 90% and doesn't near its best accuracy until 30 thousand or more iterations.
If you look at the raw speed, you can see that without batch normalization we were computing over 1100 batches per second, whereas with batch normalization that goes down to just over 500. However, batch normalization allows us to perform fewer iterations and converge in less time over all. (We only trained for 50 thousand batches here so we could plot the comparison.)
The following creates two networks with the same hyperparameters used in the previous example, but only trains for 2000 iterations.
Step12: As you can see, using batch normalization produces a model with over 95% accuracy in only 2000 batches, and it was above 90% at somewhere around 500 batches. Without batch normalization, the model takes 1750 iterations just to hit 80% – the network with batch normalization hits that mark after around 200 iterations! (Note
Step13: With the number of layers we're using and this small learning rate, using a sigmoid activation function takes a long time to start learning. It eventually starts making progress, but it took over 45 thousand batches just to get over 80% accuracy. Using batch normalization gets to 90% in around one thousand batches.
The following creates two networks using a ReLU activation function, a learning rate of 1, and reasonable starting weights.
Step14: Now we're using ReLUs again, but with a larger learning rate. The plot shows how training started out pretty normally, with the network with batch normalization starting out faster than the other. But the higher learning rate bounces the accuracy around a bit more, and at some point the accuracy in the network without batch normalization just completely crashes. It's likely that too many ReLUs died off at this point because of the high learning rate.
The next cell shows the same test again. The network with batch normalization performs the same way, and the other suffers from the same problem again, but it manages to train longer before it happens.
Step15: In both of the previous examples, the network with batch normalization manages to get over 98% accuracy, and gets near that result almost immediately. The higher learning rate allows the network to train extremely fast.
The following creates two networks using a sigmoid activation function, a learning rate of 1, and reasonable starting weights.
Step16: In this example, we switched to a sigmoid activation function. It appears to handle the higher learning rate well, with both networks achieving high accuracy.
The cell below shows a similar pair of networks trained for only 2000 iterations.
Step17: As you can see, even though these parameters work well for both networks, the one with batch normalization gets over 90% in 400 or so batches, whereas the other takes over 1700. When training larger networks, these sorts of differences become more pronounced.
The following creates two networks using a ReLU activation function, a learning rate of 2, and reasonable starting weights.
Step18: With this very large learning rate, the network with batch normalization trains fine and almost immediately manages 98% accuracy. However, the network without normalization doesn't learn at all.
The following creates two networks using a sigmoid activation function, a learning rate of 2, and reasonable starting weights.
Step19: Once again, using a sigmoid activation function with the larger learning rate works well both with and without batch normalization.
However, look at the plot below where we train models with the same parameters but only 2000 iterations. As usual, batch normalization lets it train faster.
Step20: In the rest of the examples, we use really bad starting weights. That is, normally we would use very small values close to zero. However, in these examples we choose random values with a standard deviation of 5. If you were really training a neural network, you would not want to do this. But these examples demonstrate how batch normalization makes your network much more resilient.
The following creates two networks using a ReLU activation function, a learning rate of 0.01, and bad starting weights.
Step21: As the plot shows, without batch normalization the network never learns anything at all. But with batch normalization, it actually learns pretty well and gets to almost 80% accuracy. The starting weights obviously hurt the network, but you can see how well batch normalization does in overcoming them.
The following creates two networks using a sigmoid activation function, a learning rate of 0.01, and bad starting weights.
Step22: Using a sigmoid activation function works better than the ReLU in the previous example, but without batch normalization it would take a tremendously long time to train the network, if it ever trained at all.
The following creates two networks using a ReLU activation function, a learning rate of 1, and bad starting weights.<a id="successful_example_lr_1"></a>
Step23: The higher learning rate used here allows the network with batch normalization to surpass 90% in about 30 thousand batches. The network without it never gets anywhere.
The following creates two networks using a sigmoid activation function, a learning rate of 1, and bad starting weights.
Step24: Using sigmoid works better than ReLUs for this higher learning rate. However, you can see that without batch normalization, the network takes a long time to train, bounces around a lot, and spends a long time stuck at 90%. The network with batch normalization trains much more quickly, seems to be more stable, and achieves a higher accuracy.
The following creates two networks using a ReLU activation function, a learning rate of 2, and bad starting weights.<a id="successful_example_lr_2"></a>
Step25: We've already seen that ReLUs do not do as well as sigmoids with higher learning rates, and here we are using an extremely high rate. As expected, without batch normalization the network doesn't learn at all. But with batch normalization, it eventually achieves 90% accuracy. Notice, though, how its accuracy bounces around wildly during training - that's because the learning rate is really much too high, so the fact that this worked at all is a bit of luck.
The following creates two networks using a sigmoid activation function, a learning rate of 2, and bad starting weights.
Step26: In this case, the network with batch normalization trained faster and reached a higher accuracy. Meanwhile, the high learning rate makes the network without normalization bounce around erratically and have trouble getting past 90%.
Full Disclosure
Step27: When we used these same parameters earlier, we saw the network with batch normalization reach 92% validation accuracy. This time we used different starting weights, initialized using the same standard deviation as before, and the network doesn't learn at all. (Remember, an accuracy around 10% is what the network gets if it just guesses the same value all the time.)
The following creates two networks using a ReLU activation function, a learning rate of 2, and bad starting weights.
Step29: When we trained with these parameters and batch normalization earlier, we reached 90% validation accuracy. However, this time the network almost starts to make some progress in the beginning, but it quickly breaks down and stops learning.
Note
Step31: This version of fully_connected is much longer than the original, but once again has extensive comments to help you understand it. Here are some important points
Step32: In the following cell, we pass True for test_training_accuracy, which performs the same batch normalization that we normally perform during training.
Step33: As you can see, the network guessed the same value every time! But why? Because during training, a network with batch normalization adjusts the values at each layer based on the mean and variance of that batch. The "batches" we are using for these predictions have a single input each time, so their values are the means, and their variances will always be 0. That means the network will normalize the values at any layer to zero. (Review the equations from before to see why a value that is equal to the mean would always normalize to zero.) So we end up with the same result for every input we give the network, because it's the value the network produces when it applies its learned weights to zeros at every layer.
Note | <ASSISTANT_TASK:>
Python Code:
# Import necessary packages
import tensorflow as tf
import tqdm
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# Import MNIST data so we have something for our experiments
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
Explanation: Batch Normalization – Lesson
What is it?
What are its benefits?
How do we add it to a network?
Let's see it work!
What are you hiding?
What is Batch Normalization?<a id='theory'></a>
Batch normalization was introduced in Sergey Ioffe's and Christian Szegedy's 2015 paper Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. The idea is that, instead of just normalizing the inputs to the network, we normalize the inputs to layers within the network. It's called "batch" normalization because during training, we normalize each layer's inputs by using the mean and variance of the values in the current mini-batch.
Why might this help? Well, we know that normalizing the inputs to a network helps the network learn. But a network is a series of layers, where the output of one layer becomes the input to another. That means we can think of any layer in a neural network as the first layer of a smaller network.
For example, imagine a 3 layer network. Instead of just thinking of it as a single network with inputs, layers, and outputs, think of the output of layer 1 as the input to a two layer network. This two layer network would consist of layers 2 and 3 in our original network.
Likewise, the output of layer 2 can be thought of as the input to a single layer network, consisting only of layer 3.
When you think of it like that - as a series of neural networks feeding into each other - then it's easy to imagine how normalizing the inputs to each layer would help. It's just like normalizing the inputs to any other neural network, but you're doing it at every layer (sub-network).
Beyond the intuitive reasons, there are good mathematical reasons why it helps the network learn better, too. It helps combat what the authors call internal covariate shift. This discussion is best handled in the paper and in Deep Learning, a book you can read online written by Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Specifically, check out the batch normalization section of Chapter 8: Optimization for Training Deep Models.
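Concretely, the transformation from the Ioffe & Szegedy paper computes, for each mini-batch $B = \{x_1, \ldots, x_m\}$:
$\mu_B = \frac{1}{m}\sum_{i=1}^{m} x_i, \quad \sigma_B^2 = \frac{1}{m}\sum_{i=1}^{m} (x_i - \mu_B)^2$
$\hat{x}_i = \frac{x_i - \mu_B}{\sqrt{\sigma_B^2 + \epsilon}}, \quad y_i = \gamma \hat{x}_i + \beta$
where $\gamma$ and $\beta$ are learned scale and shift parameters and $\epsilon$ is a small constant for numerical stability.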
Benefits of Batch Normalization<a id="benefits"></a>
Batch normalization optimizes network training. It has been shown to have several benefits:
1. Networks train faster – Each training iteration will actually be slower because of the extra calculations during the forward pass and the additional hyperparameters to train during back propagation. However, it should converge much more quickly, so training should be faster overall.
2. Allows higher learning rates – Gradient descent usually requires small learning rates for the network to converge. And as networks get deeper, their gradients get smaller during back propagation so they require even more iterations. Using batch normalization allows us to use much higher learning rates, which further increases the speed at which networks train.
3. Makes weights easier to initialize – Weight initialization can be difficult, and it's even more difficult when creating deeper networks. Batch normalization seems to allow us to be much less careful about choosing our initial starting weights.
4. Makes more activation functions viable – Some activation functions do not work well in some situations. Sigmoids lose their gradient pretty quickly, which means they can't be used in deep networks. And ReLUs often die out during training, where they stop learning completely, so we need to be careful about the range of values fed into them. Because batch normalization regulates the values going into each activation function, non-linearities that don't seem to work well in deep networks actually become viable again.
5. Simplifies the creation of deeper networks – Because of the first 4 items listed above, it is easier to build and faster to train deeper neural networks when using batch normalization. And it's been shown that deeper networks generally produce better results, so that's great.
6. Provides a bit of regularization – Batch normalization adds a little noise to your network. In some cases, such as in Inception modules, batch normalization has been shown to work as well as dropout. But in general, consider batch normalization as a bit of extra regularization, possibly allowing you to reduce some of the dropout you might add to a network.
7. May give better results overall – Some tests seem to show batch normalization actually improves the training results. However, it's really an optimization to help train faster, so you shouldn't think of it as a way to make your network better. But since it lets you train networks faster, that means you can iterate over more designs more quickly. It also lets you build deeper networks, which are usually better. So when you factor in everything, you're probably going to end up with better results if you build your networks with batch normalization.
Batch Normalization in TensorFlow<a id="implementation_1"></a>
This section of the notebook shows you one way to add batch normalization to a neural network built in TensorFlow.
The following cell imports the packages we need in the notebook and loads the MNIST dataset to use in our experiments. However, the tensorflow package contains all the code you'll actually need for batch normalization.
End of explanation
class NeuralNet:
def __init__(self, initial_weights, activation_fn, use_batch_norm):
Initializes this object, creating a TensorFlow graph using the given parameters.
:param initial_weights: list of NumPy arrays or Tensors
Initial values for the weights for every layer in the network. We pass these in
so we can create multiple networks with the same starting weights to eliminate
training differences caused by random initialization differences.
The number of items in the list defines the number of layers in the network,
and the shapes of the items in the list define the number of nodes in each layer.
e.g. Passing in 3 matrices of shape (784, 256), (256, 100), and (100, 10) would
create a network with 784 inputs going into a hidden layer with 256 nodes,
followed by a hidden layer with 100 nodes, followed by an output layer with 10 nodes.
:param activation_fn: Callable
The function used for the output of each hidden layer. The network will use the same
activation function on every hidden layer and no activation function on the output layer.
e.g. Pass tf.nn.relu to use ReLU activations on your hidden layers.
:param use_batch_norm: bool
Pass True to create a network that uses batch normalization; False otherwise
Note: this network will not use batch normalization on layers that do not have an
activation function.
# Keep track of whether or not this network uses batch normalization.
self.use_batch_norm = use_batch_norm
self.name = "With Batch Norm" if use_batch_norm else "Without Batch Norm"
# Batch normalization needs to do different calculations during training and inference,
# so we use this placeholder to tell the graph which behavior to use.
self.is_training = tf.placeholder(tf.bool, name="is_training")
# This list is just for keeping track of data we want to plot later.
# It doesn't actually have anything to do with neural nets or batch normalization.
self.training_accuracies = []
# Create the network graph, but it will not actually have any real values until after you
# call train or test
self.build_network(initial_weights, activation_fn)
def build_network(self, initial_weights, activation_fn):
Build the graph. The graph still needs to be trained via the `train` method.
:param initial_weights: list of NumPy arrays or Tensors
See __init__ for description.
:param activation_fn: Callable
See __init__ for description.
self.input_layer = tf.placeholder(tf.float32, [None, initial_weights[0].shape[0]])
layer_in = self.input_layer
for weights in initial_weights[:-1]:
layer_in = self.fully_connected(layer_in, weights, activation_fn)
self.output_layer = self.fully_connected(layer_in, initial_weights[-1])
def fully_connected(self, layer_in, initial_weights, activation_fn=None):
Creates a standard, fully connected layer. Its number of inputs and outputs will be
defined by the shape of `initial_weights`, and its starting weight values will be
taken directly from that same parameter. If `self.use_batch_norm` is True, this
layer will include batch normalization, otherwise it will not.
:param layer_in: Tensor
The Tensor that feeds into this layer. It's either the input to the network or the output
of a previous layer.
:param initial_weights: NumPy array or Tensor
Initial values for this layer's weights. The shape defines the number of nodes in the layer.
e.g. Passing in a matrix of shape (784, 256) would create a layer with 784 inputs and 256
outputs.
:param activation_fn: Callable or None (default None)
The non-linearity used for the output of the layer. If None, this layer will not include
batch normalization, regardless of the value of `self.use_batch_norm`.
e.g. Pass tf.nn.relu to use ReLU activations on your hidden layers.
# Since this class supports both options, only use batch normalization when
# requested. However, do not use it on the final layer, which we identify
# by its lack of an activation function.
if self.use_batch_norm and activation_fn:
# Batch normalization uses weights as usual, but does NOT add a bias term. This is because
# its calculations include gamma and beta variables that make the bias term unnecessary.
# (See later in the notebook for more details.)
weights = tf.Variable(initial_weights)
linear_output = tf.matmul(layer_in, weights)
# Apply batch normalization to the linear combination of the inputs and weights
batch_normalized_output = tf.layers.batch_normalization(linear_output, training=self.is_training)
# Now apply the activation function, *after* the normalization.
return activation_fn(batch_normalized_output)
else:
# When not using batch normalization, create a standard layer that multiplies
# the inputs and weights, adds a bias, and optionally passes the result
# through an activation function.
weights = tf.Variable(initial_weights)
biases = tf.Variable(tf.zeros([initial_weights.shape[-1]]))
linear_output = tf.add(tf.matmul(layer_in, weights), biases)
return linear_output if not activation_fn else activation_fn(linear_output)
def train(self, session, learning_rate, training_batches, batches_per_sample, save_model_as=None):
Trains the model on the MNIST training dataset.
:param session: Session
Used to run training graph operations.
:param learning_rate: float
Learning rate used during gradient descent.
:param training_batches: int
Number of batches to train.
:param batches_per_sample: int
How many batches to train before sampling the validation accuracy.
:param save_model_as: string or None (default None)
Name to use if you want to save the trained model.
# This placeholder will store the target labels for each mini batch
labels = tf.placeholder(tf.float32, [None, 10])
# Define loss and optimizer
cross_entropy = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=self.output_layer))
# Define operations for testing
correct_prediction = tf.equal(tf.argmax(self.output_layer, 1), tf.argmax(labels, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
if self.use_batch_norm:
# If we don't include the update ops as dependencies on the train step, the
# tf.layers.batch_normalization layers won't update their population statistics,
# which will cause the model to fail at inference time
with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy)
else:
train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy)
# Train for the appropriate number of batches. (tqdm is only for a nice timing display)
for i in tqdm.tqdm(range(training_batches)):
# We use batches of 60 just because the original paper did. You can use any size batch you like.
batch_xs, batch_ys = mnist.train.next_batch(60)
session.run(train_step, feed_dict={self.input_layer: batch_xs,
labels: batch_ys,
self.is_training: True})
# Periodically test accuracy against the 5k validation images and store it for plotting later.
if i % batches_per_sample == 0:
test_accuracy = session.run(accuracy, feed_dict={self.input_layer: mnist.validation.images,
labels: mnist.validation.labels,
self.is_training: False})
self.training_accuracies.append(test_accuracy)
# After training, report accuracy against the validation data
test_accuracy = session.run(accuracy, feed_dict={self.input_layer: mnist.validation.images,
labels: mnist.validation.labels,
self.is_training: False})
print('{}: After training, final accuracy on validation set = {}'.format(self.name, test_accuracy))
# If you want to use this model later for inference instead of having to retrain it,
# just construct it with the same parameters and then pass this file to the 'test' function
if save_model_as:
tf.train.Saver().save(session, save_model_as)
def test(self, session, test_training_accuracy=False, include_individual_predictions=False, restore_from=None):
Tests a trained model on the MNIST testing dataset.
:param session: Session
Used to run the testing graph operations.
:param test_training_accuracy: bool (default False)
If True, perform inference with batch normalization using batch mean and variance;
if False, perform inference with batch normalization using estimated population mean and variance.
Note: in real life, *always* perform inference using the population mean and variance.
This parameter exists just to support demonstrating what happens if you don't.
:param include_individual_predictions: bool (default False)
This function always performs an accuracy test against the entire test set. But if this parameter
is True, it performs an extra test, doing 200 predictions one at a time, and displays the results
and accuracy.
:param restore_from: string or None (default None)
Name of a saved model if you want to test with previously saved weights.
# This placeholder will store the true labels for each mini batch
labels = tf.placeholder(tf.float32, [None, 10])
# Define operations for testing
correct_prediction = tf.equal(tf.argmax(self.output_layer, 1), tf.argmax(labels, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# If provided, restore from a previously saved model
if restore_from:
tf.train.Saver().restore(session, restore_from)
# Test against all of the MNIST test data
test_accuracy = session.run(accuracy, feed_dict={self.input_layer: mnist.test.images,
labels: mnist.test.labels,
self.is_training: test_training_accuracy})
print('-'*75)
print('{}: Accuracy on full test set = {}'.format(self.name, test_accuracy))
# If requested, perform tests predicting individual values rather than batches
if include_individual_predictions:
predictions = []
correct = 0
# Do 200 predictions, 1 at a time
for i in range(200):
# This is a normal prediction using an individual test case. However, notice
# we pass `test_training_accuracy` to `feed_dict` as the value for `self.is_training`.
# Remember that will tell it whether it should use the batch mean & variance or
# the population estimates that were calculated while training the model.
pred, corr = session.run([tf.argmax(self.output_layer, 1), accuracy],
feed_dict={self.input_layer: [mnist.test.images[i]],
labels: [mnist.test.labels[i]],
self.is_training: test_training_accuracy})
correct += corr
predictions.append(pred[0])
print("200 Predictions:", predictions)
print("Accuracy on 200 samples:", correct/200)
Explanation: Neural network classes for testing
The following class, NeuralNet, allows us to create identical neural networks with and without batch normalization. The code is heavily documented, but there is also some additional discussion later. You do not need to read through it all before going through the rest of the notebook, but the comments within the code blocks may answer some of your questions.
About the code:
This class is not meant to represent TensorFlow best practices – the design choices made here are to support the discussion related to batch normalization.
It's also important to note that we use the well-known MNIST data for these examples, but the networks we create are not meant to be good for performing handwritten character recognition. We chose this network architecture because it is similar to the one used in the original paper, which is complex enough to demonstrate some of the benefits of batch normalization while still being fast to train.
End of explanation
def plot_training_accuracies(*args, **kwargs):
Displays a plot of the accuracies calculated during training to demonstrate
how many iterations it took for the model(s) to converge.
:param args: One or more NeuralNet objects
You can supply any number of NeuralNet objects as unnamed arguments
and this will display their training accuracies. Be sure to call `train` on
the NeuralNets before calling this function.
:param kwargs:
You can supply any named parameters here, but `batches_per_sample` is the only
one we look for. It should match the `batches_per_sample` value you passed
to the `train` function.
fig, ax = plt.subplots()
batches_per_sample = kwargs['batches_per_sample']
for nn in args:
ax.plot(range(0,len(nn.training_accuracies)*batches_per_sample,batches_per_sample),
nn.training_accuracies, label=nn.name)
ax.set_xlabel('Training steps')
ax.set_ylabel('Accuracy')
ax.set_title('Validation Accuracy During Training')
ax.legend(loc=4)
ax.set_ylim([0,1])
plt.yticks(np.arange(0, 1.1, 0.1))
plt.grid(True)
plt.show()
def train_and_test(use_bad_weights, learning_rate, activation_fn, training_batches=50000, batches_per_sample=500):
Creates two networks, one with and one without batch normalization, then trains them
with identical starting weights, layers, batches, etc. Finally tests and plots their accuracies.
:param use_bad_weights: bool
If True, initialize the weights of both networks to wildly inappropriate weights;
if False, use reasonable starting weights.
:param learning_rate: float
Learning rate used during gradient descent.
:param activation_fn: Callable
The function used for the output of each hidden layer. The network will use the same
activation function on every hidden layer and no activation function on the output layer.
e.g. Pass tf.nn.relu to use ReLU activations on your hidden layers.
:param training_batches: (default 50000)
Number of batches to train.
:param batches_per_sample: (default 500)
How many batches to train before sampling the validation accuracy.
# Use identical starting weights for each network to eliminate differences in
# weight initialization as a cause for differences seen in training performance
#
# Note: The networks will use these weights to define the number of and shapes of
# its layers. The original batch normalization paper used 3 hidden layers
# with 100 nodes in each, followed by a 10 node output layer. These values
# build such a network, but feel free to experiment with different choices.
# However, the input size should always be 784 and the final output should be 10.
if use_bad_weights:
# These weights should be horrible because they have such a large standard deviation
weights = [np.random.normal(size=(784,100), scale=5.0).astype(np.float32),
np.random.normal(size=(100,100), scale=5.0).astype(np.float32),
np.random.normal(size=(100,100), scale=5.0).astype(np.float32),
np.random.normal(size=(100,10), scale=5.0).astype(np.float32)
]
else:
# These weights should be good because they have such a small standard deviation
weights = [np.random.normal(size=(784,100), scale=0.05).astype(np.float32),
np.random.normal(size=(100,100), scale=0.05).astype(np.float32),
np.random.normal(size=(100,100), scale=0.05).astype(np.float32),
np.random.normal(size=(100,10), scale=0.05).astype(np.float32)
]
# Just to make sure the TensorFlow's default graph is empty before we start another
# test, because we don't bother using different graphs or scoping and naming
# elements carefully in this sample code.
tf.reset_default_graph()
# build two versions of same network, 1 without and 1 with batch normalization
nn = NeuralNet(weights, activation_fn, False)
bn = NeuralNet(weights, activation_fn, True)
# train and test the two models
with tf.Session() as sess:
tf.global_variables_initializer().run()
nn.train(sess, learning_rate, training_batches, batches_per_sample)
bn.train(sess, learning_rate, training_batches, batches_per_sample)
nn.test(sess)
bn.test(sess)
# Display a graph of how validation accuracies changed during training
# so we can compare how the models trained and when they converged
plot_training_accuracies(nn, bn, batches_per_sample=batches_per_sample)
Explanation: There are quite a few comments in the code, so those should answer most of your questions. However, let's take a look at the most important lines.
We add batch normalization to layers inside the fully_connected function. Here are some important points about that code:
1. Layers with batch normalization do not include a bias term.
2. We use TensorFlow's tf.layers.batch_normalization function to handle the math. (We show lower-level ways to do this later in the notebook.)
3. We tell tf.layers.batch_normalization whether or not the network is training. This is an important step we'll talk about later.
4. We add the normalization before calling the activation function.
In addition to that code, the training step is wrapped in the following with statement:
python
with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
This line actually works in conjunction with the training parameter we pass to tf.layers.batch_normalization. Without it, TensorFlow's batch normalization layer will not operate correctly during inference.
Finally, whenever we train the network or perform inference, we use the feed_dict to set self.is_training to True or False, respectively, like in the following line:
python
session.run(train_step, feed_dict={self.input_layer: batch_xs,
labels: batch_ys,
self.is_training: True})
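Putting those pieces together, here is a stripped-down sketch of the whole pattern (the layer sizes and optimizer settings are invented for illustration and are not taken from the NeuralNet class):

```python
import tensorflow as tf

is_training = tf.placeholder(tf.bool, name="is_training")
inputs = tf.placeholder(tf.float32, [None, 784])
labels = tf.placeholder(tf.float32, [None, 10])

# Linear layer without a bias, then batch normalization, then the activation.
hidden = tf.layers.dense(inputs, 100, activation=None, use_bias=False)
hidden = tf.layers.batch_normalization(hidden, training=is_training)
hidden = tf.nn.relu(hidden)
logits = tf.layers.dense(hidden, 10)

loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=logits))

# Without this dependency, the population mean and variance would never update.
with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
    train_step = tf.train.GradientDescentOptimizer(0.01).minimize(loss)
```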
We'll go into more details later, but next we want to show some experiments that use this code and test networks with and without batch normalization.
Batch Normalization Demos<a id='demos'></a>
This section of the notebook trains various networks with and without batch normalization to demonstrate some of the benefits mentioned earlier.
We'd like to thank the author of this blog post Implementing Batch Normalization in TensorFlow. That post provided the idea of - and some of the code for - plotting the differences in accuracy during training, along with the idea for comparing multiple networks using the same initial weights.
Code to support testing
The following two functions support the demos we run in the notebook.
The first function, plot_training_accuracies, simply plots the values found in the training_accuracies lists of the NeuralNet objects passed to it. If you look at the train function in NeuralNet, you'll see that while it's training the network, it periodically measures validation accuracy and stores the results in that list. It does that just to support these plots.
The second function, train_and_test, creates two neural nets - one with and one without batch normalization. It then trains them both and tests them, calling plot_training_accuracies to plot how their accuracies changed over the course of training. The really important thing about this function is that it initializes the starting weights for the networks outside of the networks and then passes them in. This lets it train both networks from the exact same starting weights, which eliminates performance differences that might result from (un)lucky initial weights.
End of explanation
train_and_test(False, 0.01, tf.nn.relu)
Explanation: Comparisons between identical networks, with and without batch normalization
The next series of cells train networks with various settings to show the differences with and without batch normalization. They are meant to clearly demonstrate the effects of batch normalization. We include a deeper discussion of batch normalization later in the notebook.
The following creates two networks using a ReLU activation function, a learning rate of 0.01, and reasonable starting weights.
End of explanation
train_and_test(False, 0.01, tf.nn.relu, 2000, 50)
Explanation: As expected, both networks train well and eventually reach similar test accuracies. However, notice that the model with batch normalization converges slightly faster than the other network, reaching accuracies over 90% almost immediately and nearing its max accuracy in 10 or 15 thousand iterations. The other network takes about 3 thousand iterations to reach 90% and doesn't near its best accuracy until 30 thousand or more iterations.
If you look at the raw speed, you can see that without batch normalization we were computing over 1100 batches per second, whereas with batch normalization that goes down to just over 500. However, batch normalization allows us to perform fewer iterations and converge in less time overall. (We only trained for 50 thousand batches here so we could plot the comparison.)
The following creates two networks with the same hyperparameters used in the previous example, but only trains for 2000 iterations.
End of explanation
train_and_test(False, 0.01, tf.nn.sigmoid)
Explanation: As you can see, using batch normalization produces a model with over 95% accuracy in only 2000 batches, and it was above 90% at somewhere around 500 batches. Without batch normalization, the model takes 1750 iterations just to hit 80% – the network with batch normalization hits that mark after around 200 iterations! (Note: if you run the code yourself, you'll see slightly different results each time because the starting weights - while the same for each model - are different for each run.)
In the above example, you should also notice that the networks trained fewer batches per second than what you saw in the previous example. That's because much of the time we're tracking is actually spent periodically performing inference to collect data for the plots. In this example we perform that inference every 50 batches instead of every 500, so generating the plot for this example requires 10 times the overhead for the same 2000 iterations.
The following creates two networks using a sigmoid activation function, a learning rate of 0.01, and reasonable starting weights.
End of explanation
train_and_test(False, 1, tf.nn.relu)
Explanation: With the number of layers we're using and this small learning rate, using a sigmoid activation function takes a long time to start learning. It eventually starts making progress, but it took over 45 thousand batches just to get over 80% accuracy. Using batch normalization gets to 90% in around one thousand batches.
The following creates two networks using a ReLU activation function, a learning rate of 1, and reasonable starting weights.
End of explanation
train_and_test(False, 1, tf.nn.relu)
Explanation: Now we're using ReLUs again, but with a larger learning rate. The plot shows how training started out pretty normally, with the network with batch normalization starting out faster than the other. But the higher learning rate bounces the accuracy around a bit more, and at some point the accuracy in the network without batch normalization just completely crashes. It's likely that too many ReLUs died off at this point because of the high learning rate.
The next cell shows the same test again. The network with batch normalization performs the same way, and the other suffers from the same problem again, but it manages to train longer before it happens.
End of explanation
train_and_test(False, 1, tf.nn.sigmoid)
Explanation: In both of the previous examples, the network with batch normalization manages to gets over 98% accuracy, and get near that result almost immediately. The higher learning rate allows the network to train extremely fast.
The following creates two networks using a sigmoid activation function, a learning rate of 1, and reasonable starting weights.
End of explanation
train_and_test(False, 1, tf.nn.sigmoid, 2000, 50)
Explanation: In this example, we switched to a sigmoid activation function. It appears to handle the higher learning rate well, with both networks achieving high accuracy.
The cell below shows a similar pair of networks trained for only 2000 iterations.
End of explanation
train_and_test(False, 2, tf.nn.relu)
Explanation: As you can see, even though these parameters work well for both networks, the one with batch normalization gets over 90% in 400 or so batches, whereas the other takes over 1700. When training larger networks, these sorts of differences become more pronounced.
The following creates two networks using a ReLU activation function, a learning rate of 2, and reasonable starting weights.
End of explanation
train_and_test(False, 2, tf.nn.sigmoid)
Explanation: With this very large learning rate, the network with batch normalization trains fine and almost immediately manages 98% accuracy. However, the network without normalization doesn't learn at all.
The following creates two networks using a sigmoid activation function, a learning rate of 2, and reasonable starting weights.
End of explanation
train_and_test(False, 2, tf.nn.sigmoid, 2000, 50)
Explanation: Once again, using a sigmoid activation function with the larger learning rate works well both with and without batch normalization.
However, look at the plot below where we train models with the same parameters but only 2000 iterations. As usual, batch normalization lets it train faster.
End of explanation
train_and_test(True, 0.01, tf.nn.relu)
Explanation: In the rest of the examples, we use really bad starting weights. That is, normally we would use very small values close to zero. However, in these examples we choose random values with a standard deviation of 5. If you were really training a neural network, you would not want to do this. But these examples demonstrate how batch normalization makes your network much more resilient.
The following creates two networks using a ReLU activation function, a learning rate of 0.01, and bad starting weights.
End of explanation
train_and_test(True, 0.01, tf.nn.sigmoid)
Explanation: As the plot shows, without batch normalization the network never learns anything at all. But with batch normalization, it actually learns pretty well and gets to almost 80% accuracy. The starting weights obviously hurt the network, but you can see how well batch normalization does in overcoming them.
The following creates two networks using a sigmoid activation function, a learning rate of 0.01, and bad starting weights.
End of explanation
train_and_test(True, 1, tf.nn.relu)
Explanation: Using a sigmoid activation function works better than the ReLU in the previous example, but without batch normalization it would take a tremendously long time to train the network, if it ever trained at all.
The following creates two networks using a ReLU activation function, a learning rate of 1, and bad starting weights.<a id="successful_example_lr_1"></a>
End of explanation
train_and_test(True, 1, tf.nn.sigmoid)
Explanation: The higher learning rate used here allows the network with batch normalization to surpass 90% in about 30 thousand batches. The network without it never gets anywhere.
The following creates two networks using a sigmoid activation function, a learning rate of 1, and bad starting weights.
End of explanation
train_and_test(True, 2, tf.nn.relu)
Explanation: Using sigmoid works better than ReLUs for this higher learning rate. However, you can see that without batch normalization, the network takes a long time to train, bounces around a lot, and spends a long time stuck at 90%. The network with batch normalization trains much more quickly, seems to be more stable, and achieves a higher accuracy.
The following creates two networks using a ReLU activation function, a learning rate of 2, and bad starting weights.<a id="successful_example_lr_2"></a>
End of explanation
train_and_test(True, 2, tf.nn.sigmoid)
Explanation: We've already seen that ReLUs do not do as well as sigmoids with higher learning rates, and here we are using an extremely high rate. As expected, without batch normalization the network doesn't learn at all. But with batch normalization, it eventually achieves 90% accuracy. Notice, though, how its accuracy bounces around wildly during training - that's because the learning rate is really much too high, so the fact that this worked at all is a bit of luck.
The following creates two networks using a sigmoid activation function, a learning rate of 2, and bad starting weights.
End of explanation
train_and_test(True, 1, tf.nn.relu)
Explanation: In this case, the network with batch normalization trained faster and reached a higher accuracy. Meanwhile, the high learning rate makes the network without normalization bounce around erratically and have trouble getting past 90%.
Full Disclosure: Batch Normalization Doesn't Fix Everything
Batch normalization isn't magic and it doesn't work every time. Weights are still randomly initialized and batches are chosen at random during training, so you never know exactly how training will go. Even for these tests, where we use the same initial weights for both networks, we still get different weights each time we run.
This section includes two examples that show runs when batch normalization did not help at all.
The following creates two networks using a ReLU activation function, a learning rate of 1, and bad starting weights.
End of explanation
train_and_test(True, 2, tf.nn.relu)
Explanation: When we used these same parameters earlier, we saw the network with batch normalization reach 92% validation accuracy. This time we used different starting weights, initialized using the same standard deviation as before, and the network doesn't learn at all. (Remember, an accuracy around 10% is what the network gets if it just guesses the same value all the time.)
The following creates two networks using a ReLU activation function, a learning rate of 2, and bad starting weights.
End of explanation
def fully_connected(self, layer_in, initial_weights, activation_fn=None):
Creates a standard, fully connected layer. Its number of inputs and outputs will be
defined by the shape of `initial_weights`, and its starting weight values will be
taken directly from that same parameter. If `self.use_batch_norm` is True, this
layer will include batch normalization, otherwise it will not.
:param layer_in: Tensor
The Tensor that feeds into this layer. It's either the input to the network or the output
of a previous layer.
:param initial_weights: NumPy array or Tensor
Initial values for this layer's weights. The shape defines the number of nodes in the layer.
e.g. Passing in a matrix of shape (784, 256) would create a layer with 784 inputs and 256
outputs.
:param activation_fn: Callable or None (default None)
The non-linearity used for the output of the layer. If None, this layer will not include
batch normalization, regardless of the value of `self.use_batch_norm`.
e.g. Pass tf.nn.relu to use ReLU activations on your hidden layers.
if self.use_batch_norm and activation_fn:
# Batch normalization uses weights as usual, but does NOT add a bias term. This is because
# its calculations include gamma and beta variables that make the bias term unnecessary.
weights = tf.Variable(initial_weights)
linear_output = tf.matmul(layer_in, weights)
num_out_nodes = initial_weights.shape[-1]
# Batch normalization adds additional trainable variables:
# gamma (for scaling) and beta (for shifting).
gamma = tf.Variable(tf.ones([num_out_nodes]))
beta = tf.Variable(tf.zeros([num_out_nodes]))
# These variables will store the mean and variance for this layer over the entire training set,
# which we assume represents the general population distribution.
# By setting `trainable=False`, we tell TensorFlow not to modify these variables during
# back propagation. Instead, we will assign values to these variables ourselves.
pop_mean = tf.Variable(tf.zeros([num_out_nodes]), trainable=False)
pop_variance = tf.Variable(tf.ones([num_out_nodes]), trainable=False)
# Batch normalization requires a small constant epsilon, used to ensure we don't divide by zero.
# This is the default value TensorFlow uses.
epsilon = 1e-3
def batch_norm_training():
# Calculate the mean and variance for the data coming out of this layer's linear-combination step.
# The [0] defines an array of axes to calculate over.
batch_mean, batch_variance = tf.nn.moments(linear_output, [0])
# Calculate a moving average of the training data's mean and variance while training.
# These will be used during inference.
# Decay should be some number less than 1. tf.layers.batch_normalization uses the parameter
# "momentum" to accomplish this and defaults it to 0.99
decay = 0.99
train_mean = tf.assign(pop_mean, pop_mean * decay + batch_mean * (1 - decay))
train_variance = tf.assign(pop_variance, pop_variance * decay + batch_variance * (1 - decay))
# The 'tf.control_dependencies' context tells TensorFlow it must calculate 'train_mean'
# and 'train_variance' before it calculates the 'tf.nn.batch_normalization' layer.
# This is necessary because those two operations are not actually in the graph
# connecting the linear_output and batch_normalization layers,
# so TensorFlow would otherwise just skip them.
with tf.control_dependencies([train_mean, train_variance]):
return tf.nn.batch_normalization(linear_output, batch_mean, batch_variance, beta, gamma, epsilon)
def batch_norm_inference():
# During inference, use our estimated population mean and variance to normalize the layer
return tf.nn.batch_normalization(linear_output, pop_mean, pop_variance, beta, gamma, epsilon)
# Use `tf.cond` as a sort of if-check. When self.is_training is True, TensorFlow will execute
# the operation returned from `batch_norm_training`; otherwise it will execute the graph
# operation returned from `batch_norm_inference`.
batch_normalized_output = tf.cond(self.is_training, batch_norm_training, batch_norm_inference)
# Pass the batch-normalized layer output through the activation function.
# The literature states there may be cases where you want to perform the batch normalization *after*
# the activation function, but it is difficult to find any uses of that in practice.
return activation_fn(batch_normalized_output)
else:
# When not using batch normalization, create a standard layer that multiplies
# the inputs and weights, adds a bias, and optionally passes the result
# through an activation function.
weights = tf.Variable(initial_weights)
biases = tf.Variable(tf.zeros([initial_weights.shape[-1]]))
linear_output = tf.add(tf.matmul(layer_in, weights), biases)
return linear_output if not activation_fn else activation_fn(linear_output)
Explanation: When we trained with these parameters and batch normalization earlier, we reached 90% validation accuracy. However, this time the network almost starts to make some progress in the beginning, but it quickly breaks down and stops learning.
Note: Both of the above examples use extremely bad starting weights, along with learning rates that are too high. While we've shown batch normalization can overcome bad values, we don't mean to encourage actually using them. The examples in this notebook are meant to show that batch normalization can help your networks train better. But these last two examples should remind you that you still want to try to use good network design choices and reasonable starting weights. It should also remind you that the results of each attempt to train a network are a bit random, even when using otherwise identical architectures.
Batch Normalization: A Detailed Look<a id='implementation_2'></a>
The layer created by tf.layers.batch_normalization handles all the details of implementing batch normalization. Many students will be fine just using that and won't care about what's happening at the lower levels. However, some students may want to explore the details, so here is a short explanation of what's really happening, starting with the equations you're likely to come across if you ever read about batch normalization.
In order to normalize the values, we first need to find the average value for the batch. If you look at the code, you can see that this is not the average value of the batch inputs, but the average value coming out of any particular layer before we pass it through its non-linear activation function and then feed it as an input to the next layer.
We represent the average as $\mu_B$, which is simply the sum of all of the values $x_i$ divided by the number of values, $m$
$$
\mu_B \leftarrow \frac{1}{m}\sum_{i=1}^m x_i
$$
We then need to calculate the variance, or mean squared deviation, represented as $\sigma_{B}^{2}$. If you aren't familiar with statistics, that simply means for each value $x_i$, we subtract the average value (calculated earlier as $\mu_B$), which gives us what's called the "deviation" for that value. We square the result to get the squared deviation. Sum up the results of doing that for each of the values, then divide by the number of values, again $m$, to get the average, or mean, squared deviation.
$$
\sigma_{B}^{2} \leftarrow \frac{1}{m}\sum_{i=1}^m (x_i - \mu_B)^2
$$
Once we have the mean and variance, we can use them to normalize the values with the following equation. For each value, it subtracts the mean and divides by the (almost) standard deviation. (You've probably heard of standard deviation many times, but if you have not studied statistics you might not know that the standard deviation is actually the square root of the mean squared deviation.)
$$
\hat{x_i} \leftarrow \frac{x_i - \mu_B}{\sqrt{\sigma_{B}^{2} + \epsilon}}
$$
Above, we said "(almost) standard deviation". That's because the real standard deviation for the batch is calculated by $\sqrt{\sigma_{B}^{2}}$, but the above formula adds the term epsilon, $\epsilon$, before taking the square root. The epsilon can be any small, positive constant - in our code we use the value 0.001. It is there partially to make sure we don't try to divide by zero, but it also acts to increase the variance slightly for each batch.
Why increase the variance? Statistically, this makes sense because even though we are normalizing one batch at a time, we are also trying to estimate the population distribution – the total training set, which is itself an estimate of the larger population of inputs your network wants to handle. The variance of a population is higher than the variance for any sample taken from that population, so increasing the variance a little bit for each batch helps take that into account.
At this point, we have a normalized value, represented as $\hat{x_i}$. But rather than use it directly, we multiply it by a gamma value, $\gamma$, and then add a beta value, $\beta$. Both $\gamma$ and $\beta$ are learnable parameters of the network and serve to scale and shift the normalized value, respectively. Because they are learnable just like weights, they give your network some extra knobs to tweak during training to help it learn the function it is trying to approximate.
$$
y_i \leftarrow \gamma \hat{x_i} + \beta
$$
We now have the final batch-normalized output of our layer, which we would then pass to a non-linear activation function like sigmoid, tanh, ReLU, Leaky ReLU, etc. In the original batch normalization paper (linked in the beginning of this notebook), they mention that there might be cases when you'd want to perform the batch normalization after the non-linearity instead of before, but it is difficult to find any uses like that in practice.
In NeuralNet's implementation of fully_connected, all of this math is hidden inside the following line, where linear_output serves as the $x_i$ from the equations:
python
batch_normalized_output = tf.layers.batch_normalization(linear_output, training=self.is_training)
The next section shows you how to implement the math directly.
Batch normalization without the tf.layers package
Our implementation of batch normalization in NeuralNet uses the high-level abstraction tf.layers.batch_normalization, found in TensorFlow's tf.layers package.
However, if you would like to implement batch normalization at a lower level, the following code shows you how.
It uses tf.nn.batch_normalization from TensorFlow's neural net (nn) package.
1) You can replace the fully_connected function in the NeuralNet class with the below code and everything in NeuralNet will still work like it did before.
End of explanation
def batch_norm_test(test_training_accuracy):
:param test_training_accuracy: bool
If True, perform inference with batch normalization using batch mean and variance;
if False, perform inference with batch normalization using estimated population mean and variance.
weights = [np.random.normal(size=(784,100), scale=0.05).astype(np.float32),
np.random.normal(size=(100,100), scale=0.05).astype(np.float32),
np.random.normal(size=(100,100), scale=0.05).astype(np.float32),
np.random.normal(size=(100,10), scale=0.05).astype(np.float32)
]
tf.reset_default_graph()
# Train the model
bn = NeuralNet(weights, tf.nn.relu, True)
# First train the network
with tf.Session() as sess:
tf.global_variables_initializer().run()
bn.train(sess, 0.01, 2000, 2000)
bn.test(sess, test_training_accuracy=test_training_accuracy, include_individual_predictions=True)
Explanation: This version of fully_connected is much longer than the original, but once again has extensive comments to help you understand it. Here are some important points:
It explicitly creates variables to store gamma, beta, and the population mean and variance. These were all handled for us in the previous version of the function.
It initializes gamma to one and beta to zero, so they start out having no effect in this calculation: $y_i \leftarrow \gamma \hat{x_i} + \beta$. However, during training the network learns the best values for these variables using back propagation, just like networks normally do with weights.
Unlike gamma and beta, the variables for population mean and variance are marked as untrainable. That tells TensorFlow not to modify them during back propagation. Instead, the lines that call tf.assign are used to update these variables directly.
TensorFlow won't automatically run the tf.assign operations during training because it only evaluates operations that are required based on the connections it finds in the graph. To get around that, we add this line: with tf.control_dependencies([train_mean, train_variance]): before we run the normalization operation. This tells TensorFlow it needs to run those operations before running anything inside the with block.
The actual normalization math is still mostly hidden from us, this time using tf.nn.batch_normalization.
tf.nn.batch_normalization does not have a training parameter like tf.layers.batch_normalization did. However, we still need to handle training and inference differently, so we run different code in each case using the tf.cond operation.
We use the tf.nn.moments function to calculate the batch mean and variance.
2) The current version of the train function in NeuralNet will work fine with this new version of fully_connected. However, it uses these lines to ensure population statistics are updated when using batch normalization:
python
if self.use_batch_norm:
with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy)
else:
train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy)
Our new version of fully_connected handles updating the population statistics directly. That means you can also simplify your code by replacing the above if/else condition with just this line:
python
train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy)
3) And just in case you want to implement every detail from scratch, you can replace this line in batch_norm_training:
python
return tf.nn.batch_normalization(linear_output, batch_mean, batch_variance, beta, gamma, epsilon)
with these lines:
python
normalized_linear_output = (linear_output - batch_mean) / tf.sqrt(batch_variance + epsilon)
return gamma * normalized_linear_output + beta
And replace this line in batch_norm_inference:
python
return tf.nn.batch_normalization(linear_output, pop_mean, pop_variance, beta, gamma, epsilon)
with these lines:
python
normalized_linear_output = (linear_output - pop_mean) / tf.sqrt(pop_variance + epsilon)
return gamma * normalized_linear_output + beta
As you can see in each of the above substitutions, the two lines of replacement code simply implement the following two equations directly. The first line calculates the following equation, with linear_output representing $x_i$ and normalized_linear_output representing $\hat{x_i}$:
$$
\hat{x_i} \leftarrow \frac{x_i - \mu_B}{\sqrt{\sigma_{B}^{2} + \epsilon}}
$$
And the second line is a direct translation of the following equation:
$$
y_i \leftarrow \gamma \hat{x_i} + \beta
$$
We still use the tf.nn.moments operation to implement the other two equations from earlier – the ones that calculate the batch mean and variance used in the normalization step. If you really wanted to do everything from scratch, you could replace that line, too, but we'll leave that to you.
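If you do want to go all the way from scratch, one possible drop-in replacement for that tf.nn.moments call is a couple of tf.reduce_mean operations – this is just a sketch and is not used anywhere else in the notebook:

```python
# Per-node mean and (biased) variance over the batch dimension.
batch_mean = tf.reduce_mean(linear_output, axis=0)
batch_variance = tf.reduce_mean(tf.square(linear_output - batch_mean), axis=0)
```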
Why the difference between training and inference?
In the original function that uses tf.layers.batch_normalization, we tell the layer whether or not the network is training by passing a value for its training parameter, like so:
python
batch_normalized_output = tf.layers.batch_normalization(linear_output, training=self.is_training)
And that forces us to provide a value for self.is_training in our feed_dict, like we do in this example from NeuralNet's train function:
python
session.run(train_step, feed_dict={self.input_layer: batch_xs,
labels: batch_ys,
self.is_training: True})
If you looked at the low level implementation, you probably noticed that, just like with tf.layers.batch_normalization, we need to do slightly different things during training and inference. But why is that?
First, let's look at what happens when we don't. The following function is similar to train_and_test from earlier, but this time we are only testing one network and instead of plotting its accuracy, we perform 200 predictions on test inputs, 1 input at a time. We can use the test_training_accuracy parameter to test the network in training or inference modes (the equivalent of passing True or False to the feed_dict for is_training).
End of explanation
batch_norm_test(True)
Explanation: In the following cell, we pass True for test_training_accuracy, which performs the same batch normalization that we normally perform during training.
End of explanation
batch_norm_test(False)
Explanation: As you can see, the network guessed the same value every time! But why? Because during training, a network with batch normalization adjusts the values at each layer based on the mean and variance of that batch. The "batches" we are using for these predictions have a single input each time, so their values are the means, and their variances will always be 0. That means the network will normalize the values at any layer to zero. (Review the equations from before to see why a value that is equal to the mean would always normalize to zero.) So we end up with the same result for every input we give the network, because it's the value the network produces when it applies its learned weights to zeros at every layer.
Note: If you re-run that cell, you might get a different value from what we showed. That's because the specific weights the network learns will be different every time. But whatever value it is, it should be the same for all 200 predictions.
To overcome this problem, the network does not just normalize the batch at each layer. It also maintains an estimate of the mean and variance for the entire population. So when we perform inference, instead of letting it "normalize" all the values using their own means and variance, it uses the estimates of the population mean and variance that it calculated while training.
So in the following example, we pass False for test_training_accuracy, which tells the network that we want to perform inference with the population statistics it calculated during training.
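As a rough, invented-numbers illustration of how those population estimates come about: they are running averages nudged toward each batch's statistics, so after many batches they settle near the statistics of the whole training set.

```python
decay = 0.99
pop_mean = 0.0
for batch_mean in [0.8, 1.1, 0.9, 1.0, 1.2]:  # pretend per-batch means
    pop_mean = pop_mean * decay + batch_mean * (1 - decay)
print(pop_mean)  # creeps slowly from 0 toward the true mean (~1.0)
```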
End of explanation
<END_TASK> |
15,906 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Training DeepMind's Atari DQN with Chimp
Load Chimp modules
Step1: Load Python packages
Step2: Set training parameters
Step3: You may want to set a smaller number of iterations (like 100000) - for illustration purposes. We set the GPU option to True; turn it off if your machine does not support it. Be sure to have the requested rom in the indicated directory.
Step4: Now we initialize the simulator first, as we need to use some information it provides - e.g., number of actions.
Step5: Here we define the convolutional network, in a format required by Chainer - the deep learning library we use.
Step6: We then initialize the learner + chainer backend, replay memory, and agent modules.
Step7: Now let the agent train.
Step8: Visualizing results
First, let's visualize the training and evaluation results.
Step9: Evaluating the best policy
Let's load the network that collected the highest reward per game episode | <ASSISTANT_TASK:>
Python Code:
from chimp.memories import ReplayMemoryHDF5
from chimp.learners.dqn_learner import DQNLearner
from chimp.learners.chainer_backend import ChainerBackend
from chimp.simulators.atari import AtariSimulator
from chimp.agents import DQNAgent
Explanation: Training DeepMind's Atari DQN with Chimp
Load Chimp modules
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import random
import chainer
import chainer.functions as F
import chainer.links as L
from chainer import Chain
import os
import pandas as ps
Explanation: Load Python packages
End of explanation
settings = {
# agent settings
'batch_size' : 32,
'print_every' : 5000,
'save_dir' : './results_atari',
'iterations' : 5000000,
'eval_iterations' : 5000,
'eval_every' : 50000,
'save_every' : 50000,
'initial_exploration' : 50000,
'epsilon_decay' : 0.000005, # subtract from epsilon every step
'eval_epsilon' : 0.05, # epsilon used in evaluation, 0 means no random actions
'epsilon' : 1.0, # Initial exploratoin rate
'learn_freq' : 4,
'history_sizes' : (4, 0, 0), # sizes of histories to use as nn inputs (o, a, r)
'model_dims' : (84,84),
# Atari settings
'rom' : "Breakout.bin",
'rom_dir' : './roms',
'pad' : 15, # padding parameter - for image cropping - only along the length of the image, to obtain a square
'action_history' : True,
# simulator settings
'viz' : True,
'viz_cropped' : False,
# replay memory settings
'memory_size' : 1000000, # size of replay memory
'frame_skip' : 4, # number of frames to skip
# learner settings
'learning_rate' : 0.00025,
'decay_rate' : 0.95, # decay rate for RMSprop, otherwise not used
'discount' : 0.99, # discount rate for RL
'clip_err' : False, # value to clip loss gradients to
'clip_reward' : 1, # value to clip reward values to
'target_net_update' : 10000, # update the update-generating target net every fixed number of iterations
'optim_name' : 'RMSprop', # currently supports "RMSprop", "ADADELTA", "ADAM" and "SGD"'
'gpu' : True,
'reward_rescale': False,
# general
'seed_general' : 1723,
'seed_simulator' : 5632,
'seed_agent' : 9826,
'seed_memory' : 7563
}
Explanation: Set training parameters
End of explanation
# set random seed
np.random.seed(settings["seed_general"])
random.seed(settings["seed_general"])
Explanation: You may want to set a smaller number of iterations (like 100000) - for illustration purposes. We set the GPU option to True; turn it off if your machine does not support it. Be sure to have the requested rom in the indicated directory.
End of explanation
simulator = AtariSimulator(settings)
Explanation: Now we initialize the simulator first, as we need to use some information it provides - e.g., number of actions.
End of explanation
#Define the network
class Convolution(Chain):
def __init__(self):
super(Convolution, self).__init__(
l1=F.Convolution2D(settings['history_sizes'][0], 32, ksize=8, stride=4, nobias=False, wscale=np.sqrt(2)),
l2=F.Convolution2D(32, 64, ksize=4, stride=2, nobias=False, wscale=np.sqrt(2)),
l3=F.Convolution2D(64, 64, ksize=3, stride=1, nobias=False, wscale=np.sqrt(2)),
l4=F.Linear(3136, 512, wscale = np.sqrt(2)),
l5=F.Linear(512, simulator.n_actions, wscale = np.sqrt(2)),
)
def __call__(self, ohist, ahist):
if len(ohist.data.shape) < 4:
ohist = F.reshape(ohist,(1,4,84,84))
h1 = F.relu(self.l1(ohist/255.0))
h2 = F.relu(self.l2(h1))
h3 = F.relu(self.l3(h2))
h4 = F.relu(self.l4(h3))
output = self.l5(h4)
return output
net = Convolution()
Explanation: Here we define the convolutional network, in a format required by Chainer - the deep learning library we use.
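As an informal sanity check on the layer sizes (assuming the usual no-padding convolution arithmetic), the 3136 inputs of l4 come from pushing an 84x84 frame through the three convolutions:

```python
def conv_out(size, ksize, stride):
    # Spatial output size of a convolution with no padding.
    return (size - ksize) // stride + 1

s = conv_out(84, 8, 4)   # 20 after l1
s = conv_out(s, 4, 2)    # 9 after l2
s = conv_out(s, 3, 1)    # 7 after l3
print(64 * s * s)        # 3136 flattened features feeding l4
```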
End of explanation
backend = ChainerBackend(settings)
backend.set_net(net)
learner = DQNLearner(settings, backend)
memory = ReplayMemoryHDF5(settings)
agent = DQNAgent(learner, memory, simulator, settings)
Explanation: We then initialize the learner + chainer backend, replay memory, and agent modules.
End of explanation
agent.train()
Explanation: Now let the agent train.
End of explanation
train_stats = ps.read_csv('%s/training_history.csv' % settings['save_dir'],delimiter=' ',header=None)
train_stats.columns = ['Iteration','MSE Loss','Average Q-Value']
eval_stats = ps.read_csv('%s/evaluation_history.csv' % settings['save_dir'],delimiter=' ',header=None)
eval_stats.columns = ['Iteration','Total Reward','Reward per Episode']
plt.plot(eval_stats['Iteration'], eval_stats['Reward per Episode'])
plt.xlabel("Iteration")
plt.ylabel("Avg. Reward per Episode")
plt.grid(True)
#plt.savefig(settings['save_dir'] + '_' + "evaluation_reward.svg", bbox_inches='tight')
plt.show()
plt.close()
plt.plot(train_stats['Iteration'], train_stats['Average Q-Value'])
plt.xlabel("Iteration")
plt.ylabel("Avg. Q-Values")
plt.grid(True)
#plt.savefig(settings['save_dir'] + '_' + "training_q_values.svg", bbox_inches='tight')
plt.show()
plt.close()
Explanation: Visualizing results
First, let's visualize the training and evaluation results.
End of explanation
best_iteration_index = np.argmax(eval_stats['Reward per Episode'])
best_iteration = str(int(eval_stats['Iteration'][best_iteration_index]))
best_iteration
agent.learner.load_net(settings['save_dir']+'/net_' + best_iteration + '.p')
r_tot, r_per_episode, runtime = agent.simulate(10000, epsilon=0.05, viz=True)
r_per_episode
Explanation: Evaluating the best policy
Let's load the network that collected the highest reward per game episode
End of explanation
<END_TASK> |
15,907 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Copyright 2020 Google LLC.
Licensed under the Apache License, Version 2.0 (the 'License');
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https
Step1: Pairwise Regression Fairness
We will be training a linear scoring function $f(x) = w^\top x$ where $x \in \mathbb{R}^d$ is the input feature vector. Our goal is to train the regression model subject to pairwise fairness constraints.
Specifically, for the regression model $f$, we denote
Step2: Evaluation Metrics
We will need functions to convert labeled data into paired data.
Step3: We will also need functions to evaluate the pairwise error rates for a linear model.
Step4: Create Linear Model
We then write a function to create the linear scoring model.
Step5: Formulate Optimization Problem
We are ready to formulate the constrained optimization problem using the TFCO library.
Step6: Train Model
The following function then trains the linear model by solving the above constrained optimization problem. We first provide a training function with minibatch gradient updates. There are three types of pairwise fairness criterion we handle (specified by 'constraint_type'), and assign the (pos_group, neg_group) pairs to compare accordingly.
Step7: Summarize and Plot Results
Having trained a model, we will need functions to summarize the various evaluation metrics.
Step8: Experimental Results
We now run experiments with two types of pairwise fairness criteria
Step9: (1) Marginal Equal Opportunity
For a scoring model $f
Step10: (2) Pairwise Equal Opportunity
Recall that we denote
$err_{i,j}(f)$ as the pairwise error over example pairs where the higher label example is from group $i$, and the lower label example is from group $j$.
$$
err_{i, j}(f) ~=~ \mathbf{E}\big[\mathbb{I}\big(f(x) < f(x')\big) \,\big|\, y > y',~ grp(x) = i, ~grp(x') = j\big]
$$
We first constrain only the cross-group errors, highlighted below.
<br>
<table border='1' bordercolor='black'>
<tr >
<td bgcolor='white'> </td>
<td bgcolor='white'> </td>
<td bgcolor='white' colspan=2 align=center><b>Negative</b></td>
</tr>
<tr>
<td bgcolor='white'></td>
<td bgcolor='white'></td>
<td>Group 0</td>
<td>Group 1</td>
</tr>
<tr>
<td bgcolor='white' rowspan=2><b>Positive</b></td>
<td bgcolor='white'>Group 0</td>
<td bgcolor='white'>$err_{0,0}$</td>
<td bgcolor='white'>$\mathbf{err_{0,1}}$</td>
</tr>
<tr>
<td>Group 1</td>
<td bgcolor='white'>$\mathbf{err_{1,0}}$</td>
<td bgcolor='white'>$err_{1,1}$</td>
</tr>
</table>
<br>
The optimization problem we solve constrains the cross-group pairwise errors to be similar | <ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import random
import sys
from sklearn import model_selection
import tensorflow as tf
!pip install git+https://github.com/google-research/tensorflow_constrained_optimization
import tensorflow_constrained_optimization as tfco
Explanation: Copyright 2020 Google LLC.
Licensed under the Apache License, Version 2.0 (the 'License');
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an 'AS IS' BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
This colab contains TensorFlow code for implementing the constrained optimization methods presented in the paper:
Harikrishna Narasimhan, Andrew Cotter, Maya Gupta, Serena Wang, 'Pairwise Fairness for Ranking and Regression', AAAI 2020. [<a href='https://arxiv.org/pdf/1906.05330.pdf'>link</a>]
First, let's install and import the relevant libraries.
End of explanation
# We will divide the data into 25 minibatches and refer to them as 'queries'.
num_queries = 25
# List of column names in the dataset.
column_names = ["state", "county", "community", "communityname", "fold", "population", "householdsize", "racepctblack", "racePctWhite", "racePctAsian", "racePctHisp", "agePct12t21", "agePct12t29", "agePct16t24", "agePct65up", "numbUrban", "pctUrban", "medIncome", "pctWWage", "pctWFarmSelf", "pctWInvInc", "pctWSocSec", "pctWPubAsst", "pctWRetire", "medFamInc", "perCapInc", "whitePerCap", "blackPerCap", "indianPerCap", "AsianPerCap", "OtherPerCap", "HispPerCap", "NumUnderPov", "PctPopUnderPov", "PctLess9thGrade", "PctNotHSGrad", "PctBSorMore", "PctUnemployed", "PctEmploy", "PctEmplManu", "PctEmplProfServ", "PctOccupManu", "PctOccupMgmtProf", "MalePctDivorce", "MalePctNevMarr", "FemalePctDiv", "TotalPctDiv", "PersPerFam", "PctFam2Par", "PctKids2Par", "PctYoungKids2Par", "PctTeen2Par", "PctWorkMomYoungKids", "PctWorkMom", "NumIlleg", "PctIlleg", "NumImmig", "PctImmigRecent", "PctImmigRec5", "PctImmigRec8", "PctImmigRec10", "PctRecentImmig", "PctRecImmig5", "PctRecImmig8", "PctRecImmig10", "PctSpeakEnglOnly", "PctNotSpeakEnglWell", "PctLargHouseFam", "PctLargHouseOccup", "PersPerOccupHous", "PersPerOwnOccHous", "PersPerRentOccHous", "PctPersOwnOccup", "PctPersDenseHous", "PctHousLess3BR", "MedNumBR", "HousVacant", "PctHousOccup", "PctHousOwnOcc", "PctVacantBoarded", "PctVacMore6Mos", "MedYrHousBuilt", "PctHousNoPhone", "PctWOFullPlumb", "OwnOccLowQuart", "OwnOccMedVal", "OwnOccHiQuart", "RentLowQ", "RentMedian", "RentHighQ", "MedRent", "MedRentPctHousInc", "MedOwnCostPctInc", "MedOwnCostPctIncNoMtg", "NumInShelters", "NumStreet", "PctForeignBorn", "PctBornSameState", "PctSameHouse85", "PctSameCity85", "PctSameState85", "LemasSwornFT", "LemasSwFTPerPop", "LemasSwFTFieldOps", "LemasSwFTFieldPerPop", "LemasTotalReq", "LemasTotReqPerPop", "PolicReqPerOffic", "PolicPerPop", "RacialMatchCommPol", "PctPolicWhite", "PctPolicBlack", "PctPolicHisp", "PctPolicAsian", "PctPolicMinor", "OfficAssgnDrugUnits", "NumKindsDrugsSeiz", "PolicAveOTWorked", "LandArea", "PopDens", "PctUsePubTrans", "PolicCars", "PolicOperBudg", "LemasPctPolicOnPatr", "LemasGangUnitDeploy", "LemasPctOfficDrugUn", "PolicBudgPerPop", "ViolentCrimesPerPop"]
dataset_url = "http://archive.ics.uci.edu/ml/machine-learning-databases/communities/communities.data"
# Read dataset from the UCI web repository and assign column names.
data_df = pd.read_csv(dataset_url, sep=",", names=column_names,
na_values="?")
# Make sure that there are no missing values in the "ViolentCrimesPerPop" column.
assert(not data_df["ViolentCrimesPerPop"].isna().any())
# Real-valued label: "ViolentCrimesPerPop".
labels_df = data_df["ViolentCrimesPerPop"]
# Now that we have extracted the real-valued label,
# we drop the "ViolentCrimesPerPop" column from the data frame.
data_df.drop(columns="ViolentCrimesPerPop", inplace=True)
# Group features.
race_black_70_percentile = data_df["racepctblack"].quantile(q=0.7)
groups_df = (data_df["racepctblack"] >= race_black_70_percentile)
# Drop categorical features.
data_df.drop(columns=["state", "county", "community", "communityname", "fold"],
inplace=True)
# Handle missing features.
feature_names = data_df.columns
for feature_name in feature_names:
missing_rows = data_df[feature_name].isna() # Which rows have missing values?
if missing_rows.any(): # Check if at least one row has a missing value.
data_df[feature_name].fillna(0.0, inplace=True) # Fill NaN with 0.
missing_rows.rename(feature_name + "_is_missing", inplace=True)
data_df = data_df.join(missing_rows) # Append boolean "is_missing" feature.
labels = labels_df.values.astype(np.float32)
groups = groups_df.values.astype(np.float32)
features = data_df.values.astype(np.float32)
# Set random seed so that the results are reproducible.
np.random.seed(123456)
# We randomly divide the examples into 'num_queries' queries.
queries = np.random.randint(0, num_queries, size=features.shape[0])
# Train and test indices.
train_indices, test_indices = model_selection.train_test_split(
range(features.shape[0]), test_size=0.4)
# Train features, labels and protected groups.
train_set = {
'features': features[train_indices, :],
'labels': labels[train_indices],
'groups': groups[train_indices],
'queries': queries[train_indices],
'dimension': features.shape[-1],
'num_queries': num_queries
}
# Test features, labels and protected groups.
test_set = {
'features': features[test_indices, :],
'labels': labels[test_indices],
'groups': groups[test_indices],
'queries': queries[test_indices],
'dimension': features.shape[-1],
'num_queries': num_queries
}
Explanation: Pairwise Regression Fairness
We will be training a linear scoring function $f(x) = w^\top x$ where $x \in \mathbb{R}^d$ is the input feature vector. Our goal is to train the regression model subject to pairwise fairness constraints.
Specifically, for the regression model $f$, we denote:
- $sqerr(f)$ as the squared error for model $f$.
$$
sqerr(f) = \mathbf{E}\big[\big(f(x) - y\big)^2\big]
$$
$err_{i,j}(f)$ as the pairwise error over example pairs where the higher label example is from group $i$, and the lower label example is from group $j$.
$$
err_{i, j}(f) = \mathbf{E}\big[\mathbb{I}\big(f(x) < f(x')\big) \,\big|\, y > y',~ grp(x) = i, ~grp(x') = j\big]
$$
<br>
We then wish to solve the following constrained problem:
$$min_f\; sqerr(f)$$
$$\text{ s.t. } |err_{i,j}(f) - err_{k,\ell}(f)| \leq \epsilon \;\;\; \forall ((i,j), (k,\ell)) \in \mathcal{G},$$
where $\mathcal{G}$ contains the pairs we are interested in constraining.
Load Communities & Crime Data
We will use the benchmark Communities and Crime dataset from the UCI Machine Learning repository for our illustration. This dataset contains various demographic and racial distribution details (aggregated from census and law enforcement data sources) about different communities in the US, along with the per capita crime rate in each community. Our goal is to predict the crime rate for a community, a regression problem. We consider communities where the percentage of black population is above the 70th percentile as the protected group.
End of explanation
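To make the definitions above concrete, here is a small NumPy sketch (toy numbers, not the Communities and Crime data; the helper pairwise_error is only for illustration) that computes the squared error and an empirical pairwise error $err_{i,j}$ directly from scores, labels and group labels:
```
import numpy as np

scores = np.array([0.2, 0.9, 0.4, 0.7, 0.1])  # model scores f(x)
labels = np.array([0.1, 0.8, 0.5, 0.6, 0.3])  # true labels y
groups = np.array([0, 1, 0, 1, 0])            # group memberships

sqerr = np.mean((scores - labels) ** 2)       # E[(f(x) - y)^2]

def pairwise_error(scores, labels, groups, i, j):
    # Fraction of pairs with y > y', grp(x) = i, grp(x') = j that are mis-ordered.
    errors, count = 0, 0
    for a in range(len(labels)):
        for b in range(len(labels)):
            if labels[a] > labels[b] and groups[a] == i and groups[b] == j:
                count += 1
                errors += scores[a] < scores[b]
    return errors / count if count else np.nan

print(sqerr, pairwise_error(scores, labels, groups, 0, 1),
      pairwise_error(scores, labels, groups, 1, 0))
```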
def pair_high_low_docs(data):
# Returns a DataFrame of pairs of larger-smaller labeled regression examples
# given in DataFrame.
# For all pairs of docs, and remove rows that are not needed.
pos_docs = data.copy()
neg_docs = data.copy()
# Include a merge key.
pos_docs.insert(0, "merge_key", 0)
neg_docs.insert(0, "merge_key", 0)
# Merge docs and drop merge key and label column.
pairs = pos_docs.merge(neg_docs, on="merge_key", how="outer",
suffixes=("_pos", "_neg"))
# Only retain rows where label_pos > label_neg.
pairs = pairs[pairs.label_pos > pairs.label_neg]
# Drop merge_key.
pairs.drop(columns=["merge_key"], inplace=True)
return pairs
def convert_labeled_to_paired_data(data_dict, index=None):
# Forms pairs of examples from each batch/query.
# Converts data arrays to pandas DataFrame with required column names and
# makes a call to convert_df_to_pairs and returns a dictionary.
features = data_dict['features']
labels = data_dict['labels']
groups = data_dict['groups']
queries = data_dict['queries']
if index is not None:
data_df = pd.DataFrame(features[queries == index, :])
data_df = data_df.assign(label=pd.DataFrame(labels[queries == index]))
data_df = data_df.assign(group=pd.DataFrame(groups[queries == index]))
data_df = data_df.assign(query_id=pd.DataFrame(queries[queries == index]))
else:
data_df = pd.DataFrame(features)
data_df = data_df.assign(label=pd.DataFrame(labels))
data_df = data_df.assign(group=pd.DataFrame(groups))
data_df = data_df.assign(query_id=pd.DataFrame(queries))
# Forms pairs of positive-negative docs for each query in given DataFrame
# if the DataFrame has a query_id column. Otherise forms pairs from all rows
# of the DataFrame.
data_pairs = data_df.groupby('query_id').apply(pair_high_low_docs)
# Create groups ndarray.
pos_groups = data_pairs['group_pos'].values.reshape(-1, 1)
neg_groups = data_pairs['group_neg'].values.reshape(-1, 1)
group_pairs = np.concatenate((pos_groups, neg_groups), axis=1)
# Create queries ndarray.
query_pairs = data_pairs['query_id_pos'].values.reshape(-1,)
# Create features ndarray.
feature_names = data_df.columns
feature_names = feature_names.drop(['query_id', 'label'])
feature_names = feature_names.drop(['group'])
pos_features = data_pairs[[str(s) + '_pos' for s in feature_names]].values
pos_features = pos_features.reshape(-1, 1, len(feature_names))
neg_features = data_pairs[[str(s) + '_neg' for s in feature_names]].values
neg_features = neg_features.reshape(-1, 1, len(feature_names))
feature_pairs = np.concatenate((pos_features, neg_features), axis=1)
# Paired data dict.
paired_data = {
'feature_pairs': feature_pairs,
'group_pairs': group_pairs,
'query_pairs': query_pairs,
'features': features,
'labels': labels,
'queries': queries,
'dimension': data_dict['dimension'],
'num_queries': data_dict['num_queries']
}
return paired_data
Explanation: Evaluation Metrics
We will need functions to convert labeled data into paired data.
End of explanation
def get_mask(groups, pos_group, neg_group=None):
# Returns a boolean mask selecting positive-negative document pairs where
# the protected group for the positive document is pos_group and
# the protected group for the negative document (if specified) is neg_group.
# Repeat group membership positive docs as many times as negative docs.
mask_pos = groups[:, 0] == pos_group
if neg_group is None:
return mask_pos
else:
mask_neg = groups[:, 1] == neg_group
return mask_pos & mask_neg
def mean_squared_error(model, dataset):
# Returns mean squared error for Keras model on dataset.
scores = model.predict(dataset['features'])
labels = dataset['labels']
return np.mean((scores - labels) ** 2)
def group_error_rate(model, dataset, pos_group, neg_group=None):
# Returns error rate for Keras model on data set, considering only document
# pairs where the protected group for the positive document is pos_group, and
# the protected group for the negative document (if specified) is neg_group.
d = dataset['dimension']
scores0 = model.predict(dataset['feature_pairs'][:, 0, :].reshape(-1, d))
scores1 = model.predict(dataset['feature_pairs'][:, 1, :].reshape(-1, d))
mask = get_mask(dataset['group_pairs'], pos_group, neg_group)
diff = scores0 - scores1
diff = diff[mask > 0].reshape((-1))
return np.mean(diff < 0)
Explanation: We will also need functions to evaluate the pairwise error rates for a linear model.
End of explanation
def create_scoring_model(feature_pairs, features, dimension):
# Returns a linear Keras scoring model, and returns a nullary function
# returning predictions on the features.
# Linear scoring model with no hidden layers.
layers = []
# Input layer takes `dimension` inputs.
layers.append(tf.keras.Input(shape=(dimension,)))
layers.append(tf.keras.layers.Dense(1))
scoring_model = tf.keras.Sequential(layers)
  # Create a nullary function that applies the linear model to the
  # feature pairs and returns the tensor of prediction differences on pairs.
def prediction_diffs():
scores0 = scoring_model(feature_pairs()[:, 0, :].reshape(-1, dimension))
scores1 = scoring_model(feature_pairs()[:, 1, :].reshape(-1, dimension))
return scores0 - scores1
# Create a nullary function that returns the predictions on individual
# examples.
predictions = lambda: scoring_model(features())
return scoring_model, prediction_diffs, predictions
Explanation: Create Linear Model
We then write a function to create the linear scoring model.
End of explanation
def group_mask_fn(groups, pos_group, neg_group=None):
# Returns a nullary function returning group mask.
group_mask = lambda: np.reshape(
get_mask(groups(), pos_group, neg_group), (-1))
return group_mask
def formulate_problem(
feature_pairs, group_pairs, features, labels, dimension,
constraint_groups=[], constraint_slack=None):
# Formulates a constrained problem that optimizes the squared error for a linear
# model on the specified dataset, subject to pairwise fairness constraints
# specified by the constraint_groups and the constraint_slack.
#
# Args:
# feature_pairs: Nullary function returning paired features
# group_pairs: Nullary function returning paired groups
# features: Nullary function returning features
# labels: Nullary function returning labels
# dimension: Input dimension for scoring model
# constraint_groups: List containing tuples of the form
# ((pos_group0, neg_group0), (pos_group1, neg_group1)), specifying the
# group memberships for the document pairs to compare in the constraints.
# constraint_slack: slackness '\epsilon' allowed in the constraints.
# Returns:
# A RateMinimizationProblem object, and a Keras scoring model.
# Create linear scoring model: we get back a Keras model and a nullary
# function returning predictions on the features.
scoring_model, prediction_diffs, predictions = create_scoring_model(
feature_pairs, features, dimension)
# Context for the optimization objective.
context = tfco.rate_context(prediction_diffs)
# Squared loss objective.
squared_loss = lambda: tf.reduce_mean((predictions() - labels()) ** 2)
# Constraint set.
constraint_set = []
# Context for the constraints.
for ((pos_group0, neg_group0), (pos_group1, neg_group1)) in constraint_groups:
# Context for group 0.
group_mask0 = group_mask_fn(group_pairs, pos_group0, neg_group0)
context_group0 = context.subset(group_mask0)
# Context for group 1.
group_mask1 = group_mask_fn(group_pairs, pos_group1, neg_group1)
context_group1 = context.subset(group_mask1)
# Add constraints to constraint set.
constraint_set.append(
tfco.negative_prediction_rate(context_group0) <= (
tfco.negative_prediction_rate(context_group1) + constraint_slack))
constraint_set.append(
tfco.negative_prediction_rate(context_group1) <= (
tfco.negative_prediction_rate(context_group0) + constraint_slack))
# Formulate constrained minimization problem.
problem = tfco.RateMinimizationProblem(
tfco.wrap_rate(squared_loss), constraint_set)
return problem, scoring_model
Explanation: Formulate Optimization Problem
We are ready to formulate the constrained optimization problem using the TFCO library.
End of explanation
def train_model(train_set, params):
# Trains the model with stochastic updates (one query per updates).
#
# Args:
# train_set: Dictionary of "paired" training data.
# params: Dictionary of hyper-paramters for training.
#
# Returns:
# Trained model, list of objectives, list of group constraint violations.
# Set random seed for reproducibility.
random.seed(333333)
np.random.seed(121212)
tf.random.set_seed(212121)
# Set up problem and model.
if params['constrained']:
# Constrained optimization.
if params['constraint_type'] == 'marginal_equal_opportunity':
constraint_groups = [((0, None), (1, None))]
elif params['constraint_type'] == 'cross_group_equal_opportunity':
constraint_groups = [((0, 1), (1, 0))]
else:
constraint_groups = [((0, 1), (1, 0)), ((0, 0), (1, 1))]
else:
# Unconstrained optimization.
constraint_groups = []
# Dictionary that will hold batch features pairs, group pairs and labels for
# current batch. We include one query per-batch.
paired_batch = {}
batch_index = 0 # Index of current query.
# Data functions.
feature_pairs = lambda: paired_batch['feature_pairs']
group_pairs = lambda: paired_batch['group_pairs']
features = lambda: paired_batch['features']
labels = lambda: paired_batch['labels']
# Create scoring model and constrained optimization problem.
problem, scoring_model = formulate_problem(
feature_pairs, group_pairs, features, labels, train_set['dimension'],
constraint_groups, params['constraint_slack'])
# Create a loss function for the problem.
lagrangian_loss, update_ops, multipliers_variables = (
tfco.create_lagrangian_loss(problem, dual_scale=params['dual_scale']))
# Create optimizer
optimizer = tf.keras.optimizers.Adagrad(learning_rate=params['learning_rate'])
# List of trainable variables.
var_list = (
scoring_model.trainable_weights + problem.trainable_variables +
[multipliers_variables])
# List of objectives, group constraint violations.
# violations, and snapshot of models during course of training.
objectives = []
group_violations = []
models = []
feature_pair_batches = train_set['feature_pairs']
group_pair_batches = train_set['group_pairs']
query_pairs = train_set['query_pairs']
feature_batches = train_set['features']
label_batches = train_set['labels']
queries = train_set['queries']
print()
# Run loops * iterations_per_loop full batch iterations.
for ii in range(params['loops']):
for jj in range(params['iterations_per_loop']):
# Populate paired_batch dict with all pairs for current query. The batch
# index is the same as the current query index.
paired_batch = {
'feature_pairs': feature_pair_batches[query_pairs == batch_index],
'group_pairs': group_pair_batches[query_pairs == batch_index],
'features': feature_batches[queries == batch_index],
'labels': label_batches[queries == batch_index]
}
# Optimize loss.
update_ops()
optimizer.minimize(lagrangian_loss, var_list=var_list)
# Update batch_index, and cycle back once last query is reached.
batch_index = (batch_index + 1) % train_set['num_queries']
# Snap shot current model.
model_copy = tf.keras.models.clone_model(scoring_model)
model_copy.set_weights(scoring_model.get_weights())
models.append(model_copy)
# Evaluate metrics for snapshotted model.
error, gerr, group_viol = evaluate_results(
scoring_model, train_set, params)
objectives.append(error)
group_violations.append(
[x - params['constraint_slack'] for x in group_viol])
sys.stdout.write(
'\r Loop %d: error = %.3f, max constraint violation = %.3f' %
(ii, objectives[-1], max(group_violations[-1])))
print()
if params['constrained']:
# Find model iterate that trades-off between objective and group violations.
best_index = tfco.find_best_candidate_index(
np.array(objectives), np.array(group_violations), rank_objectives=False)
else:
# Find model iterate that achieves lowest objective.
best_index = np.argmin(objectives)
return models[best_index]
Explanation: Train Model
The following function then trains the linear model by solving the above constrained optimization problem. We first provide a training function with minibatch gradient updates. There are three types of pairwise fairness criteria we handle (specified by 'constraint_type'), and we assign the (pos_group, neg_group) pairs to compare accordingly.
End of explanation
def evaluate_results(model, test_set, params):
  # Returns squared error, group error rates, group-level constraint violations.
if params['constraint_type'] == 'marginal_equal_opportunity':
g0_error = group_error_rate(model, test_set, 0)
g1_error = group_error_rate(model, test_set, 1)
group_violations = [g0_error - g1_error, g1_error - g0_error]
return (mean_squared_error(model, test_set), [g0_error, g1_error],
group_violations)
else:
g00_error = group_error_rate(model, test_set, 0, 0)
g01_error = group_error_rate(model, test_set, 0, 1)
    g10_error = group_error_rate(model, test_set, 1, 0)
g11_error = group_error_rate(model, test_set, 1, 1)
group_violations_offdiag = [g01_error - g10_error, g10_error - g01_error]
group_violations_diag = [g00_error - g11_error, g11_error - g00_error]
if params['constraint_type'] == 'cross_group_equal_opportunity':
return (mean_squared_error(model, test_set),
[[g00_error, g01_error], [g10_error, g11_error]],
group_violations_offdiag)
else:
return (mean_squared_error(model, test_set),
[[g00_error, g01_error], [g10_error, g11_error]],
group_violations_offdiag + group_violations_diag)
def display_results(
model, test_set, params, method, error_type, show_header=False):
# Prints evaluation results for model on test data.
error, group_error, diffs = evaluate_results(model, test_set, params)
if params['constraint_type'] == 'marginal_equal_opportunity':
if show_header:
print('\nMethod\t\t\tError\t\tMSE\t\tGroup 0\t\tGroup 1\t\tDiff')
print('%s\t%s\t\t%.3f\t\t%.3f\t\t%.3f\t\t%.3f' % (
method, error_type, error, group_error[0], group_error[1],
np.max(diffs)))
elif params['constraint_type'] == 'cross_group_equal_opportunity':
if show_header:
print('\nMethod\t\t\tError\t\tMSE\t\tGroup 0/1\tGroup 1/0\tDiff')
print('%s\t%s\t\t%.3f\t\t%.3f\t\t%.3f\t\t%.3f' % (
method, error_type, error, group_error[0][1], group_error[1][0],
np.max(diffs)))
else:
if show_header:
      print('\nMethod\t\t\tError\t\tMSE\t\tGroup 0/1\tGroup 1/0\t' +
'Group 0/0\tGroup 1/1\tDiff')
print('%s\t%s\t\t%.3f\t\t%.3f\t\t%.3f\t\t%.3f\t\t%.3f\t\t%.3f' % (
method, error_type, error, group_error[0][1], group_error[1][0],
group_error[0][0], group_error[1][1], np.max(diffs)))
Explanation: Summarize and Plot Results
Having trained a model, we will need functions to summarize the various evaluation metrics.
End of explanation
# Convert train/test set to paired data for later evaluation.
paired_train_set = convert_labeled_to_paired_data(train_set)
paired_test_set = convert_labeled_to_paired_data(test_set)
Explanation: Experimental Results
We now run experiments with two types of pairwise fairness criteria: (1) marginal_equal_opportunity and (2) pairwise equal opportunity. In each case, we compare an unconstrained model trained to optimize just the squared error and a constrained model trained with pairwise fairness constraints.
End of explanation
# Model hyper-parameters.
model_params = {
'loops': 10,
'iterations_per_loop': 250,
'learning_rate': 0.1,
'constraint_type': 'marginal_equal_opportunity',
'constraint_slack': 0.02,
'dual_scale': 1.0}
# Unconstrained optimization.
model_params['constrained'] = False
model_unc = train_model(paired_train_set, model_params)
display_results(model_unc, paired_train_set, model_params, 'Unconstrained ',
'Train', show_header=True)
display_results(model_unc, paired_test_set, model_params, 'Unconstrained ',
'Test')
# Constrained optimization with TFCO.
model_params['constrained'] = True
model_con = train_model(paired_train_set, model_params)
display_results(model_con, paired_train_set, model_params, 'Constrained ',
'Train', show_header=True)
display_results(model_con, paired_test_set, model_params, 'Constrained ',
'Test')
Explanation: (1) Marginal Equal Opportunity
For a scoring model $f: \mathbb{R}^d \rightarrow \mathbb{R}$, recall:
- $sqerr(f)$ as the squared error for scoring function $f$.
and we additionally define:
$err_i(f)$ as the row-marginal pairwise error over example pairs where the higher label example is from group $i$, and the lower label example is from either group
$$
err_i(f) = \mathbf{E}\big[\mathbb{I}\big(f(x) < f(x')\big) \,\big|\, y > y',~ grp(x) = i\big]
$$
The constrained optimization problem we solve constrains the row-marginal pairwise errors to be similar:
$$min_f\;sqerr(f)$$
$$\text{s.t. }\;|err_0(f) - err_1(f)| \leq 0.02$$
End of explanation
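As a sketch of how the constraint can be checked after training (this assumes the cells above have been run, so that model_con, paired_test_set and group_error_rate exist), the two marginal errors can be compared directly:
```
# Marginal pairwise errors err_0 and err_1 on the paired test data;
# neg_group=None gives the row-marginal version used in this section.
err_0 = group_error_rate(model_con, paired_test_set, 0)
err_1 = group_error_rate(model_con, paired_test_set, 1)
print('err_0 = %.3f, err_1 = %.3f, |gap| = %.3f (slack = 0.02)'
      % (err_0, err_1, abs(err_0 - err_1)))
```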
# Model hyper-parameters.
model_params = {
'loops': 10,
'iterations_per_loop': 250,
'learning_rate': 0.1,
'constraint_type': 'cross_group_equal_opportunity',
'constraint_slack': 0.02,
'dual_scale': 1.0}
# Unconstrained optimization.
model_params['constrained'] = False
model_unc = train_model(paired_train_set, model_params)
display_results(model_unc, paired_train_set, model_params, 'Unconstrained ',
'Train', show_header=True)
display_results(model_unc, paired_test_set, model_params, 'Unconstrained ',
'Test')
# Constrained optimization with TFCO.
model_params['constrained'] = True
model_con = train_model(paired_train_set, model_params)
display_results(model_con, paired_train_set, model_params, 'Constrained ',
'Train', show_header=True)
display_results(model_con, paired_test_set, model_params, 'Constrained ',
'Test')
Explanation: (2) Pairwise Equal Opportunity
Recall that we denote
$err_{i,j}(f)$ as the pairwise error over example pairs where the higher label example is from group $i$, and the lower label example is from group $j$.
$$
err_{i, j}(f) ~=~ \mathbf{E}\big[\mathbb{I}\big(f(x) < f(x')\big) \,\big|\, y > y',~ grp(x) = i, ~grp(x') = j\big]
$$
We first constrain only the cross-group errors, highlighted below.
<br>
<table border='1' bordercolor='black'>
<tr >
<td bgcolor='white'> </td>
<td bgcolor='white'> </td>
<td bgcolor='white' colspan=2 align=center><b>Negative</b></td>
</tr>
<tr>
<td bgcolor='white'></td>
<td bgcolor='white'></td>
<td>Group 0</td>
<td>Group 1</td>
</tr>
<tr>
<td bgcolor='white' rowspan=2><b>Positive</b></td>
<td bgcolor='white'>Group 0</td>
<td bgcolor='white'>$err_{0,0}$</td>
<td bgcolor='white'>$\mathbf{err_{0,1}}$</td>
</tr>
<tr>
<td>Group 1</td>
<td bgcolor='white'>$\mathbf{err_{1,0}}$</td>
<td bgcolor='white'>$err_{1,1}$</td>
</tr>
</table>
<br>
The optimization problem we solve constrains the cross-group pairwise errors to be similar:
$$min_f\; sqerr(f)$$
$$\text{s.t. }\;\; |err_{0,1}(f) - err_{1,0}(f)| \leq 0.02$$
End of explanation
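For completeness, a short sketch (same assumptions as above) that fills in the full 2x2 matrix of pairwise errors from the table, so that both the cross-group and same-group entries can be inspected:
```
import numpy as np

# err[i, j]: higher-label example from group i, lower-label example from group j.
err = np.zeros((2, 2))
for i in range(2):
    for j in range(2):
        err[i, j] = group_error_rate(model_con, paired_test_set, i, j)
print(err)
print('cross-group gap |err_01 - err_10| =', abs(err[0, 1] - err[1, 0]))
```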
<END_TASK> |
15,908 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Whitening evoked data with a noise covariance
Evoked data are loaded and then whitened using a given noise covariance
matrix. It's an excellent quality check to see if baseline signals match
the assumption of Gaussian white noise during the baseline period.
Covariance estimation and diagnostic plots are based on [1]_.
References
.. [1] Engemann D. and Gramfort A. (2015) Automated model selection in
covariance estimation and spatial whitening of MEG and EEG signals, vol.
108, 328-342, NeuroImage.
Step1: Set parameters
Step2: Compute covariance using automated regularization
Step3: Show the evoked data
Step4: We can then show whitening for our various noise covariance estimates.
Here we should look to see if baseline signals match the
assumption of Gaussian white noise. we expect values centered at
0 within 2 standard deviations for 95% of the time points.
For the Global field power we expect a value of 1. | <ASSISTANT_TASK:>
Python Code:
# Authors: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
# Denis A. Engemann <denis.engemann@gmail.com>
#
# License: BSD (3-clause)
import mne
from mne import io
from mne.datasets import sample
from mne.cov import compute_covariance
print(__doc__)
Explanation: Whitening evoked data with a noise covariance
Evoked data are loaded and then whitened using a given noise covariance
matrix. It's an excellent quality check to see if baseline signals match
the assumption of Gaussian white noise during the baseline period.
Covariance estimation and diagnostic plots are based on [1]_.
References
.. [1] Engemann D. and Gramfort A. (2015) Automated model selection in
covariance estimation and spatial whitening of MEG and EEG signals, vol.
108, 328-342, NeuroImage.
End of explanation
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
raw = io.read_raw_fif(raw_fname, preload=True)
raw.filter(1, 40, n_jobs=1, fir_design='firwin')
raw.info['bads'] += ['MEG 2443'] # bads + 1 more
events = mne.read_events(event_fname)
# let's look at rare events, button presses
event_id, tmin, tmax = 2, -0.2, 0.5
picks = mne.pick_types(raw.info, meg=True, eeg=True, eog=True, exclude='bads')
reject = dict(mag=4e-12, grad=4000e-13, eeg=80e-6)
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=None, reject=reject, preload=True)
# Uncomment next line to use fewer samples and study regularization effects
# epochs = epochs[:20] # For your data, use as many samples as you can!
Explanation: Set parameters
End of explanation
method_params = dict(diagonal_fixed=dict(mag=0.01, grad=0.01, eeg=0.01))
noise_covs = compute_covariance(epochs, tmin=None, tmax=0, method='auto',
return_estimators=True, verbose=True, n_jobs=1,
projs=None, rank=None,
method_params=method_params)
# With "return_estimator=True" all estimated covariances sorted
# by log-likelihood are returned.
print('Covariance estimates sorted from best to worst')
for c in noise_covs:
print("%s : %s" % (c['method'], c['loglik']))
Explanation: Compute covariance using automated regularization
End of explanation
evoked = epochs.average()
evoked.plot(time_unit='s') # plot evoked response
Explanation: Show the evoked data:
End of explanation
evoked.plot_white(noise_covs, time_unit='s')
Explanation: We can then show whitening for our various noise covariance estimates.
Here we should look to see if baseline signals match the
assumption of Gaussian white noise. We expect values centered at
0 within 2 standard deviations for 95% of the time points.
For the Global field power we expect a value of 1.
End of explanation
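As a rough illustration of what the whitening check means (a synthetic NumPy sketch, not MNE code): if C is the noise covariance, whitening multiplies the baseline data by C^{-1/2}, after which samples should look like unit-variance Gaussian noise and the global field power should hover around 1.
```
import numpy as np

rng = np.random.RandomState(0)
n_channels, n_times = 10, 500
A = rng.randn(n_channels, n_channels)
C = A @ A.T + np.eye(n_channels)                                # synthetic noise covariance
noise = np.linalg.cholesky(C) @ rng.randn(n_channels, n_times)  # correlated "baseline"

evals, evecs = np.linalg.eigh(C)
whitener = evecs @ np.diag(1.0 / np.sqrt(evals)) @ evecs.T      # C^{-1/2}
white = whitener @ noise

print(white.std())                 # close to 1: unit variance after whitening
print(np.mean(white.var(axis=0)))  # global field power close to 1
```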
<END_TASK> |
15,909 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
← Back to Index
NumPy and SciPy
The quartet of NumPy, SciPy, Matplotlib, and IPython is a popular combination in the Python world. We will use each of these libraries in this workshop.
Tutorial
NumPy is one of the most popular libraries for numerical computing in the world. It is used in several disciplines including image processing, finance, bioinformatics, and more. This entire workshop is based upon NumPy and its derivatives.
If you are new to NumPy, follow this NumPy Tutorial.
SciPy is a Python library for scientific computing which builds on top of NumPy. If NumPy is like the Matlab core, then SciPy is like the Matlab toolboxes. It includes support for linear algebra, sparse matrices, spatial data structions, statistics, and more.
While there is a SciPy Tutorial, it isn't critical that you follow it for this workshop.
Special Arrays
Step1: Slicing Arrays
Step2: The optional third parameter indicates the increment value
Step3: If you omit the start index, the slice implicitly starts from zero
Step4: Array Arithmetic
Step5: dot computes the dot product, or inner product, between arrays or matrices.
Step6: Boolean Operations
Step7: Distance Metrics
Step8: The cosine distance measures the angle between two vectors
Step9: Sorting
NumPy arrays have a method, sort, which sorts the array in-place.
Step10: numpy.argsort returns an array of indices, ind, such that x[ind] is a sorted version of x. | <ASSISTANT_TASK:>
Python Code:
print numpy.arange(5)
print numpy.linspace(0, 5, 10, endpoint=False)
print numpy.zeros(5)
print numpy.ones(5)
print numpy.ones((5,2))
print scipy.randn(5) # random Gaussian, zero-mean unit-variance
print scipy.randn(5,2)
Explanation: ← Back to Index
NumPy and SciPy
The quartet of NumPy, SciPy, Matplotlib, and IPython is a popular combination in the Python world. We will use each of these libraries in this workshop.
Tutorial
NumPy is one of the most popular libraries for numerical computing in the world. It is used in several disciplines including image processing, finance, bioinformatics, and more. This entire workshop is based upon NumPy and its derivatives.
If you are new to NumPy, follow this NumPy Tutorial.
SciPy is a Python library for scientific computing which builds on top of NumPy. If NumPy is like the Matlab core, then SciPy is like the Matlab toolboxes. It includes support for linear algebra, sparse matrices, spatial data structures, statistics, and more.
While there is a SciPy Tutorial, it isn't critical that you follow it for this workshop.
Special Arrays
End of explanation
x = numpy.arange(10)
print x[2:4]
print x[-1]
Explanation: Slicing Arrays
End of explanation
print x[0:8:2]
print x[4:2:-1]
Explanation: The optional third parameter indicates the increment value:
End of explanation
print x[:4]
print x[:999]
print x[::-1]
Explanation: If you omit the start index, the slice implicitly starts from zero:
End of explanation
x = numpy.arange(5)
y = numpy.ones(5)
print x+2*y
Explanation: Array Arithmetic
End of explanation
x = scipy.randn(5)
y = numpy.ones(5)
print numpy.dot(x, y)
x = scipy.randn(5,3)
y = numpy.ones((3,2))
print numpy.dot(x, y)
Explanation: dot computes the dot product, or inner product, between arrays or matrices.
End of explanation
x = numpy.arange(10)
print x < 5
y = numpy.ones(10)
print x < y
Explanation: Boolean Operations
End of explanation
from scipy.spatial import distance
print distance.euclidean([0, 0], [3, 4])
print distance.sqeuclidean([0, 0], [3, 4])
print distance.cityblock([0, 0], [3, 4])
print distance.chebyshev([0, 0], [3, 4])
Explanation: Distance Metrics
End of explanation
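To tie these functions back to their formulas, here is a quick, illustrative check that the SciPy results agree with direct NumPy computations:
```
import numpy
a, b = numpy.array([0, 0]), numpy.array([3, 4])
assert numpy.isclose(distance.euclidean(a, b), numpy.sqrt(numpy.sum((a - b) ** 2)))  # 5.0
assert numpy.isclose(distance.sqeuclidean(a, b), numpy.sum((a - b) ** 2))            # 25.0
assert numpy.isclose(distance.cityblock(a, b), numpy.sum(numpy.abs(a - b)))          # 7
assert numpy.isclose(distance.chebyshev(a, b), numpy.max(numpy.abs(a - b)))          # 4
```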
print distance.cosine([67, 0], [89, 0])
print distance.cosine([67, 0], [0, 89])
Explanation: The cosine distance measures the angle between two vectors:
End of explanation
x = scipy.randn(5)
print x
x.sort()
print x
Explanation: Sorting
NumPy arrays have a method, sort, which sorts the array in-place.
End of explanation
x = scipy.randn(5)
print x
ind = numpy.argsort(x)
print ind
print x[ind]
Explanation: numpy.argsort returns an array of indices, ind, such that x[ind] is a sorted version of x.
End of explanation
<END_TASK> |
15,910 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Deep Convolutional Neural Network in TensorFlow
In this notebook, we convert our LeNet-5-inspired, MNIST-classifying, deep convolutional network from Keras to TensorFlow (compare them side by side) following Aymeric Damien's style.
Load dependencies
Step1: Load data
Step2: Set neural network hyperparameters
Step3: Set parameters for each layer
Step4: Define placeholder Tensors for inputs and labels
Step5: Define types of layers
Step6: Design neural network architecture
Step7: Define dictionaries for storing weights and biases for each layer -- and initialize
Step8: Build model
Step9: Define model's loss and its optimizer
Step10: Define evaluation metrics
Step11: Create op for variable initialization
Step12: Train the network in a session (identical to intermediate_net_in_tensorflow.ipynb except addition of display_progress) | <ASSISTANT_TASK:>
Python Code:
import numpy as np
np.random.seed(42)
import tensorflow as tf
tf.set_random_seed(42)
Explanation: Deep Convolutional Neural Network in TensorFlow
In this notebook, we convert our LeNet-5-inspired, MNIST-classifying, deep convolutional network from Keras to TensorFlow (compare them side by side) following Aymeric Damien's style.
Load dependencies
End of explanation
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
Explanation: Load data
End of explanation
epochs = 20
batch_size = 128
display_progress = 40 # after this many batches, output progress to screen
wt_init = tf.contrib.layers.xavier_initializer() # weight initializer
Explanation: Set neural network hyperparameters
End of explanation
# input layer:
n_input = 784
# first convolutional layer:
n_conv_1 = 32
k_conv_1 = 3 # k_size
# second convolutional layer:
n_conv_2 = 64
k_conv_2 = 3
# max pooling layer:
pool_size = 2
mp_layer_dropout = 0.25
# dense layer:
n_dense = 128
dense_layer_dropout = 0.5
# output layer:
n_classes = 10
Explanation: Set parameters for each layer
End of explanation
x = tf.placeholder(tf.float32, [None, n_input])
y = tf.placeholder(tf.float32, [None, n_classes])
Explanation: Define placeholder Tensors for inputs and labels
End of explanation
# dense layer with ReLU activation:
def dense(x, W, b):
z = tf.add(tf.matmul(x, W), b)
a = tf.nn.relu(z)
return a
# convolutional layer with ReLU activation:
def conv2d(x, W, b, stride_length=1):
xW = tf.nn.conv2d(x, W, strides=[1, stride_length, stride_length, 1], padding='SAME')
z = tf.nn.bias_add(xW, b)
a = tf.nn.relu(z)
return a
# max-pooling layer:
def maxpooling2d(x, p_size):
return tf.nn.max_pool(x,
ksize=[1, p_size, p_size, 1],
strides=[1, p_size, p_size, 1],
padding='SAME')
Explanation: Define types of layers
End of explanation
def network(x, weights, biases, n_in, mp_psize, mp_dropout, dense_dropout):
# reshape linear MNIST pixel input into square image:
square_dimensions = int(np.sqrt(n_in))
square_x = tf.reshape(x, shape=[-1, square_dimensions, square_dimensions, 1])
# convolutional and max-pooling layers:
conv_1 = conv2d(square_x, weights['W_c1'], biases['b_c1'])
conv_2 = conv2d(conv_1, weights['W_c2'], biases['b_c2'])
pool_1 = maxpooling2d(conv_2, mp_psize)
pool_1 = tf.nn.dropout(pool_1, 1-mp_dropout)
# dense layer:
flat = tf.reshape(pool_1, [-1, weights['W_d1'].get_shape().as_list()[0]])
dense_1 = dense(flat, weights['W_d1'], biases['b_d1'])
dense_1 = tf.nn.dropout(dense_1, 1-dense_dropout)
# output layer:
out_layer_z = tf.add(tf.matmul(dense_1, weights['W_out']), biases['b_out'])
return out_layer_z
Explanation: Design neural network architecture
End of explanation
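As a quick sanity check on the architecture above (a sketch; 'SAME' padding preserves the spatial size and only the max-pooling halves it), the shape arithmetic for a single MNIST image is:
```
# 28x28x1 -> conv1 ('SAME', stride 1) -> 28x28x32
#         -> conv2 ('SAME', stride 1) -> 28x28x64
#         -> 2x2 max-pool             -> 14x14x64
#         -> flatten                  -> 12544 -> dense 128 -> output 10
side = int(np.sqrt(n_input))      # 28
pooled = side // pool_size        # 14
assert pooled * pooled * n_conv_2 == 12544  # inputs to the dense layer
```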
bias_dict = {
'b_c1': tf.Variable(tf.zeros([n_conv_1])),
'b_c2': tf.Variable(tf.zeros([n_conv_2])),
'b_d1': tf.Variable(tf.zeros([n_dense])),
'b_out': tf.Variable(tf.zeros([n_classes]))
}
# calculate number of inputs to dense layer:
full_square_length = np.sqrt(n_input)
pooled_square_length = int(full_square_length / pool_size)
dense_inputs = pooled_square_length**2 * n_conv_2
weight_dict = {
'W_c1': tf.get_variable('W_c1',
[k_conv_1, k_conv_1, 1, n_conv_1], initializer=wt_init),
'W_c2': tf.get_variable('W_c2',
[k_conv_2, k_conv_2, n_conv_1, n_conv_2], initializer=wt_init),
'W_d1': tf.get_variable('W_d1',
[dense_inputs, n_dense], initializer=wt_init),
'W_out': tf.get_variable('W_out',
[n_dense, n_classes], initializer=wt_init)
}
Explanation: Define dictionaries for storing weights and biases for each layer -- and initialize
End of explanation
predictions = network(x, weight_dict, bias_dict, n_input,
pool_size, mp_layer_dropout, dense_layer_dropout)
Explanation: Build model
End of explanation
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=predictions, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
Explanation: Define model's loss and its optimizer
End of explanation
correct_prediction = tf.equal(tf.argmax(predictions, 1), tf.argmax(y, 1))
accuracy_pct = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) * 100
Explanation: Define evaluation metrics
End of explanation
initializer_op = tf.global_variables_initializer()
Explanation: Create op for variable initialization
End of explanation
with tf.Session() as session:
session.run(initializer_op)
print("Training for", epochs, "epochs.")
# loop over epochs:
for epoch in range(epochs):
avg_cost = 0.0 # track cost to monitor performance during training
avg_accuracy_pct = 0.0
# loop over all batches of the epoch:
n_batches = int(mnist.train.num_examples / batch_size)
for i in range(n_batches):
# to reassure you something's happening!
if i % display_progress == 0:
print("Step ", i+1, " of ", n_batches, " in epoch ", epoch+1, ".", sep='')
batch_x, batch_y = mnist.train.next_batch(batch_size)
# feed batch data to run optimization and fetching cost and accuracy:
_, batch_cost, batch_acc = session.run([optimizer, cost, accuracy_pct],
feed_dict={x: batch_x, y: batch_y})
# accumulate mean loss and accuracy over epoch:
avg_cost += batch_cost / n_batches
avg_accuracy_pct += batch_acc / n_batches
# output logs at end of each epoch of training:
print("Epoch ", '%03d' % (epoch+1),
": cost = ", '{:.3f}'.format(avg_cost),
", accuracy = ", '{:.2f}'.format(avg_accuracy_pct), "%",
sep='')
print("Training Complete. Testing Model.\n")
test_cost = cost.eval({x: mnist.test.images, y: mnist.test.labels})
test_accuracy_pct = accuracy_pct.eval({x: mnist.test.images, y: mnist.test.labels})
print("Test Cost:", '{:.3f}'.format(test_cost))
print("Test Accuracy: ", '{:.2f}'.format(test_accuracy_pct), "%", sep='')
Explanation: Train the network in a session (identical to intermediate_net_in_tensorflow.ipynb except addition of display_progress)
End of explanation
<END_TASK> |
15,911 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Sparse CNN
The idea of the sparse CNN is to utilize only the important values of its filters and set those less impotant to zero.
For terminology, we'll call the weights of a specific channel a "filter", and we'll call the values of each filter a "connection".
Via Hebian inspired learning, those connections that see high absolute-values in both their inputs and outputs will be favored.
This commparison of input and output will be refered to as the connection's strength.
Connection Strength Calculations
Initialize a typical conv layer to start.
From here, three seperate methods will be attempted to calculate the strength of the connections.
Step3: Vectorized Method
The idea of this method is to utilize convolutional arithmetic to determine the input for a given output unit and a given connection.
Suppose we initialize a weight matrix of exactly the same dimensions of our orignal conv layer, and set all of filters to 0 except for one connection. That is,
new_conv.weight[
Step5: Vectorized Method + Grouping
Same as the previous method, but utilizing the grouping argument of the conv layer
so that only one is needed
Step7: Vectorized Method with Less Redundancies
Step9: Brute Force Method
Computationally speaking, this is the same method as the preivous two. Only now, instead of using conv layers to assist in the computations, we use for loops the brute force our way through.
This is more so for a sanity check on the first two to validate their outputs.
Step10: Validation \ Time Trials
Quick test to make sure they all give the same output.
Let's see how long the take to run.
Step13: Sparse Conv Layer
Now to implement a conv layer that utilizes the second implementation above.
Step14: Test Training a Network
The following is a simple toy example copied mostly from
Step15: Time Trials
Quick test to compare runtime with and without updating the connections tensor.
Step16: Network Setup
Step17: Training
Step18: Testing Accuracy | <ASSISTANT_TASK:>
Python Code:
c_out = 7
c_in = 8
conv = torch.nn.Conv2d(c_in, c_out, kernel_size=(2, 2), stride=(1, 1), padding=0, dilation=1, groups=1)
input_tensor = torch.randn(2, c_in, 5, 3)
output_tensor = conv(input_tensor)
# Get params of conv layer.
in_channels = conv.in_channels # Number of channels in the input image
out_channels = conv.out_channels # Number of channels produced by the convolution
kernel_size = conv.kernel_size # Size of the convolving kernel
stride = conv.stride # Stride of the convolution. Default: 1
padding = conv.padding # Zero-padding added to both sides of the input. Default: 0
padding_mode = conv.padding_mode # zeros
dilation = conv.dilation # Spacing between kernel elements. Default: 1
groups = conv.groups # Number of blocked connections from input channels to output channels. Default: 1
bias = conv.bias is not None
Explanation: Sparse CNN
The idea of the sparse CNN is to utilize only the important values of its filters and set those less important to zero.
For terminology, we'll call the weights of a specific channel a "filter", and we'll call the values of each filter a "connection".
Via Hebbian-inspired learning, those connections that see high absolute values in both their inputs and outputs will be favored.
This comparison of input and output will be referred to as the connection's strength.
Connection Strength Calculations
Initialize a typical conv layer to start.
From here, three separate methods will be attempted to calculate the strength of the connections.
End of explanation
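As a minimal illustration of the idea (using the conv, input_tensor and output_tensor defined above, and relying on the stride of 1 and zero padding used there), the 'strength' of a single connection can be read as a count of co-activations between the input unit feeding that connection and the output unit it feeds:
```
import torch

# Strength of connection (c, j, h) = (0, 0, 0) into output channel 0.
c, j, h = 0, 0, 0
out_act = torch.sigmoid(output_tensor[:, 0]) > 0.5
in_act = torch.sigmoid(
    input_tensor[:, c, j:j + out_act.shape[1], h:h + out_act.shape[2]]) > 0.5
strength = (out_act & in_act).sum()
print(int(strength))  # one entry of the H matrix computed by f1/f3 below
```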
def get_single_unit_conv(c, j, h, **kwargs):
    """
    Constructs and returns a conv layer with training disabled and
    all zero weights except along the output channels for the unit
    specified as (c, j, h).
    """
# Construct conv.
conv = torch.nn.Conv2d(**kwargs)
# Turn off training.
conv.train = False
# Set weights to zero except those specified.
with torch.no_grad():
conv.weight.set_(torch.zeros_like(conv.weight))
conv.weight[:, c, j, h] = 1
return conv
# Get inidices that loop over all connections.
single_unit_indxs = list(itertools.product(*[range(d) for d in conv.weight.shape[1:]]))
single_unit_convs = [
get_single_unit_conv(
c, j, h,
in_channels=in_channels,
out_channels=out_channels,
kernel_size=kernel_size,
stride=stride,
padding=padding,
padding_mode=padding_mode,
dilation=dilation,
groups=groups,
bias=False,
)
for c, j, h in single_unit_indxs
]
def f1():
    """Calculate connection strengths."""
H = torch.zeros_like(conv.weight)
s1 = torch.sigmoid(output_tensor).gt_(0.5)
for idx, sconv in zip(single_unit_indxs, single_unit_convs):
s2 = torch.sigmoid(sconv(input_tensor)).gt_(0.5)
m = torch.sum(s1.mul(s2), (0, 2, 3,))
H[:, idx[0], idx[1], idx[2]] += m
return H
Explanation: Vectorized Method
The idea of this method is to utilize convolutional arithmetic to determine the input for a given output unit and a given connection.
Suppose we initialize a weight matrix of exactly the same dimensions as our original conv layer, and set all of its filters to 0 except for one connection. That is,
new_conv.weight[:, c, j, h] = 1.
Now if we pass the input through new_conv, we'll be given an output tensor of the same size as the original, but with the input values arranged at the locations of their respective output through the connection. That is,
```
old_output = conv(input_tensor)
new_output = new_conv(input_tensor)
==> for all b, j, and h (b being the batch index), we have
new_output[b, :, j, h] = input[<indices of input passed through connection conv.weight[:, c, j, h]>]
examine_connections(old_output, new_output)  # done in a pointwise fashion
```
With this vectorized calculation, we may then loop over all combinations of `c`, `j`, and `h`, compare the outputs to their respective inputs, and populate a matrix to record the strengths.
End of explanation
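A small check of the identity this method relies on (a sketch using the conv parameters defined earlier; probe is just an illustrative name): a conv whose only non-zero weights sit at [:, c, j, h] simply routes input[b, c, n + j, m + h] to output position (n, m) when the stride is 1 and there is no padding.
```
import torch

c, j, h = 1, 0, 1
probe = torch.nn.Conv2d(in_channels, out_channels, kernel_size,
                        stride=stride, padding=padding, bias=False)
with torch.no_grad():
    probe.weight.zero_()
    probe.weight[:, c, j, h] = 1
routed = probe(input_tensor)
expected = input_tensor[:, c, j:j + routed.shape[2], h:h + routed.shape[3]]
assert torch.allclose(routed[:, 0], expected)
```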
def get_single_unit_weights(shape, c, j, h, **kwargs):
    """
    Constructs and returns a weight tensor with all zero values except along
    the output channels for the unit specified as (c, j, h).
    """
# Construct weight.
weight = torch.zeros(shape)
# Set weights to zero except those specified.
weight[:, c, j, h] = 1
return weight
# Compute inidices that loop over all connections of a channel.
filter_indxs = list(itertools.product(*[range(d) for d in conv.weight.shape[1:]]))
# Compute indeces that loop over all channels and filters.
# This will be used to unpack the pointwise comparisons of the output.
connection_indxs = []
for idx in filter_indxs:
i_ = list(idx)
connection_indxs.extend([
[c_]+i_ for c_ in range(out_channels)
])
connection_indxs = list(zip(*connection_indxs))
# Create new conv layer that groups it's input and output.
new_groups = len(filter_indxs)
stacked_conv = torch.nn.Conv2d(
in_channels=in_channels * new_groups,
out_channels=out_channels * new_groups,
kernel_size=kernel_size,
stride=stride,
padding=padding,
padding_mode=padding_mode,
dilation=dilation,
groups=groups * new_groups,
bias=False,
)
# Populate the weight matrix with stacked tensors having only one non-zero unit.
single_unit_weights = [
get_single_unit_weights(
conv.weight.shape,
c, j, h,
)
for c, j, h in filter_indxs
]
with torch.no_grad():
stacked_conv.weight.set_(torch.cat(single_unit_weights, dim=0))
def f2():
# print('------f2a--------')
# print('input ', input_tensor.shape)
stacked_input = input_tensor.repeat((1, new_groups, 1, 1))
stacked_output = stacked_conv(stacked_input)
# print('stacked_input ', stacked_input.shape)
# print('stacked_output', stacked_output.shape)
H = torch.zeros_like(conv.weight)
s1 = torch.sigmoid(stacked_output).gt_(0.5)
s2 = torch.sigmoid(output_tensor).gt_(0.5).repeat((1, new_groups, 1, 1))
print('s1', s1.shape)
print('s2', s2.shape)
H_ = torch.sum(s2.mul(s1), (0, 2, 3,))
# print('H_', H_.shape)
H[connection_indxs] = H_
# print('\n')
return H
Explanation: Vectorized Method + Grouping
Same as the previous method, but utilizing the grouping argument of the conv layer
so that only one convolution (a single stacked forward pass) is needed.
End of explanation
def get_single_unit_weights_2b(shape, c, j, h, **kwargs):
    """
    Constructs and returns a weight tensor for a single output channel,
    all zeros except at the unit specified as (c, j, h).
    """
# Construct weight.
weight = torch.zeros(1, *shape[1:])
# Set weights to zero except those specified.
weight[0, c, j, h] = 1
return weight
# Compute inidices that loop over all connections of a channel.
filter_indxs_2b = list(itertools.product(*[range(d) for d in conv.weight.shape[1:]]))
# Compute indeces that loop over all channels and filters.
# This will be used to unpack the pointwise comparisons of the output.
connection_indxs_2b = []
for c_ in range(out_channels):
for idx in filter_indxs_2b:
i_ = list(idx)
connection_indxs_2b.append([c_] + i_)
connection_indxs_2b = list(zip(*connection_indxs_2b))
new_groups_2b = int(np.prod(conv.weight.shape[1:]))
perm_indices_2b = []
for c_i in range(out_channels):
perm_indices_2b.extend(
[c_i] * new_groups_2b
)
# Create new conv layer that groups it's input and output.
stacked_conv_2b = torch.nn.Conv2d(
in_channels=in_channels * new_groups_2b,
out_channels=new_groups_2b,
kernel_size=kernel_size,
stride=stride,
padding=padding,
padding_mode=padding_mode,
dilation=dilation,
groups=groups * new_groups_2b,
bias=False,
)
# Populate the weight matrix with stacked tensors having only one non-zero unit.
single_unit_weights_2b = [
get_single_unit_weights_2b(
conv.weight.shape,
c, j, h,
)
for c, j, h in filter_indxs_2b
]
with torch.no_grad():
stacked_conv_2b.weight.set_(torch.cat(single_unit_weights_2b, dim=0))
def f2b():
stacked_input = input_tensor.repeat((1, new_groups_2b, 1, 1))
stacked_output = stacked_conv_2b(stacked_input).repeat((1, out_channels, 1, 1))
H = torch.zeros_like(conv.weight)
s1 = stacked_output
s2 = output_tensor[:, perm_indices_2b, ...]
mu_in = s1.mean(dim=0)
mu_out = s2.mean(dim=0)
std_in = s1.std(dim=0)
std_out = s2.std(dim=0)
corr = ((s1 - mu_in) * (s2 - mu_out)).mean(dim=0) / (std_in * std_out)
corr[torch.where((std_in == 0 ) | (std_out == 0 ))] = 0
corr = corr.abs()
H_ = torch.mean(corr, (1, 2))
H[connection_indxs_2b] = H_
return H
Explanation: Vectorized Method with Less Redundancies
This variant stacks only one output channel per connection and scores each connection by the absolute correlation, computed across the batch and averaged over spatial positions, between the input it routes and the output unit it feeds, rather than by thresholded co-activation counts.
End of explanation
def coactivation(t1, t2):
s = (torch.sigmoid(t1) > 0.5) * (torch.sigmoid(t2) > 0.5)
return s
def get_indeces_of_input_and_filter(n, m):
    """Assumes dilation=1, i.e. a typical conv."""
k1, k2 = kernel_size
p1, p2 = padding
s1, s2 = stride
i1, i2 = (0, 0)
i1 -= p1
i2 -= p2
i1 += n * s1
i2 += m * s2
indxs = []
for c_in in range(in_channels):
for n_k1 in range(k1):
for m_k2 in range(k2):
filter_indx = (c_in, n_k1, m_k2)
input_indx = (c_in, i1 + n_k1, i2 + m_k2)
indxs.append((input_indx, filter_indx))
return indxs
B = output_tensor.shape[0]
N_out = output_tensor.shape[2]
M_out = output_tensor.shape[3]
C_in = conv.weight.shape[1]
C_out = conv.weight.shape[0]
def f3():
H = torch.zeros_like(conv.weight)
for b in range(B):
for c_out in range(C_out):
for n_out in range(N_out):
for m_out in range(M_out):
unit_1 = output_tensor[b, c_out, n_out, m_out]
indxs = get_indeces_of_input_and_filter(n_out, m_out)
for input_indx, filter_indx in indxs:
c_in, n_in, m_in = input_indx
c_fl, n_fl, m_fl = filter_indx
unit_2 = input_tensor[b, c_in, n_in, m_in]
if coactivation(unit_1, unit_2):
H[c_out, c_fl, n_fl, m_fl] += 1
return H
Explanation: Brute Force Method
Computationally speaking, this is the same method as the previous ones. Only now, instead of using conv layers to assist in the computations, we use for loops to brute-force our way through.
This is mostly a sanity check to validate the outputs of the vectorized methods above.
End of explanation
assert f1().allclose(f2(), rtol=0, atol=0) and f2().allclose(f2b(), rtol=0, atol=0) and f2b().allclose(f3(), rtol=0, atol=0)
%timeit f1()
%timeit f2()
%timeit f2b()
# %timeit f3()
Explanation: Validation \ Time Trials
Quick test to make sure they all give the same output.
Let's see how long they take to run.
End of explanation
class DSConv2d(torch.nn.Conv2d):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.learning_iterations = 0
self.activity_threshold = 0.5
self.k1 = max(int(0.1 * np.prod(self.weight.shape[2:])), 1)
self.k2 = max(int(0.15 * np.prod(self.weight.shape[2:])), 1)
self.prune_dims = [0, 1] # Todo: sort
self.connections_tensor = torch.zeros_like(self.weight)
self.prune_mask = torch.ones_like(self.weight)
# Compute inidices that loop over all connections of a channel.
filter_indxs = list(itertools.product(*[range(d) for d in self.weight.shape[1:]]))
# Compute indeces that loop over all channels and filters.
# This will be used to unpack the pointwise comparisons of the output.
self.connection_indxs = []
for idx in filter_indxs:
i_ = list(idx)
self.connection_indxs.extend([
[c]+i_ for c in range(self.weight.shape[0])
])
self.connection_indxs = list(zip(*self.connection_indxs))
# Create new conv layer that groups it's input and output.
self.new_groups = len(filter_indxs)
self.stacked_conv = torch.nn.Conv2d(
in_channels=self.in_channels * self.new_groups,
out_channels=self.out_channels * self.new_groups,
kernel_size=self.kernel_size,
stride=self.stride,
padding=self.padding,
padding_mode=self.padding_mode,
dilation=self.dilation,
groups=self.groups * self.new_groups,
bias=False,
)
# Populate the weight matrix with stacked tensors having only one non-zero unit.
single_unit_weights = [
self.get_single_unit_weights(
self.weight.shape,
c, j, h,
)
for c, j, h in filter_indxs
]
with torch.no_grad():
self.stacked_conv.weight.set_(torch.cat(single_unit_weights, dim=0))
def get_single_unit_weights(self, shape, c, j, h):
Constructs and returns a weight tensor that is all zeros except
along the output channels for the unit
specified as (c, j, h).
# Construct weight.
weight = torch.zeros(self.weight.shape)
# Set the specified unit's weights to one (everything else stays zero).
weight[:, c, j, h] = 1
return weight
def update_connections_tensor(self, input_tensor, output_tensor):
with torch.no_grad():
stacked_input = input_tensor.repeat((1, self.new_groups, 1, 1))
stacked_output = self.stacked_conv(stacked_input)
s1 = torch.sigmoid(stacked_output).gt_(0.5)
s2 = torch.sigmoid(output_tensor).gt_(0.5).repeat((1, self.new_groups, 1, 1))
H_ = torch.sum(s2.mul(s1), (0, 2, 3,))
self.connections_tensor[self.connection_indxs] = H_
def progress_connections(self):
Prunes and adds connections.
with torch.no_grad():
# Get strengths of all connections.
strengths = self.connections_tensor.numpy()
shape = strengths.shape
# Determine all combinations of prune dimensions
all_dims = range(len(shape))
prune_indxs = [range(shape[d]) if d in self.prune_dims else [slice(None)] for d in all_dims]
prune_indxs = itertools.product(*prune_indxs)
# Along all combinations of prune dimensions:
# - Keep strongest k1 connections
# - Reinitialize trailing k2 - k1 connections.
k1 = self.k1
k2 = self.k2
for idx in prune_indxs:
# Get top k1'th strength.
s = strengths[idx].flatten()
v1 = np.partition(s, -k1)[-k1] # s.kthvalue(len(s) - k1).value
# Keep top k1'th connection - prune those below
c = self.weight[idx].flatten()
prune_mask = (s < v1).astype(np.uint8)
c[prune_mask] = 0
# Get trailing k2 - k1 connections.
v2 = np.partition(s, -k2)[-k2] # s.kthvalue(len(s) - k2).value
new_mask = (s > v2) & prune_mask
# Reinitialize trailing k2 - k1 connections.
# Note: [None, :] is added here as kaiming_uniform_ requires a 2d tensor.
if len(c[new_mask]) > 0:
torch.nn.init.kaiming_uniform_(c[new_mask][None, :])
# Reshape connections and update the weight.
self.weight[idx] = c.reshape(self.weight[idx].shape)
self.prune_mask = prune_mask
# Reset connection strengths.
self.connections_tensor = torch.zeros_like(self.weight)
def prune_randomly(self):
with torch.no_grad():
prune_mask = torch.rand(self.weight.shape) < 0.85 # prune 15% of weights
self.weight[prune_mask] = 0
# Reinitialize those that are zero.
keep_mask = ~prune_mask
new_mask = (self.weight == 0) & keep_mask
new_weights = self.weight[new_mask]
if len(new_weights) > 0:
torch.nn.init.kaiming_uniform_(new_weights[None, :])
self.weight[new_mask] = new_weights
def __call__(self, input_tensor, *args, **kwargs):
output_tensor = super().__call__(input_tensor, *args, **kwargs)
if self.learning_iterations % 20 == 0:
self.update_connections_tensor(input_tensor, output_tensor)
self.learning_iterations += 1
return output_tensor
Explanation: Sparse Conv Layer
Now to implement a conv layer that utilizes the second implementation above.
End of explanation
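Before wiring the layer into a full network, a minimal smoke test on random input can confirm the forward pass works; this is just a sketch with arbitrary shapes, not part of the original notebook.
# Quick smoke test of the sparse layer on random data.
layer = DSConv2d(3, 6, 5)
x = torch.rand(2, 3, 32, 32)
y = layer(x)
print(y.shape)  # expected: torch.Size([2, 6, 28, 28])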
root_path = '~/nta/datasets'
transform = transforms.Compose(
[transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
trainset = torchvision.datasets.CIFAR10(root=root_path, train=True,
download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
shuffle=True, num_workers=2)
testset = torchvision.datasets.CIFAR10(root=root_path, train=False,
download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=4,
shuffle=False, num_workers=0)
classes = ('plane', 'car', 'bird', 'cat',
'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
Explanation: Test Training a Network
The following is a simple toy example copied mostly from:
https://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html
The main difference here is that the network utilizes the sparse DSConv2d layer defined above.
This exercise mostly serves to gain confidence in the implementation with respect to its ability to run without errors - it is not concerned with verifying training improvements just yet.
Load Data
End of explanation
cd = torch.nn.Conv2d(3, 6, 5)
cs = DSConv2d(3, 6, 5)
dataiter = iter(trainloader)
images, labels = dataiter.next()
print('Dense CNN forward pass:')
%timeit cd(images)
print('DSConv2d forward pass:')
%timeit cs(images)
Explanation: Time Trials
Quick test to compare runtime with and without updating the connections tensor.
End of explanation
import torch.nn as nn
import torch.nn.functional as F
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# self.conv1 = nn.Conv2d(3, 6, 5)
self.conv1 = DSConv2d(3, 6, 5)
self.pool = nn.MaxPool2d(2, 2)
# self.conv2 = nn.Conv2d(6, 16, 5)
self.conv2 = DSConv2d(6, 16, 5)
self.fc1 = nn.Linear(16 * 5 * 5, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
def forward(self, x):
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = x.view(-1, 16 * 5 * 5)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
net = Net()
Explanation: Network Setup
End of explanation
import torch.optim as optim
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
for epoch in range(2): # loop over the dataset multiple times
running_loss = 0.0
for i, data in enumerate(trainloader, 0):
# get the inputs; data is a list of [inputs, labels]
inputs, labels = data
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outputs = net(inputs)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
# print statistics
running_loss += loss.item()
if i % 2000 == 1999: # print every 2000 mini-batches
print('[%d, %5d] loss: %.3f' %
(epoch + 1, i + 1, running_loss / 2000))
running_loss = 0.0
net.conv1.progress_connections()
net.conv2.progress_connections()
break
break
# # Compare with pruning random weights.
# net.conv1.prune_randomly()
# net.conv2.prune_randomly()
print('Finished Training')
Explanation: Training
End of explanation
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
with torch.no_grad():
for data in testloader:
images, labels = data
outputs = net(images)
_, predicted = torch.max(outputs, 1)
c = (predicted == labels).squeeze()
for i in range(4):
label = labels[i]
class_correct[label] += c[i].item()
class_total[label] += 1
for i in range(10):
print('Accuracy of %5s : %2d %%' % (
classes[i], 100 * class_correct[i] / class_total[i]))
Explanation: Testing Accuracy
End of explanation
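For a single overall figure, one might also aggregate across classes; a small sketch reusing the lists computed above:
# Overall accuracy aggregated over all classes.
overall_correct = sum(class_correct)
overall_total = sum(class_total)
print('Overall accuracy: %.1f %%' % (100 * overall_correct / overall_total))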
<END_TASK> |
15,912 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Compute ICA components on epochs
ICA is fit to MEG raw data.
We assume that the non-stationary EOG artifacts have already been removed.
The sources matching the ECG are automatically found and displayed.
Subsequently, artefact detection and rejection quality are assessed.
Finally, the impact on the evoked ERF is visualized.
Note that this example does quite a bit of processing, so even on a
fast machine it can take about a minute to complete.
Step1: Fit ICA model using the FastICA algorithm, detect and inspect components | <ASSISTANT_TASK:>
Python Code:
# Authors: Denis Engemann <denis.engemann@gmail.com>
#
# License: BSD (3-clause)
import mne
from mne.preprocessing import ICA, create_ecg_epochs
from mne.datasets import sample
print(__doc__)
Explanation: Compute ICA components on epochs
ICA is fit to MEG raw data.
We assume that the non-stationary EOG artifacts have already been removed.
The sources matching the ECG are automatically found and displayed.
Subsequently, artefact detection and rejection quality are assessed.
Finally, the impact on the evoked ERF is visualized.
Note that this example does quite a bit of processing, so even on a
fast machine it can take about a minute to complete.
End of explanation
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
raw = mne.io.read_raw_fif(raw_fname, preload=True)
raw.filter(1, 30, method='iir')
raw.pick_types(meg=True, eeg=False, exclude='bads', stim=True)
# longer + more epochs for more artifact exposure
events = mne.find_events(raw, stim_channel='STI 014')
epochs = mne.Epochs(raw, events, event_id=None, tmin=-0.2, tmax=0.5)
ica = ICA(n_components=0.95, method='fastica').fit(epochs)
ecg_epochs = create_ecg_epochs(raw, tmin=-.5, tmax=.5)
ecg_inds, scores = ica.find_bads_ecg(ecg_epochs)
ica.plot_components(ecg_inds)
Explanation: Fit ICA model using the FastICA algorithm, detect and inspect components
End of explanation
<END_TASK> |
15,913 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Class Session 4 Exercise
Step1: Now, define a function that returns the index numbers of the neighbors of a vertex i, when the
graph is stored in adjacency matrix format. So your function will accept as an input a NxN numpy matrix.
Step2: Define a function that enumerates the neighbors of a vertex i, when the
graph is stored in adjacency list format (a list of lists)
Step3: Define a function that enumerates the neighbors of a vertex i, when the
graph is stored in edge-list format (a numpy array of length-two-lists); use numpy.where and numpy.unique
Step4: This next function is the simulation funtion. "n" is the number of vertices.
It returns a length-three list containing the average running time for enumerating the neighbor vertices of a vertex in the graph.
Step5: A simulation with 1000 vertices clearly shows that adjacency list is fastest
Step6: We see the expected behavior, with the running time for the adjacency-matrix and edge-list formats going up when we increase "n", but there is hardly any change in the running time for the graph stored in adjacency list format | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import igraph
import timeit
import itertools
Explanation: Class Session 4 Exercise:
Comparing asymptotic running time for enumerating neighbors of all vertices in a graph
We will measure the running time for enumerating the neighbor vertices for three different data structures for representing an undirected graph:
adjacency matrix
adjacency list
edge list
Let's assume that each vertex is labeled with a unique integer number. So if there are N vertices, the vertices are labeled 0, 1, 2, 3, ..., N-1.
First, we will import all of the Python modules that we will need for this exercise:
note how we assign a short name, "np" to the numpy module. This will save typing.
End of explanation
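As a quick illustration of the three storage formats on a small random graph (the exact edges will vary from run to run):
# A tiny example graph to show the three representations side by side.
g_small = igraph.Graph.Barabasi(5, 2)
print("adjacency matrix:\n", np.matrix(g_small.get_adjacency().data))
print("adjacency list: ", g_small.get_adjlist())
print("edge list: ", g_small.get_edgelist())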
def enumerate_matrix(gmat, i):
return np.nonzero(gmat[i,:])[1].tolist()
Explanation: Now, define a function that returns the index numbers of the neighbors of a vertex i, when the
graph is stored in adjacency matrix format. So your function will accept as an input a NxN numpy matrix.
End of explanation
def enumerate_adj_list(adj_list, i):
return adj_list[i]
Explanation: Define a function that enumerates the neighbors of a vertex i, when the
graph is stored in adjacency list format (a list of lists):
End of explanation
def enumerate_edge_list(edge_list, i):
inds1 = np.where(edge_list[:,0] == i)[0]
elems1 = edge_list[inds1, 1].tolist()
inds2 = np.where(edge_list[:,1] == i)[0]
elems2 = edge_list[inds2, 0].tolist()
return np.unique(elems1 + elems2).tolist()
Explanation: Define a function that enumerates the neighbors of a vertex i, when the
graph is stored in edge-list format (a numpy array of length-two-lists); use numpy.where and numpy.unique:
End of explanation
def do_sim(n):
retlist = []
nrep = 10
nsubrep = 10
# this is (sort of) a Python way of doing the R function "replicate":
for _ in itertools.repeat(None, nrep):
# make a random undirected graph with fixed (average) vertex degree = 5
g = igraph.Graph.Barabasi(n, 5)
# get the graph in three different representations
g_matrix = np.matrix(g.get_adjacency().data)
g_adj_list = g.get_adjlist()
g_edge_list = np.array(g.get_edgelist())
start_time = timeit.default_timer()
for _ in itertools.repeat(None, nsubrep):
for i in range(0, n):
enumerate_matrix(g_matrix, i)
matrix_elapsed = timeit.default_timer() - start_time
start_time = timeit.default_timer()
for _ in itertools.repeat(None, nsubrep):
for i in range(0, n):
enumerate_adj_list(g_adj_list, i)
adjlist_elapsed = timeit.default_timer() - start_time
start_time = timeit.default_timer()
for _ in itertools.repeat(None, nsubrep):
for i in range(0, n):
enumerate_edge_list(g_edge_list, i)
edgelist_elapsed = timeit.default_timer() - start_time
retlist.append([matrix_elapsed, adjlist_elapsed, edgelist_elapsed])
# average over replicates and then
# divide by n so that the running time results are on a per-vertex basis
return np.mean(np.array(retlist), axis=0)/n
Explanation: This next function is the simulation function. "n" is the number of vertices.
It returns a length-three list containing the average running time for enumerating the neighbor vertices of a vertex in the graph.
End of explanation
do_sim(1000)*1000
Explanation: A simulation with 1000 vertices clearly shows that adjacency list is fastest:
(I multiply by 1000 just so the results are in ms.)
End of explanation
do_sim(2000)*1000
Explanation: We see the expected behavior, with the running time for the adjacency-matrix and edge-list formats going up when we increase "n", but there is hardly any change in the running time for the graph stored in adjacency list format:
End of explanation
<END_TASK> |
15,914 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Recommender Engine
Perhaps the most famous example of a recommender engine in the Data Science world was the Netflix competition started in 2006, in which teams from all around the world competed to improve on Netflix's recommendation algorithm. The final prize of $1,000,000 was awarded to a team which developed a solution which had about a 10% increase in accuracy over Netflix's. In fact, this competition resulted in the development of some new techniques which are still in use. For more reading on this topic, see Simon Funk's blog post
In this exercise, you will build a collaborative-filter recommendation engine using both a cosine similarity approach and SVD (singular value decomposition). Before proceeding, download the MovieLens dataset.
Importing and Pre-processing the Data
First familiarize yourself with the data you downloaded, and then import the u.data file and take a look at the first few rows.
Step1: Before building any recommendation engines, we'll have to get the data into a useful form. Do this by first splitting the data into testing and training sets, and then by constructing two new dataframes whose columns are each unique movie and rows are each unique user, filling in 0 for missing values.
Step7: Now split the data into a training and test set, using a ratio 80/20 for train/test.
Cosine Similarity
Building a recommendation engine can be thought of as "filling in the holes" in the sparse matrix you made above. For example, take a look at the MovieLense data. You'll see that that matrix is mostly zeros. Our task here is to predict what a given user will rate a given movie depending on the users tastes. To determine a users taste, we can use cosine similarity which is given by $$s_u^{cos}(u_k,u_a)
= \frac{ u_k \cdot u_a }{ \left \| u_k \right \| \left \| u_a \right \| }
= \frac{ \sum x_{k,m}x_{a,m} }{ \sqrt{\sum x_{k,m}^2\sum x_{a,m}^2} }$$
for users $u_a$ and $u_k$ on ratings given by $x_{a,m}$ and $x_{b,m}$. This is just the cosine of the angle between the two vectors. Likewise, this can also be calculated for the similarity between two items, $i_a$ and $i_b$, given by $$s_u^{cos}(i_m,i_b)
= \frac{ i_m \cdot i_b }{ \left \| i_m \right \| \left \| i_b \right \| }
= \frac{ \sum x_{a,m} x_{a,b} }{ \sqrt{ \sum x_{a,m}^2 \sum x_{a,b}^2 } }$$
Then, the predicted rating for a user is given by $$\hat{x}_{k,m} = \bar{x}_{k} + \frac{\sum\limits_{u_a} s_u^{cos}(u_k,u_a) (x_{a,m})}{\sum\limits_{u_a}|s_u^{cos}(u_k,u_a)|}$$ and for items given by $$\hat{x}_{k,m} = \frac{\sum\limits_{i_b} s_u^{cos}(i_m,i_b) (x_{k,b}) }{\sum\limits_{i_b}|s_u^{cos}(i_m,i_b)|}$$
Use these ideas to construct a class cos_engine which can be used to recommend movies for a given user. Be sure to also test your algorithm, reporting its accuracy. Bonus: Use adjusted cosine similarity.
Step13: SVD
Above we used Cosine Similarity to fill the holes in our sparse matrix. Another, and much more popular, method for matrix completion is called the Singular Value Decomposition (SVD). SVD factors our data matrix into three smaller matrices, given by $$\textbf{M} = \textbf{U} \bf{\Sigma} \textbf{V}^*$$ where $\textbf{M}$ is our data matrix, $\textbf{U}$ is a unitary matrix containing the latent variables in the user space, $\bf{\Sigma}$ is a diagonal matrix containing the singular values of $\textbf{M}$, and $\textbf{V}$ is a unitary matrix containing the latent variables in the item space. For more information on the SVD see the Wikipedia article.
SciPy contains a routine (scipy.sparse.linalg.svds) to estimate the truncated SVD of a sparse matrix. By making estimates of the matrices $\textbf{U}$, $\bf{\Sigma}$, and $\textbf{V}$, and then by multiplying them together, we can reconstruct an estimate for the matrix $\textbf{M}$ with all the holes filled in.
Use these ideas to construct a class svd_engine which can be used to recommend movies for a given user. Be sure to also test your algorithm, reporting its accuracy. Bonus: Tune any parameters.
Step14: Overall RMSE of about 0.98
Step15: 7 is the optimal value of k in this case. Note that no cross-validation was performed!
Now we'll build the best recommender and recommend 5 movies to each user. | <ASSISTANT_TASK:>
Python Code:
# Importing the data
import pandas as pd
import numpy as np
header = ['user_id', 'item_id', 'rating', 'timestamp']
data_movie_raw = pd.read_csv('../data/ml-100k/u.data', sep='\t', names=header)
data_movie_raw.head()
Explanation: Recommender Engine
Perhaps the most famous example of a recommender engine in the Data Science world was the Netflix competition started in 2006, in which teams from all around the world competed to improve on Netflix's reccomendation algorithm. The final prize of $1,000,000 was awarded to a team which developed a solution which had about a 10% increase in accuracy over Netflix's. In fact, this competition resulted in the development of some new techniques which are still in use. For more reading on this topic, see Simon Funk's blog post
In this exercise, you will build a collaborative-filter recommendatin engine using both a cosine similarity approach and SVD (singular value decomposition). Before proceding download the MovieLens dataset.
Importing and Pre-processing the Data
First familiarize yourself with the data you downloaded, and then import the u.data file and take a look at the first few rows.
End of explanation
from sklearn.model_selection import train_test_split
# First split into train and test sets
data_train_raw, data_test_raw = train_test_split(data_movie_raw, train_size = 0.8)
# Turning to pivot tables
data_train = data_train_raw.pivot_table(index = 'user_id', columns = 'item_id', values = 'rating').fillna(0)
data_test = data_test_raw.pivot_table(index = 'user_id', columns = 'item_id', values = 'rating').fillna(0)
# Print the firest few rows
data_train.head()
Explanation: Before building any recommendation engines, we'll have to get the data into a useful form. Do this by first splitting the data into testing and training sets, and then by constructing two new dataframes whose columns are each unique movie and rows are each unique user, filling in 0 for missing values.
End of explanation
# Libraries
import pandas as pd
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity
class cos_engine:
def __init__(self, data_all):
Constructor for cos_engine class
Args:
data_all: Raw dataset containing all movies to build
a list of movies already seen by each user.
# Create copy of data
self.data_all = data_all.copy()
# Now build a list of movies each user has seen
self.seen = []
for user in data_all.user_id.unique():
cur_seen = {}
cur_seen["user"] = user
cur_seen["seen"] = self.data_all[data_all.user_id == user].item_id
self.seen.append(cur_seen)
def fit(self, data_train):
Performs cosine similarity on a sparse matrix data_train
Args:
data_train: A pandas data frame data to estimate cosine similarity
# Create a copy of the dataframe
self.data_train = data_train.copy()
# Save the indices and column names
self.users = self.data_train.index
self.items = self.data_train.columns
# Compute mean vectors
self.user_means = self.data_train.replace(0, np.nan).mean(axis = 1)
self.item_means = self.data_train.T.replace(0, np.nan).mean(axis = 1)
# Get similarity matrices and compute sums for normalization
# For non adjusted cosine similarity, neglect subtracting the means.
self.data_train_adj = (self.data_train.replace(0, np.nan).T - self.user_means).fillna(0).T
self.user_cos = cosine_similarity(self.data_train_adj)
self.item_cos = cosine_similarity(self.data_train_adj.T)
self.user_cos_sum = np.abs(self.user_cos).sum(axis = 1)
self.item_cos_sum = np.abs(self.item_cos).sum(axis = 1)
self.user_cos_sum = self.user_cos_sum.reshape(self.user_cos_sum.shape[0], 1)
self.item_cos_sum = self.item_cos_sum.reshape(self.item_cos_sum.shape[0], 1)
def predict(self, method = 'user'):
Predicts using Cosine Similarity
Args:
method: A string indicating what method to use, user or item.
Default user.
Returns:
A pandas dataframe containing the prediction values
# Store prediction locally and turn to dataframe
if method == 'user':
self.pred = self.user_means[:, np.newaxis] + ((self.user_cos @ self.data_train_adj) / self.user_cos_sum)
self.pred = pd.DataFrame(self.pred, index = data_train.index, columns = data_train.columns)
elif method == 'item':
self.pred = self.item_means[:, np.newaxis] + ((self.data_train @ self.item_cos) / self.item_cos_sum.T).T
self.pred = pd.DataFrame(self.pred, columns = data_train.index.values, index = data_train.columns)
return(self.pred)
def test(self, data_test, root = False):
Tests fit given test data in data_test
Args:
data_test: A pandas dataframe containing test data
root: A boolean indicating whether to return the RMSE.
Default False
Returns:
The Mean Squared Error of the fit if root = False, the Root Mean\
Squared Error otherwise.
# Build a list of common indices (users) in the train and test set
row_idx = list(set(self.pred.index) & set(data_test.index))
# Prime the variables for loop
err = [] # To hold the Sum of Squared Errors
N = 0 # To count predictions for MSE calculation
for row in row_idx:
# Get the rows
test_row = data_test.loc[row, :]
pred_row = self.pred.loc[row, :]
# Get indices of nonzero elements in the test set
idx = test_row.index[test_row.nonzero()[0]]
# Get only common movies
temp_test = test_row[idx]
temp_pred = pred_row[idx]
# Compute error and count
temp_err = ((temp_pred - temp_test)**2).sum()
N = N + len(idx)
err.append(temp_err)
mse = np.sum(err) / N
# Switch for RMSE
if root:
err = np.sqrt(mse)
else:
err = mse
return(err)
def recommend(self, user, num_recs):
Recommends movies for a given user, excluding movies already seen.
Args:
user: The user id to generate recommendations for
num_recs: The number of recommendations to return
Returns:
A pandas Index containing the item ids of the top num_recs
recommended movies.
# Get list of already seen movies for this user
idx_seen = next(item for item in self.seen if item["user"] == user)["seen"]
# Remove already seen movies and recommend
rec = self.pred.loc[user, :].drop(idx_seen).nlargest(num_recs)
return(rec.index)
# Testing
cos_en = cos_engine(data_movie_raw)
cos_en.fit(data_train)
# Predict using user similarity
pred1 = cos_en.predict(method = 'user')
err = cos_en.test(data_test, root = True)
rec1 = cos_en.recommend(1, 5)
print("RMSE:", err)
print("Recommendations for user 1 (user-based):", rec1.values)
# And now with item
pred2 = cos_en.predict(method = 'item')
err = cos_en.test(data_test, root = True)
rec2 = cos_en.recommend(1, 5)
print("RMSE:", err)
print("Recommendations for user 1 (item-based):", rec2.values)
Explanation: Now split the data into a training and test set, using a ratio 80/20 for train/test.
Cosine Similarity
Building a recommendation engine can be thought of as "filling in the holes" in the sparse matrix you made above. For example, take a look at the MovieLense data. You'll see that that matrix is mostly zeros. Our task here is to predict what a given user will rate a given movie depending on the users tastes. To determine a users taste, we can use cosine similarity which is given by $$s_u^{cos}(u_k,u_a)
= \frac{ u_k \cdot u_a }{ \left \| u_k \right \| \left \| u_a \right \| }
= \frac{ \sum x_{k,m}x_{a,m} }{ \sqrt{\sum x_{k,m}^2\sum x_{a,m}^2} }$$
for users $u_a$ and $u_k$ on ratings given by $x_{a,m}$ and $x_{b,m}$. This is just the cosine of the angle between the two vectors. Likewise, this can also be calculated for the similarity between two items, $i_a$ and $i_b$, given by $$s_u^{cos}(i_m,i_b)
= \frac{ i_m \cdot i_b }{ \left \| i_m \right \| \left \| i_b \right \| }
= \frac{ \sum x_{a,m} x_{a,b} }{ \sqrt{ \sum x_{a,m}^2 \sum x_{a,b}^2 } }$$
Then, the predicted rating for a user is given by $$\hat{x}_{k,m} = \bar{x}_{k} + \frac{\sum\limits_{u_a} s_u^{cos}(u_k,u_a) (x_{a,m})}{\sum\limits_{u_a}|s_u^{cos}(u_k,u_a)|}$$ and for items given by $$\hat{x}_{k,m} = \frac{\sum\limits_{i_b} s_u^{cos}(i_m,i_b) (x_{k,b}) }{\sum\limits_{i_b}|s_u^{cos}(i_m,i_b)|}$$
Use these ideas to construct a class cos_engine which can be used to recommend movies for a given user. Be sure to also test your algorithm, reporting its accuracy. Bonus: Use adjusted cosine similarity.
End of explanation
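As a tiny numerical illustration of the similarity measure itself (toy rating vectors, not taken from the dataset):
# Cosine similarity between two toy user rating vectors.
u_k = np.array([[5, 3, 0, 1]])
u_a = np.array([[4, 0, 0, 1]])
print(cosine_similarity(u_k, u_a))  # a value close to 1 indicates similar taste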
# Libraries
import pandas as pd
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import svds
class svd_engine:
def __init__(self, data_all, k = 6):
Constructor for svd_engine class
Args:
k: The number of latent variables to fit
self.k = k
# Create copy of data
self.data_all = data_all.copy()
# Now build a list of movies each user has seen
self.seen = []
for user in data_all.user_id.unique():
cur_seen = {}
cur_seen["user"] = user
cur_seen["seen"] = self.data_all[data_all.user_id == user].item_id
self.seen.append(cur_seen)
def fit(self, data_train):
Performs SVD on a sparse matrix data_train
Args:
data_train: A pandas data frame data to estimate SVD
Returns:
Matrices u, s, and vt of the SVD
# Save local copy of data
self.data_train = data_train.copy()
# Compute adjusted matrix
self.user_means = self.data_train.replace(0, np.nan).mean(axis = 1)
self.item_means = self.data_train.T.replace(0, np.nan).mean(axis = 1)
self.data_train_adj = (self.data_train.replace(0, np.nan).T - self.user_means).fillna(0).T
# Save the indices and column names
self.users = data_train.index
self.items = data_train.columns
# Train the model
self.u, self.s, self.vt = svds(self.data_train_adj, k = self.k)
return(self.u, np.diag(self.s), self.vt)
def predict(self):
Predicts using SVD
Returns:
A pandas dataframe containing the prediction values
# Store prediction locally and turn to dataframe, adding the mean back
self.pred = pd.DataFrame(self.u @ np.diag(self.s) @ self.vt,
index = self.users,
columns = self.items)
self.pred = self.user_means[:, np.newaxis] + self.pred
return(self.pred)
def test(self, data_test, root = False):
Tests fit given test data in data_test
Args:
data_test: A pandas dataframe containing test data
root: A boolean indicating whether to return the RMSE.
Default False
Returns:
The Mean Squared Error of the fit if root = False, the Root Mean\
Squared Error otherwise.
# Build a list of common indices (users) in the train and test set
row_idx = list(set(self.pred.index) & set(data_test.index))
# Prime the variables for loop
err = [] # To hold the Sum of Squared Errors
N = 0 # To count predictions for MSE calculation
for row in row_idx:
# Get the rows
test_row = data_test.loc[row, :]
pred_row = self.pred.loc[row, :]
# Get indices of nonzero elements in the test set
idx = test_row.index[test_row.nonzero()[0]]
# Get only common movies
temp_test = test_row[idx]
temp_pred = pred_row[idx]
# Compute error and count
temp_err = ((temp_pred - temp_test)**2).sum()
N = N + len(idx)
err.append(temp_err)
mse = np.sum(err) / N
# Switch for RMSE
if root:
err = np.sqrt(mse)
else:
err = mse
return(err)
def recommend(self, user, num_recs):
Recommends movies for a given user, excluding movies already seen.
Args:
user: The user id to generate recommendations for
num_recs: The number of recommendations to return
Returns:
A pandas Index containing the item ids of the top num_recs
recommended movies.
# Get list of already seen movies for this user
idx_seen = next(item for item in self.seen if item["user"] == user)["seen"]
# Remove already seen movies and recommend
rec = self.pred.loc[user, :].drop(idx_seen).nlargest(num_recs)
return(rec.index)
# Testing
svd_en = svd_engine(data_movie_raw, k = 20)
svd_en.fit(data_train)
svd_en.predict()
err = svd_en.test(data_test, root = True)
rec = svd_en.recommend(1, 5)
print("RMSE:", err)
print("Recommendations for user 1:", rec.values)
Explanation: SVD
Above we used Cosine Similarity to fill the holes in our sparse matrix. Another, and much more popular, method for matrix completion is called the Singular Value Decomposition (SVD). SVD factors our data matrix into three smaller matrices, given by $$\textbf{M} = \textbf{U} \bf{\Sigma} \textbf{V}^*$$ where $\textbf{M}$ is our data matrix, $\textbf{U}$ is a unitary matrix containing the latent variables in the user space, $\bf{\Sigma}$ is a diagonal matrix containing the singular values of $\textbf{M}$, and $\textbf{V}$ is a unitary matrix containing the latent variables in the item space. For more information on the SVD see the Wikipedia article.
SciPy contains a routine (scipy.sparse.linalg.svds) to estimate the truncated SVD of a sparse matrix. By making estimates of the matrices $\textbf{U}$, $\bf{\Sigma}$, and $\textbf{V}$, and then by multiplying them together, we can reconstruct an estimate for the matrix $\textbf{M}$ with all the holes filled in.
Use these ideas to construct a class svd_engine which can be used to recommend movies for a given user. Be sure to also test your algorithm, reporting its accuracy. Bonus: Tune any parameters.
End of explanation
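A minimal sketch of the decomposition itself on a toy matrix (k chosen arbitrarily, values made up):
# Toy demonstration of truncated SVD reconstruction with scipy's svds.
M = np.array([[5., 3., 0., 1.],
              [4., 0., 0., 1.],
              [1., 1., 0., 5.],
              [1., 0., 0., 4.]])
u, s, vt = svds(M, k=2)
M_hat = u @ np.diag(s) @ vt
print(np.round(M_hat, 2))  # approximate reconstruction with the "holes" filled in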
# Parameter tuning
import matplotlib.pyplot as plt
err = []
for cur_k in range(1, 50):
svd_en = svd_engine(data_movie_raw, k = cur_k)
svd_en.fit(data_train)
svd_en.predict()
err.append(svd_en.test(data_test, root = True))  # use RMSE to match the plot labels
plt.plot(range(1, 50), err)
plt.title('RMSE versus k')
plt.xlabel('k')
plt.ylabel('RMSE')
plt.show()
err.index(min(err)) + 1  # +1 because cur_k started at 1
Explanation: Overall RMSE of about 0.98
End of explanation
# Build the engine
svd_en = svd_engine(data_movie_raw, k = 7)
svd_en.fit(data_train)
svd_en.predict()
# Now make recommendations
recs = []
for user in data_movie_raw.user_id.unique():
temp_rec = svd_en.recommend(user, 5)
recs.append(temp_rec)
recs[0]
Explanation: 7 is the optimal value of k in this case. Note that no cross-validation was performed!
Now we'll build the best recommender and recommend 5 movies to each user.
End of explanation
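To make the output more readable, one might map the recommended item ids to titles; this sketch assumes the standard MovieLens 100k u.item layout (pipe-separated, id and title in the first two columns) sitting next to u.data.
# Map recommended item ids to movie titles (path and file layout assumed).
items = pd.read_csv('../data/ml-100k/u.item', sep='|', encoding='latin-1',
                    header=None, usecols=[0, 1], names=['item_id', 'title'])
title_map = items.set_index('item_id')['title']
print(title_map.loc[recs[0]].values)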
<END_TASK> |
15,915 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Unsupervised Analysis of Days of Week
Treating crossing each day as features to learn about the relatinships between various days.
Step1: Downloading Data
We'll start by downloading the data (available on seattle.gov).
Step2: Principal Component Analysis
Step3: Comparing with Day of Week | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
Explanation: Unsupervised Analysis of Days of Week
Treating the crossings on each day as features to learn about the relationships between the various days.
End of explanation
from urllib import request
FREMONT_URL = 'https://data.seattle.gov/api/views/65db-xm6k/rows.csv?accessType=DOWNLOAD'
request.urlretrieve(FREMONT_URL, 'Fremont.csv')
# magic function to show the content of the file
%more Fremont.csv
import pandas as pd
df = pd.read_csv('Fremont.csv') # use read_csv to load the data into dataframe
df.head()
# Let's see the type of the data
df.dtypes
# change the Date column to datetime data type
df['Date'] = pd.to_datetime(df['Date'])
df.head()
df.dtypes
# Set the index to Date
df.set_index('Date', inplace=True)
df.head()
df.apply(lambda x: sum(x.isnull()))
# clean the data by dropping rows with missing values
df.dropna(inplace=True)
df.apply(lambda x: sum(x.isnull()))
df.columns
df.plot()
df.resample('W').sum().plot()
df.columns=['West', 'East']
df.resample('w').sum().plot()
# To see whether there is any annual trend of the number of rides
df.resample('D').sum().rolling(365).sum().plot()
# each point is the sum of the number of rides in the previous 365 days
# note that the y-axis does not start at 0
ax = df.resample('D').sum().rolling(365).sum().plot()
ax.set_ylim(0, None)
# DatetimeIndex.time returns a numpy array of datetime.time, the time part of the Timestamps
df.groupby(df.index.time).mean().plot()
# plot the average number of rides at each hour of the day
# Create the pivoted table to investigate the pattern in each day
df['Total'] = df['West'] + df['East']
pivoted = df.pivot_table(values='Total', index=df.index.time, columns=df.index.date)
pivoted.head()
pivoted.shape
# drop dates (columns) that contain missing values
pivoted.dropna(axis=1, inplace=True)
pivoted.shape
pivoted.plot(legend=False)
# add transparent parameter alpha
pivoted.plot(legend=False, alpha=0.01)
Explanation: Downloading Data
We'll start by downloading the data (available on seattle.gov).
End of explanation
# Get X with dates as observations (rows) and hours of the day as features (columns)
X = pivoted.T.values
X.shape
X
from sklearn.decomposition import PCA
X2 = PCA(2, svd_solver='full').fit_transform(X)
X2
X2.shape
plt.scatter(X2[:, 0], X2[:, 1])
# use cluster algorithm Gaussian mixture model
from sklearn.mixture import GaussianMixture
gmm = GaussianMixture(2)
gmm.fit(X)
labels = gmm.predict(X)
labels
# plt.scatter(X2[:, 0], X2[:, 1], c=labels, cmap='rainbow')
# plt.colorbar()
plt.scatter(X2[:, 0], X2[:, 1], c=labels)
plt.colorbar()
labels
# so labels == 1 represents the weekday
pivoted.T[labels == 1].T.plot(legend=False, alpha=0.01)
# labels == 0 represents the weekend or holiday
pivoted.T[labels == 0].T.plot(legend=False, alpha=0.1)
Explanation: Principal Component Analysis
End of explanation
pd.DatetimeIndex(pivoted.columns)
# The DatetimeIndex.dayofweek attribute gives the day of the week
dayofweek = pd.DatetimeIndex(pivoted.columns).dayofweek
dayofweek
# Then we plot the color of the weekday
plt.scatter(X2[:, 0], X2[:, 1], c=dayofweek)
plt.colorbar()
# grab the dates in cluster 0 (weekend-like pattern) that fall on weekdays
dates = pd.DatetimeIndex(pivoted.columns)
dates[(labels == 0) & (dayofweek < 5)]
Explanation: Comparing with Day of Week
End of explanation
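Those weekday dates that cluster with the weekend-like pattern are likely holidays; one quick way to check, assuming US federal holidays are a reasonable reference for Seattle data:
# Compare the flagged weekdays against US federal holidays.
from pandas.tseries.holiday import USFederalHolidayCalendar
cal = USFederalHolidayCalendar()
holidays = cal.holidays(start=dates.min(), end=dates.max())
flagged = dates[(labels == 0) & (dayofweek < 5)]
print(flagged.isin(holidays))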
<END_TASK> |
15,916 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
OCCT Bottle Tutorial
Step1: VTK Viewer
The following summarizes the mouse and keyboard commands for interacting with shapes rendered in the viewer.
Keypress j / Keypress t | <ASSISTANT_TASK:>
Python Code:
height = 70.
width = 50.
thickness = 30.
pnt1 = [-width/2., 0., 0.]
pnt2 = [-width/2., -thickness/4., 0.]
pnt3 = [0., -thickness/2., 0.]
pnt4 = [width/2., -thickness/4., 0.]
pnt5 = [width/2., 0., 0.]
edge1 = Edge().createLine(start=pnt1, end=pnt2)
edge2 = Edge().createArc3P(start=pnt2, end=pnt4, pnt=pnt3)
edge3 = Edge().createLine(start=pnt4, end=pnt5)
halfProfile = Wire([edge1, edge2, edge3])
mirrorPlane = Plane(origin=[0,0,0], xaxis=[1,0,0], yaxis=[0,0,1])
mirrorProfile = halfProfile.mirror(mirrorPlane, copy=True)
allEdges = list(EdgeIterator(halfProfile)) + list(EdgeIterator(mirrorProfile))
fullProfile = Wire().createWire(allEdges)
bottomFace = Face().createFace(fullProfile)
body = Solid().extrude(bottomFace, (0, 0, 0), (0, 0, height))
body.fillet(thickness/12.)
neckHeight = height/10
neckRadius = thickness/4
neck = Solid().createCylinder([0,0,0], [0,0,neckHeight], radius=neckRadius)
neck.translate([0, 0, height])
body.fuse(neck)
zMax = -1
neckTopFace = None
for f in FaceIterator(body):
[x, y , z] = f.centreOfMass()
if z >= zMax:
neckTopFace = f
zMax = z
body.shell(thickness/50., [neckTopFace], tolerance=1E-3)
t_thick = neckHeight/5
t_height = neckHeight - t_thick
t_radius = neckRadius + t_thick/4
t_pitch = t_height/2
t_angle = 0
# Note the following thread geometry is not correct. The profile
# is wrong and there is a twist added to the profile. But it's
# kind of close and good enough for this example.
threadHelix = Edge().createHelix(pitch=t_pitch,
height=t_height,
radius=t_radius,
angle = t_angle)
threadFace = Face().createPolygonal([[0, 0, t_thick/2],
[t_thick, .0, 0],
[0, 0, -t_thick/2]])
threadFace.translate([t_radius, 0, 0])
thread = Solid().pipe(threadFace, threadHelix)
thread.translate([0, 0, height])
body.fuse(thread)
actor = body.toVtkActor()
Explanation: OCCT Bottle Tutorial
End of explanation
try:
a = get_QApplication([])
except:
pass
vtkWin = SimpleVtkViewer()
vtkWin.add_actor(actor)
# If the VTK window is blank/white, click on the window and hit 'r' to zoom to fit.
Explanation: VTK Viewer
The following summarizes the mouse and keyboard commands for interacting with shapes rendered in the viewer.
Keypress j / Keypress t: toggle between joystick (position sensitive) and trackball (motion sensitive) styles. In joystick style, motion occurs continuously as long as a mouse button is pressed. In trackball style, motion occurs when the mouse button is pressed and the mouse pointer moves.
Keypress c / Keypress a: toggle between camera and actor modes. In camera mode, mouse events affect the camera position and focal point. In actor mode, mouse events affect the actor that is under the mouse pointer.
Button 1: rotate the camera around its focal point (if camera mode) or rotate the actor around its origin (if actor mode). The rotation is in the direction defined from the center of the renderer's viewport towards the mouse position. In joystick mode, the magnitude of the rotation is determined by the distance the mouse is from the center of the render window.
Button 2: pan the camera (if camera mode) or translate the actor (if object mode). In joystick mode, the direction of pan or translation is from the center of the viewport towards the mouse position. In trackball mode, the direction of motion is the direction the mouse moves. (Note: with 2-button mice, pan is defined as <Shift>Button 1.)
Button 3: zoom the camera (if camera mode) or scale the actor (if object mode). Zoom in/increase scale if the mouse position is in the top half of the viewport; zoom out/decrease scale if the mouse position is in the bottom half. In joystick mode, the amount of zoom is controlled by the distance of the mouse pointer from the horizontal centerline of the window.
Keypress 3: toggle the render window into and out of stereo mode. By default, red-blue stereo pairs are created. Some systems support Crystal Eyes LCD stereo glasses; you have to invoke SetStereoTypeToCrystalEyes() on the rendering window object. Note: to use stereo you also need to pass a stereo=1 keyword argument to the render window object constructor.
Keypress e: exit the application.
Keypress f: fly to the picked point
Keypress p: perform a pick operation. The render window interactor has an internal instance of vtkCellPicker that it uses to pick.
Keypress r: reset the camera view along the current view direction. Centers the actors and moves the camera so that all actors are visible.
Keypress s: modify the representation of all actors so that they are surfaces.
Keypress u: invoke the user-defined function. Typically, this keypress will bring up an interactor that you can type commands in.
Keypress w: modify the representation of all actors so that they are wireframe.
End of explanation
<END_TASK> |
15,917 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Wind energy production forecast from Met.no weather forecast
With this notebook we illustrate how one might improve weather forecast for wind energy production, considering that the height of wind turbines doesn’t match the height of wind speed commonly used in weather forecasts. Also, that wind energy production does not depend only on wind speed, but also air density.
In this Notebook we will use three weather forecast from Planet OS Datahub
Step1: <font color='red'>Please put your datahub API key into a file called APIKEY and place it to the notebook folder or assign your API key directly to the variable API_key!</font>
Step2: We use a simple dh_py_access library for controlling REST requests.
Step3: dh_py_access provides some easy to use functions to list dataset metadata
Step4: Initialize coordinates and reftime. Please note that more recent reftime, than computed below, may be available for particular dataset. We just use conservative example for demo
Step5: Fetch data and convert to Pandas dataframe
Step6: Filter out necessary data from the DataFrames
Step7: We easily see that differences between models can be larger than difference of 10m to 80m winds in the same model.
What role does density play in energy production?
From the theory we know that wind energy production is roughly
$1/2 A \rho \mathbf{v}^3$,
where $A$ is area, $\rho$ is air density and $\mathbf{v}$ is wind speed. We are not concerned about $A$, which is a turbine parameter, but we can analyse the linear relation of density and cube relation of wind speed itself.
First, let's see how the density varies over time
Step8: No let's energy production looks compared to the wind speed
Step9: Finally, let's analyse how much density changes vary over the whole domain during one forecast. For this purpose, we download the whole density field with package api
Step10: Maximum relative change of air density in single location is | <ASSISTANT_TASK:>
Python Code:
%matplotlib notebook
import urllib.request
import numpy as np
import simplejson as json
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
import warnings
import datetime
import dateutil.parser
import matplotlib.cbook
warnings.filterwarnings("ignore",category=matplotlib.cbook.mplDeprecation)
import requests
from netCDF4 import Dataset
from dh_py_access.lib.dataset import dataset as dataset
import dh_py_access.package_api as package_api
import dh_py_access.lib.datahub as datahub
np.warnings.filterwarnings('ignore')
Explanation: Wind energy production forecast from Met.no weather forecast
With this notebook we illustrate how one might improve weather forecasts for wind energy production, considering that the height of wind turbines doesn't match the height at which wind speed is commonly reported in weather forecasts, and that wind energy production depends not only on wind speed but also on air density.
In this Notebook we will use three weather forecasts from Planet OS Datahub:
GFS - A global model produced by NOAA that provides a 10-day forecast at a resolution of 0.25 degrees (15 days with lower resolution and fewer variables);
FMI HIRLAM - A regional model limited to Northern-Europe and Northern-Atlantic with a forecast period of two days and resolution of 0.07 degrees;
Met.no HARMONIE & Met.no HARMONIE dedicated wind forecast A regional model limited to Scandinavia with a resolution of 0.025 x 0.05 degrees. The dedicated wind forecast version is augmented by Planet OS to include the air density as explicit variable.
API documentation is available at http://docs.planetos.com. If you have questions or comments, join the Planet OS Slack community to chat with our development team. For general information on usage of IPython/Jupyter and Matplotlib, please refer to their corresponding documentation. https://ipython.org/ and http://matplotlib.org/
End of explanation
API_key = open("APIKEY").read().strip()
dh=datahub.datahub_main(API_key)
Explanation: <font color='red'>Please put your datahub API key into a file called APIKEY and place it to the notebook folder or assign your API key directly to the variable API_key!</font>
End of explanation
fmi_hirlam_surface = dataset('fmi_hirlam_surface',dh)
metno_harmonie_metcoop = dataset('metno_harmonie_metcoop',dh)
metno_harmonie_wind = dataset('metno_harmonie_wind_det',dh)
gfs = dataset('noaa_gfs_pgrb2_global_forecast_recompute_0.25degree',dh)
Explanation: We use a simple dh_py_access library for controlling REST requests.
End of explanation
## Does not look good in github
##gfs.variable_names()
Explanation: dh_py_access provides some easy to use functions to list dataset metadata
End of explanation
lon = 22
lat = 59+6./60
today = datetime.datetime.today()
reft = datetime.datetime(today.year,today.month,today.day,int(today.hour/6)*6) - datetime.timedelta(hours=12)
reft = reft.isoformat()
##reft = "2018-02-11T18:00:00"
arg_dict = {'lon':lon,'lat':lat,'reftime_start':reft,'reftime_end':reft,'count':250}
arg_dict_metno_wind_det = dict(arg_dict, **{'vars':'wind_u_z,wind_v_z,air_density_z'})
arg_dict_metno_harm_metcoop = dict(arg_dict, **{'vars':'u_wind_10m,v_wind_10m'})
arg_dict_hirlam = dict(arg_dict, **{'vars':'u-component_of_wind_height_above_ground,v-component_of_wind_height_above_ground'})
arg_dict_gfs = dict(arg_dict, **{'vars':'ugrd_m,vgrd_m','count':450})
Explanation: Initialize coordinates and reftime. Please note that a more recent reftime than the one computed below may be available for a particular dataset; we just use a conservative example for the demo.
End of explanation
dmw = metno_harmonie_wind.get_json_data_in_pandas(**arg_dict_metno_wind_det)
dmm = metno_harmonie_metcoop.get_json_data_in_pandas(**arg_dict_metno_harm_metcoop)
dhs = fmi_hirlam_surface.get_json_data_in_pandas(**arg_dict_hirlam)
dgfs = gfs.get_json_data_in_pandas(**arg_dict_gfs)
## show how to filter Pandas
## dgfs[dgfs['z']==80]
Explanation: Fetch data and convert to Pandas dataframe
End of explanation
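A quick way to see what came back is to list the column names and the vertical levels present in the Met.no wind data; a small sketch:
# Inspect the returned dataframe and the heights it contains.
print(dmw.columns.tolist())
print(dmw['z'].unique())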
vel80_metno = np.array(np.sqrt(dmw[dmw['z']==80]['wind_u_z']**2 + dmw[dmw['z']==80]['wind_v_z']**2))
vel10_metno = np.array(np.sqrt(dmm['u_wind_10m']**2 + dmm['v_wind_10m']**2))
vel10_hirlam = np.array(np.sqrt(dhs['u-component_of_wind_height_above_ground']**2 +
dhs['v-component_of_wind_height_above_ground']**2))
vel10_gfs = np.sqrt(dgfs[dgfs['z']==10]['ugrd_m']**2+dgfs[dgfs['z']==10]['vgrd_m']**2)
vel80_gfs = np.sqrt(dgfs[dgfs['z']==80]['ugrd_m']**2+dgfs[dgfs['z']==80]['vgrd_m']**2)
t_metno = [dateutil.parser.parse(i) for i in dmw[dmw['z']==80]['time']]
t_metno_10 = [dateutil.parser.parse(i) for i in dmm['time']]
t_hirlam = [dateutil.parser.parse(i) for i in dhs['time']]
t_gfs_10 = [dateutil.parser.parse(i) for i in dgfs[dgfs['z']==10]['time']]
t_gfs_80 = [dateutil.parser.parse(i) for i in dgfs[dgfs['z']==80]['time']]
fig, ax = plt.subplots()
days = mdates.DayLocator()
daysFmt = mdates.DateFormatter('%Y-%m-%d')
hours = mdates.HourLocator()
ax.set_ylabel("wind speed")
ax.plot(t_metno, vel80_metno, label='Metno 80m')
ax.plot(t_metno_10, vel10_metno, label='Metno 10m')
ax.plot(t_hirlam, vel10_hirlam, label='HIRLAM 10m')
gfs_lim=67
ax.plot(t_gfs_10[:gfs_lim], vel10_gfs[:gfs_lim], label='GFS 10m')
ax.plot(t_gfs_80[:gfs_lim], vel80_gfs[:gfs_lim], label='GFS 80m')
ax.xaxis.set_major_locator(days)
ax.xaxis.set_major_formatter(daysFmt)
ax.xaxis.set_minor_locator(hours)
#fig.autofmt_xdate()
plt.legend()
plt.grid()
plt.savefig("model_comp")
Explanation: Filter out necessary data from the DataFrames
End of explanation
fig, ax = plt.subplots()
ax2 = ax.twinx()
days = mdates.DayLocator()
daysFmt = mdates.DateFormatter('%Y-%m-%d')
hours = mdates.HourLocator()
ax.plot(t_metno,vel80_metno)
aird80 = dmw[dmw['z']==80]['air_density_z']
ax2.plot(t_metno,aird80,c='g')
ax.xaxis.set_major_locator(days)
ax.xaxis.set_major_formatter(daysFmt)
ax.xaxis.set_minor_locator(hours)
ax.set_ylabel("wind speed")
ax2.set_ylabel("air density")
fig.tight_layout()
#fig.autofmt_xdate()
plt.savefig("density_80m")
Explanation: We easily see that the differences between models can be larger than the difference between 10 m and 80 m winds within the same model.
What role does density play in energy production?
From the theory we know that wind energy production is roughly
$1/2 A \rho \mathbf{v}^3$,
where $A$ is area, $\rho$ is air density and $\mathbf{v}$ is wind speed. We are not concerned about $A$, which is a turbine parameter, but we can analyse the linear relation of density and cube relation of wind speed itself.
First, let's see how the density varies over time
End of explanation
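As a rough back-of-the-envelope comparison of the two effects (toy numbers, not taken from the forecast):
# Relative change in production from a 3% density change vs. a 3% wind speed change.
rho, v = 1.25, 10.0               # nominal density (kg/m^3) and wind speed (m/s)
p0 = 0.5 * rho * v**3             # per-unit-area production, A omitted
p_rho = 0.5 * (1.03 * rho) * v**3
p_v = 0.5 * rho * (1.03 * v)**3
print("density +3%% -> +%.1f%%" % (100 * (p_rho / p0 - 1)))
print("wind speed +3%% -> +%.1f%%" % (100 * (p_v / p0 - 1)))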
fig, ax = plt.subplots()
ax2 = ax.twinx()
ax2.set_ylabel("energy production")
ax.set_ylabel("wind speed")
ax.plot(t_metno,vel80_metno, c='b', label='wind speed')
ax2.plot(t_metno,aird80*vel80_metno**3, c='r', label='energy prod w.dens')
ax.xaxis.set_major_locator(days)
ax.xaxis.set_major_formatter(daysFmt)
ax.xaxis.set_minor_locator(hours)
fig.autofmt_xdate()
lines, labels = ax.get_legend_handles_labels()
lines2, labels2 = ax2.get_legend_handles_labels()
ax2.legend(lines + lines2, labels + labels2, loc=0)
Explanation: Now let's see how energy production looks compared to the wind speed
End of explanation
density = package_api.package_api(dh,'metno_harmonie_wind_det','air_density_z',-20,60,10,80,'full_domain_harmonie')
density.make_package()
density.download_package()
density_data = Dataset(density.get_local_file_name())
## biggest change of density in one location during forecast period
maxval = np.nanmax(density_data.variables['air_density_z'],axis=0)
minval = np.nanmin(density_data.variables['air_density_z'],axis=0)
Explanation: Finally, let's analyse how much density changes vary over the whole domain during one forecast. For this purpose, we download the whole density field with package api
End of explanation
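In addition to the absolute spread printed below, one might also look at the change relative to the local minimum; a small sketch:
# Relative change of density at each grid point, then its max/mean over the domain.
rel_change = (maxval - minval) / minval
print(np.nanmax(rel_change), np.nanmean(rel_change))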
print(np.nanmax(maxval-minval),np.nanmean(maxval-minval))
Explanation: The maximum and mean change of air density at a single location over the forecast period are:
End of explanation
<END_TASK> |
15,918 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
596
Step1: Discovering available alignments
Alignments for a specified transcript
This approach identifies available alignments and then selects the desired one to use with VariantMapper
Step2: Alignments for a gene
Step3: Alignments for a genomic region (new method)
The hgvs data provider method get_tx_for_region used a supplied alignment method, which necessarily filtered alignments for a specific alignment method. A new method, get_alignments_for_region provides similar functionality with an optional alt_aln_method parameter.
When alt_aln_method is None (or not provided), all alignments are returned.
When alt_aln_method is provided, it behaves exactly like get_tx_for_region.
Step4: Alternate method for transcript-to-genome projections | <ASSISTANT_TASK:>
Python Code:
from hgvs.easy import (__version__, parser, hdp, vm)
from hgvs.exceptions import HGVSDataNotAvailableError
__version__
# hgvs_g = "NC_000010.11:g.94762693G>A" # GRCh38
hgvs_g = "NC_000010.10:g.96522450G>A" # GRCh37
hgvs_c = "NM_000769.4:c.-13G>A"
var_c = parser.parse(hgvs_c)
var_g = parser.parse(hgvs_g)
Explanation: 596: Support for custom transcripts
See https://github.com/biocommons/hgvs/issues/596
UTA#220 added support for custom transcripts, which really means custom alignments associated with a new alt_aln_method, "splign-manual". The loading/data/splign-manual contains examples.
Although hgvs supports multiple alignment methods already (e.g., one can specify "blat", "splign" in VariantMapper::g_to_c()), several areas of code assume one default alt_aln_method.
This work was sponsored by Invitae, which uses the VariantMapper interface. Therefore, this new feature focuses on minimal changes to extend hgvs to better support using multiple alignment methods with VariantMapper. These are demonstrated below.
Setup
Initialize hgvs and create two variants from CYP2C19 to use as examples (VCV000634882.1)
End of explanation
hdp.get_tx_mapping_options(var_c.ac)
# or, for a more complete example with many options:
hdp.get_tx_mapping_options("NM_001807.4")
Explanation: Discovering available alignments
Alignments for a specified transcript
This approach identifies available alignments and then selects the desired one to use with VariantMapper::c_to_g() as above.
End of explanation
alignments = hdp.get_tx_for_gene("CYP2C19")
alignments.sort(key=lambda a: (a["tx_ac"], a["alt_ac"], a["alt_aln_method"]))
alignments
Explanation: Alignments for a gene
End of explanation
start = var_g.posedit.pos.start.base
end = var_g.posedit.pos.end.base
alignments = hdp.get_alignments_for_region(var_g.ac, start, end)
alignments.sort(key=lambda a: (a["tx_ac"], a["alt_ac"], a["alt_aln_method"]))
alignments
Explanation: Alignments for a genomic region (new method)
The hgvs data provider method get_tx_for_region used a supplied alignment method, which necessarily filtered alignments for a specific alignment method. A new method, get_alignments_for_region provides similar functionality with an optional alt_aln_method parameter.
When alt_aln_method is None (or not provided), all alignments are returned.
When alt_aln_method is provided, it behaves exactly like get_tx_for_region.
End of explanation
try:
vm.c_to_g(var_c, "NC_000010.10")
except HGVSDataNotAvailableError as e:
print(f"Got {e!r}")
vm.c_to_g(var_c, "NC_000010.10", alt_aln_method="splign-manual")
Explanation: Alternate method for transcript-to-genome projections: Using try...except
This approach follows the easier-to-ask-for-forgiveness-than-permission principle. Code tries "splign", which is expected to satisfy the majority of cases, and then falls back to "splign-manual" for failures. The advantage of this method is that there is only one database fetch for the most common case.
End of explanation
<END_TASK> |
15,919 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
The decimal module implements fixed and floating point arithmetic using the model familiar to most people, rather than the IEEE floating point version implemented by most computer hardware and familiar to programmers. A Decimal instance can represent any number exactly, round up or down, and apply a limit to the number of significant digits.
Decimal
Step1: Decimals can also be created from tuples containing a sign flag
Step2: Formatting
Step3: Arithmetic
Step4: Special Value
Step5: Context
Step6: Precision
Step7: Local Context | <ASSISTANT_TASK:>
Python Code:
import decimal
fmt = '{0:<25}{1:<25}'
print(fmt.format('Input', 'Output'))
print(fmt.format('-'*25, '-'*25))
#Integer
print(fmt.format(5, decimal.Decimal(5)))
#String
print(fmt.format('3.14', decimal.Decimal('3.14')))
#Float
f = 0.1
print(fmt.format(repr(f), decimal.Decimal(str(f))))
print('{:0.23g}{:<25}'.format(f, str(decimal.Decimal.from_float(f))[:25]))
Explanation: The decimal module implements fixed and floating point arithmetic using the model familiar to most people, rather than the IEEE floating point version implemented by most computer hardware and familiar to programmers. A Decimal instance can represent any number exactly, round up or down, and apply a limit to the number of significant digits.
Decimal
End of explanation
import decimal
# Tuple
t = (1, (1, 1), -2)
print('Input :', t)
print('Decimal:', decimal.Decimal(t))
Explanation: Decimals can also be created from tuples containing a sign flag
End of explanation
import decimal
d = decimal.Decimal(1.1)
print('Precision:')
print('{:.1}'.format(d))
print('{:.2}'.format(d))
print('{:.3}'.format(d))
print('{:.18}'.format(d))
print('\nWidth and precision combined:')
print('{:5.1f} {:5.1g}'.format(d, d))
print('{:5.2f} {:5.2g}'.format(d, d))
print('{:5.2f} {:5.2g}'.format(d, d))
print('\nZero padding:')
print('{:05.1}'.format(d))
print('{:05.2}'.format(d))
print('{:05.3}'.format(d))
Explanation: Formatting
End of explanation
import decimal
a = decimal.Decimal('5.1')
b = decimal.Decimal('3.14')
c = 4
d = 3.14
print('a =', repr(a))
print('b =', repr(b))
print('c =', repr(c))
print('d =', repr(d))
print()
print('a + b =', a + b)
print('a - b =', a - b)
print('a * b =', a * b)
print('a / b =', a / b)
print()
print('a + c =', a + c)
print('a - c =', a - c)
print('a * c =', a * c)
print('a / c =', a / c)
print()
print('a + d =', end=' ')
try:
print(a + d)
except TypeError as e:
print(e)
Explanation: Arithmetic
End of explanation
import decimal
for value in ['Infinity', 'NaN', '0']:
print(decimal.Decimal(value), decimal.Decimal('-' + value))
print()
# Math with infinity
print('Infinity + 1:', (decimal.Decimal('Infinity') + 1))
print('-Infinity + 1:', (decimal.Decimal('-Infinity') + 1))
# Print comparing NaN
print(decimal.Decimal('NaN') == decimal.Decimal('Infinity'))
print(decimal.Decimal('NaN') != decimal.Decimal(1))
Explanation: Special Value
End of explanation
import decimal
context = decimal.getcontext()
print('Emax =', context.Emax)
print('Emin =', context.Emin)
print('capitals =', context.capitals)
print('prec =', context.prec)
print('rounding =', context.rounding)
print('flags =')
for f, v in context.flags.items():
print(' {}: {}'.format(f, v))
print('traps =')
for t, v in context.traps.items():
print(' {}: {}'.format(t, v))
Explanation: Context
End of explanation
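# Rounding behaviour can be adjusted through the context as well; this
# sketch works on a copy so the global context is left untouched.
ctx = decimal.getcontext().copy()
ctx.prec = 3
ctx.rounding = decimal.ROUND_DOWN
print(ctx.plus(decimal.Decimal('3.1415')))
ctx.rounding = decimal.ROUND_UP
print(ctx.plus(decimal.Decimal('3.1415')))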
import decimal
d = decimal.Decimal('0.123456')
for i in range(1, 5):
decimal.getcontext().prec = i
print(i, ':', d, d * 1)
Explanation: Precision
End of explanation
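# Related to precision but not shown above: quantize() rounds to a fixed
# exponent (e.g. two decimal places) rather than to significant digits.
print(decimal.Decimal('1.2345').quantize(decimal.Decimal('0.01')))
print(decimal.Decimal('1.2355').quantize(decimal.Decimal('0.01'), rounding=decimal.ROUND_HALF_UP))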
import decimal
with decimal.localcontext() as c:
c.prec = 2
print('Local precision:', c.prec)
print('3.14 / 3 =', (decimal.Decimal('3.14') / 3))
print()
print('Default precision:', decimal.getcontext().prec)
print('3.14 / 3 =', (decimal.Decimal('3.14') / 3))
import decimal
# Set up a context with limited precision
c = decimal.getcontext().copy()
c.prec = 3
# Create our constant
pi = c.create_decimal('3.1415')
# The constant value is rounded off
print('PI :', pi)
# The result of using the constant uses the global context
print('RESULT:', decimal.Decimal('2.01') * pi)
Explanation: Local Context
End of explanation
<END_TASK> |
15,920 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
ModelSelection.ipynb
Choosing the number of states and a suitable timescale for hidden Markov models
One of the challenges associated with using hidden Markov models is specifying the correct model. For example, how many hidden states should the model have? At what timescale should we bin our observations? How much data do we need in order to train an effective/useful/representative model?
One possibility (which is conceptually very appealing) is to use a nonparametric Bayesian extension to the HMM, the HDP-HMM (hierarchical Dirichlet process hidden Markov model), in which the number of states can be directly inferred from the data, and moreover, where the number of states are allowed to grow as we obtain more and more data.
Fortunately, even if we choose to use a simple HMM, model selection is perhaps not as important as one might at first think. More specifically, we will show that for a wide range of model states, and for a wide range of timescales, the HMM should return plausible and usable models, so that we can use them to learn something about the data even if we don't have a good idea of what the model parameters should be.
Nevertheless, shifting over to the HDP-HMMs and especially to the HDP-HSMMs (semi-Markov models) where state durations are explicitly specified or learned, is certainly something that I would highly recommend.
TODO
Step1: Load data
Here we consider lin2 data for gor01 on the first recording day (6-7-2006), since this session had the most units (91) of all the gor01 sessions, and lin2 has position data, whereas lin1 only has partial position data.
Step2: Find most appropriate number of states using cross validation
Here we split the data into training, validation, and test sets. We monitor the average log probability per sequence (normalized by length) for each of these sets, and we use the validation set to choose the number of model states $m$.
Note to self
Step3: Remarks
Step4: Remarks
Step5: Remarks
Step6: Remarks
Step7: Remarks
Step8: Remarks
Step9: then we start to see the emergence of the S-shaped place field progressions again, indicating that the reward locations are overexpressed by several different states.
This observation is even more pronounced if we increase the number of states further
Step10: With enough expressiveness in the number of states, we see the S-shaped curve reappear, which suggests an overexpression of the reward locations, which is consistent with what we see with place cells in animals. | <ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import os
import sys
from IPython.display import display, clear_output
sys.path.insert(0, 'helpers')
from efunctions import * # load my helper function(s) to save pdf figures, etc.
from hc3 import load_data, get_sessions
from hmmlearn import hmm # see https://github.com/ckemere/hmmlearn
import klabtools as klab
import seqtools as sq
import importlib
importlib.reload(sq) # reload module here only while prototyping...
importlib.reload(klab) # reload module here only while prototyping...
%matplotlib inline
sns.set(rc={'figure.figsize': (12, 4),'lines.linewidth': 1.5})
sns.set_style("white")
Explanation: ModelSelection.ipynb
Choosing the number of states and a suitable timescale for hidden Markov models
One of the challenges associated with using hidden Markov models is specifying the correct model. For example, how many hidden states should the model have? At what timescale should we bin our observations? How much data do we need in order to train an effective/useful/representative model?
One possibility (which is conceptually very appealing) is to use a nonparametric Bayesian extension to the HMM, the HDP-HMM (hierarchical Dirichlet process hidden Markov model), in which the number of states can be directly inferred from the data, and moreover, where the number of states are allowed to grow as we obtain more and more data.
Fortunately, even if we choose to use a simple HMM, model selection is perhaps not as important as one might at first think. More specifically, we will show that for a wide range of model states, and for a wide range of timescales, the HMM should return plausible and usable models, so that we can use them to learn something about the data even if we don't have a good idea of what the model parameters should be.
Nevertheless, shifting over to the HDP-HMMs and especially to the HDP-HSMMs (semi-Markov models) where state durations are explicitly specified or learned, is certainly something that I would highly recommend.
TODO: Take a look at e.g. https://www.cs.cmu.edu/~ggordon/siddiqi-gordon-moore.fast-hmm.pdf : fast HMM (order of magnitude faster than Baum-Welch) and better model fit: V-STACS.
Import packages and initialization
End of explanation
datadirs = ['/home/etienne/Dropbox/neoReader/Data',
'C:/etienne/Dropbox/neoReader/Data',
'/Users/etienne/Dropbox/neoReader/Data']
fileroot = next( (dir for dir in datadirs if os.path.isdir(dir)), None)
animal = 'gor01'; month,day = (6,7); session = '16-40-19' # 91 units
spikes = load_data(fileroot=fileroot, datatype='spikes',animal=animal, session=session, month=month, day=day, fs=32552, verbose=False)
eeg = load_data(fileroot=fileroot, datatype='eeg', animal=animal, session=session, month=month, day=day,channels=[0,1,2], fs=1252, starttime=0, verbose=False)
posdf = load_data(fileroot=fileroot, datatype='pos',animal=animal, session=session, month=month, day=day, verbose=False)
speed = klab.get_smooth_speed(posdf,fs=60,th=8,cutoff=0.5,showfig=False,verbose=False)
Explanation: Load data
Here we consider lin2 data for gor01 on the first recording day (6-7-2006), since this session had the most units (91) of all the gor01 sessions, and lin2 has position data, whereas lin1 only has partial position data.
End of explanation
## bin ALL spikes
ds = 0.125 # bin spikes into 125 ms bins (theta-cycle inspired)
binned_spikes_all = klab.bin_spikes(spikes.data, ds=ds, fs=spikes.samprate, verbose=True)
## identify boundaries for running (active) epochs and then bin those observations into separate sequences:
runbdries = klab.get_boundaries_from_bins(eeg.samprate,bins=speed.active_bins,bins_fs=60)
binned_spikes_bvr = klab.bin_spikes(spikes.data, fs=spikes.samprate, boundaries=runbdries, boundaries_fs=eeg.samprate, ds=ds)
## stack data for hmmlearn:
seq_stk_bvr = sq.data_stack(binned_spikes_bvr, verbose=True)
seq_stk_all = sq.data_stack(binned_spikes_all, verbose=True)
## split data into train, test, and validation sets:
tr_b,vl_b,ts_b = sq.data_split(seq_stk_bvr, tr=60, vl=20, ts=20, randomseed = 0, verbose=False)
Smax = 40
S = np.arange(start=5,step=1,stop=Smax+1)
tr_ll = []
vl_ll = []
ts_ll = []
for num_states in S:
clear_output(wait=True)
print('Training and evaluating {}-state hmm'.format(num_states))
sys.stdout.flush()
myhmm = sq.hmm_train(tr_b, num_states=num_states, n_iter=30, verbose=False)
tr_ll.append( (np.array(list(sq.hmm_eval(myhmm, tr_b)))/tr_b.sequence_lengths ).mean())
vl_ll.append( (np.array(list(sq.hmm_eval(myhmm, vl_b)))/vl_b.sequence_lengths ).mean())
ts_ll.append( (np.array(list(sq.hmm_eval(myhmm, ts_b)))/ts_b.sequence_lengths ).mean())
clear_output(wait=True)
print('Done!')
sys.stdout.flush()
num_states = 35
fig = plt.figure(1, figsize=(12, 4))
ax = fig.add_subplot(111)
ax.annotate('plateau at approx ' + str(num_states), xy=(num_states, -38.5), xycoords='data',
xytext=(-140, -30), textcoords='offset points',
arrowprops=dict(arrowstyle="->",
connectionstyle="angle3,angleA=0,angleB=-90"),
)
ax.plot(S, tr_ll, lw=1.5, label='train')
ax.plot(S, vl_ll, lw=1.5, label='validation')
ax.plot(S, ts_ll, lw=1.5, label='test')
ax.legend(loc=2)
ax.set_xlabel('number of states')
ax.set_ylabel('normalized (to single time bin) log likelihood')
ax.axhspan(-38.5, -37.5, facecolor='0.75', alpha=0.25)
ax.set_xlim([5, S[-1]])
Explanation: Find most appropriate number of states using cross validation
Here we split the data into training, validation, and test sets. We monitor the average log probability per sequence (normalized by length) for each of these sets, and we use the validation set to choose the number of model states $m$.
Note to self: I should re-write my data splitting routines to allow me to extract as many subsets as I want, so that I can do k-fold cross validation.
End of explanation
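# One simple way to read a candidate number of states off the curves above
# (S and vl_ll come from the previous cell; the 0.5 tolerance is an assumption).
vl = np.array(vl_ll)
best_m = S[np.argmax(vl)]                       # m with the highest validation log likelihood
plateau_m = S[np.argmax(vl >= vl.max() - 0.5)]  # smallest m within tolerance of the maximum
print('best m: {}; plateau starts near m = {}'.format(best_m, plateau_m))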
from placefieldviz import hmmplacefieldposviz
num_states = 35
ds = 0.0625 # bin spikes into 62.5 ms bins (theta-cycle inspired)
vth = 8 # units/sec velocity threshold for place fields
#state_pos, peakorder = hmmplacefieldposviz(num_states=num_states, ds=ds, posdf=posdf, spikes=spikes, speed=speed, vth=vth)
fig, axes = plt.subplots(4, 3, figsize=(17, 11))
axes = [item for sublist in axes for item in sublist]
for ii, ax in enumerate(axes):
vth = ii+1
state_pos, peakorder, stateorder = hmmplacefieldposviz(num_states=num_states, ds=ds, posdf=posdf, spikes=spikes, speed=speed, vth=vth, normalize=True, verbose=False)
ax.matshow(state_pos[peakorder,:], interpolation='none', cmap='OrRd')
#ax.set_xlabel('position bin')
ax.set_ylabel('state')
ax.set_xticklabels([])
ax.set_yticklabels([])
ax.set_title('learned place fields; RUN > ' + str(vth), y=1.02)
ax.plot([13, 35], [0, num_states], color='k', linestyle='dashed', linewidth=1)
ax.plot([7, 41], [0, num_states], color='k', linestyle='dashed', linewidth=1)
ax.axis('tight')
Explanation: Remarks: We see that the training error is decreasing (equivalently, the training log probability is increasing) over the entire range of states considered. Indeed, we have computed this for a much larger number of states, and the training error keeps on decreasing, whereas both the validation and test errors reach a plateau at around 30 or 35 states.
As expected, the training set has the largest log probability (best agreement with model), but we might expect the test and validation sets to be about the same. For different subsets of our data this is indeed the case, but the more important thing in model selection is that the validation and test sets should have the same shape or behavior, so that we can choose an appropriate model parameter.
However, if we wanted to predict what our log probability for any given sequence would be, then we probably need a little bit more data, for which the test and validation errors should agree more.
Finally, we have also repeated the above analysis when we restricted ourselves to only using place cells in the model, and although the log probabilities were uniformly increased to around $-7$ or $-8$, the overall shape and characteristic behavior were left unchanged, so that model selection could be done either way.
Place field visualization
Previously we have only considered varying the number of model states for model selection, but of course choosing an appropriate timescale is perhaps just as important. We know, for example, that if our timescale is too short (or fast), then most of the bins will be empty, making it difficult for the model to learn appropriate representations and transitions. On the other hand, if our timescale is too coarse (or long or slow) then we will certainly miss SWR events, and we may even miss some behavioral events as well.
Since theta is around 8 Hz for rodents, it might make sense to consider a timescale of 125 ms or even 62.5 ms for behaviorally relevant events, so that we can hope to capture half or full theta cycles in the observations.
One might also reasonably ask: "even though the log probability has been optimized, how do we know that the learned model makes any sense? That is, that the model is plausible and useful?" One way to try to answer this question is to again consider the place fields that we learn from the data. Place field visualization is considered in more detail in StateClustering.ipynb, but here we simply want to see if we get plausible, behaviorally relevant state representations out when choosing different numbers of states, and different timescales, for example.
Place fields for varying velocity thresholds
We train our models on RUN data, so we might want to know how sensitive our model is to a specific velocity threshold. Using a smaller threshold will include more quiescent data, and using a larger threshold will exclude more data from being used to learn in the model.
End of explanation
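# The theta-inspired bin sizes quoted above follow directly from the ~8 Hz rhythm.
theta_freq = 8.0                # approximate rodent theta frequency in Hz
full_cycle = 1.0 / theta_freq   # 0.125 s, the 125 ms bins used earlier
half_cycle = full_cycle / 2.0   # 0.0625 s, the 62.5 ms bins used here
print(full_cycle, half_cycle)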
from placefieldviz import hmmplacefieldposviz
num_states = 35
ds = 0.0625 # bin spikes into 62.5 ms bins (theta-cycle inspired)
vth = 8 # units/sec velocity threshold for place fields
#state_pos, peakorder = hmmplacefieldposviz(num_states=num_states, ds=ds, posdf=posdf, spikes=spikes, speed=speed, vth=vth)
fig, axes = plt.subplots(4, 3, figsize=(17, 11))
axes = [item for sublist in axes for item in sublist]
for ii, ax in enumerate(axes):
num_states = 5 + ii*5
state_pos, peakorder, stateorder = hmmplacefieldposviz(num_states=num_states, ds=ds, posdf=posdf, spikes=spikes, speed=speed, vth=vth, normalize=True)
ax.matshow(state_pos[peakorder,:], interpolation='none', cmap='OrRd')
#ax.set_xlabel('position bin')
ax.set_ylabel('state')
ax.set_xticklabels([])
ax.set_yticklabels([])
ax.set_title('learned place fields; RUN > ' + str(vth) + '; m = ' + str(num_states), y=1.02)
ax.axis('tight')
saveFigure('posterfigs/numstates.pdf')
Explanation: Remarks: As can be expected, with low velocity thresholds, we see an overrepresentation of the reward locations, and only a relatively small number of states that are dedicated to encoding the position along the track.
Recall that the track was shortened halfway through the recording session. Here, the reward locations for the longer track (first half of the experiment) and shorter track (second half of the experiment) are shown by the ends of the dashed lines.
We notice that at some point, the movement velocity (for fixed state evolution) appears to be constant, and that at e.g. 8 units/sec we see a clear bifurcation in the place fields, so that states encode both positions before and after the track was shortened.
Place fields for varying number of states
Next, we take a look at how the place fields are affected by changing the number of states in the model.
End of explanation
from placefieldviz import hmmplacefieldposviz
num_states = 35
ds = 0.0625 # bin spikes into 62.5 ms bins (theta-cycle inspired)
vth = 8 # units/sec velocity threshold for place fields
#state_pos, peakorder = hmmplacefieldposviz(num_states=num_states, ds=ds, posdf=posdf, spikes=spikes, speed=speed, vth=vth)
fig, axes = plt.subplots(4, 3, figsize=(17, 11))
axes = [item for sublist in axes for item in sublist]
for ii, ax in enumerate(axes):
ds = (ii+1)*0.03125
state_pos, peakorder, stateorder = hmmplacefieldposviz(num_states=num_states, ds=ds, posdf=posdf, spikes=spikes, speed=speed, vth=vth, normalize=True)
ax.matshow(state_pos[peakorder,:], interpolation='none', cmap='OrRd')
#ax.set_xlabel('position bin')
ax.set_ylabel('state')
ax.set_xticklabels([])
ax.set_yticklabels([])
ax.set_title('learned place fields; RUN > ' + str(vth) + '; m = ' + str(num_states) + '; ds = ' + str(ds), y=1.02)
ax.plot([13, 35], [0, num_states], color='k', linestyle='dashed', linewidth=1)
ax.plot([7, 41], [0, num_states], color='k', linestyle='dashed', linewidth=1)
ax.axis('tight')
Explanation: Remarks: First, we see that independent of the number of states, the model captures the place field like nature of the underlying states very well. Furthermore, the bifurcation of some states to represent both the first and second halves of the experiment becomes clear with as few as 15 states, but interestingly this bifurcation fades as we add more states to the model, since there is enough flexibility to encode those shifting positions by their own states.
Warning: However, in the case where we have many states so that the states are no longer bimodal, the strict linear ordering that we impose (ordering by peak firing location) can easily mask the underlying structural change in the environment.
Place fields for varying timescales
Next we investigate how the place fields are affected by changing the timescale of our observations. First, we consider timescales in the range of 31.25 ms to 375 ms, in increments of 31.25 ms.
End of explanation
from placefieldviz import hmmplacefieldposviz
num_states = 35
ds = 0.0625 # bin spikes into 62.5 ms bins (theta-cycle inspired)
vth = 8 # units/sec velocity threshold for place fields
#state_pos, peakorder = hmmplacefieldposviz(num_states=num_states, ds=ds, posdf=posdf, spikes=spikes, speed=speed, vth=vth)
fig, axes = plt.subplots(4, 3, figsize=(17, 11))
axes = [item for sublist in axes for item in sublist]
for ii, ax in enumerate(axes):
ds = (ii+1)*0.0625
state_pos, peakorder, stateorder = hmmplacefieldposviz(num_states=num_states, ds=ds, posdf=posdf, spikes=spikes, speed=speed, vth=vth, normalize=True)
ax.matshow(state_pos[peakorder,:], interpolation='none', cmap='OrRd')
#ax.set_xlabel('position bin')
ax.set_ylabel('state')
ax.set_xticklabels([])
ax.set_yticklabels([])
ax.set_title('learned place fields; RUN > ' + str(vth) + '; m = ' + str(num_states) + '; ds = ' + str(ds), y=1.02)
ax.plot([13, 35], [0, num_states], color='k', linestyle='dashed', linewidth=1)
ax.plot([7, 41], [0, num_states], color='k', linestyle='dashed', linewidth=1)
ax.axis('tight')
Explanation: Remarks: We notice that we clearly see the bimodal place fields when the timescales are sufficiently small, with a particularly clear example at 62.5 ms. Larger timescales tend to focus on the longer track piece, with a single trajectory being skewed away towards the shorter track piece.
Next we consider timescales in increments of 62.5 ms.
End of explanation
from placefieldviz import hmmplacefieldposviz
num_states = 25
ds = 0.0625 # bin spikes into 62.5 ms bins (theta-cycle inspired)
vth = 8 # units/sec velocity threshold for place fields
state_pos_b, peakorder_b, stateorder_b = hmmplacefieldposviz(num_states=num_states, ds=ds, posdf=posdf, spikes=spikes, speed=speed, vth=vth, normalize=True, experiment='both')
state_pos_1, peakorder_1, stateorder_1 = hmmplacefieldposviz(num_states=num_states, ds=ds, posdf=posdf, spikes=spikes, speed=speed, vth=vth, normalize=True, experiment='first')
state_pos_2, peakorder_2, stateorder_2 = hmmplacefieldposviz(num_states=num_states, ds=ds, posdf=posdf, spikes=spikes, speed=speed, vth=vth, normalize=True, experiment='second')
fig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(17, 3))
ax1.matshow(state_pos_b[peakorder_b,:], interpolation='none', cmap='OrRd')
ax1.set_ylabel('state')
ax1.set_xticklabels([])
ax1.set_yticklabels([])
ax1.set_title('learned place fields BOTH; RUN > ' + str(vth) + '; m = ' + str(num_states) + '; ds = ' + str(ds), y=1.02)
ax1.plot([13, 35], [0, num_states], color='k', linestyle='dashed', linewidth=2)
ax1.plot([7, 41], [0, num_states], color='k', linestyle='dashed', linewidth=2)
ax1.axis('tight')
ax2.matshow(state_pos_1[peakorder_1,:], interpolation='none', cmap='OrRd')
ax2.set_ylabel('state')
ax2.set_xticklabels([])
ax2.set_yticklabels([])
ax2.set_title('learned place fields FIRST; RUN > ' + str(vth) + '; m = ' + str(num_states) + '; ds = ' + str(ds), y=1.02)
ax2.plot([13, 35], [0, num_states], color='gray', linestyle='dashed', linewidth=1)
ax2.plot([7, 41], [0, num_states], color='k', linestyle='dashed', linewidth=2)
ax2.axis('tight')
ax3.matshow(state_pos_2[peakorder_2,:], interpolation='none', cmap='OrRd')
ax3.set_ylabel('state')
ax3.set_xticklabels([])
ax3.set_yticklabels([])
ax3.set_title('learned place fields SECOND; RUN > ' + str(vth) + '; m = ' + str(num_states) + '; ds = ' + str(ds), y=1.02)
ax3.plot([13, 35], [0, num_states], color='k', linestyle='dashed', linewidth=2)
ax3.plot([7, 41], [0, num_states], color='gray', linestyle='dashed', linewidth=1)
ax3.axis('tight')
saveFigure('posterfigs/expsplit.pdf')
Explanation: Remarks: Again, we see that with larger timescales, the spatial resolution becomes more coarse, because we don't have sufficiently many observations, and the modes of the place fields tend to lie close to those associated with the longer track.
Splitting the experiment in half
Just as a confirmation of what we've seen so far, we next consider the place fields obtained when we split the experiment into its first and second halves, corresponding to when the track was longer, and shorter, respectively.
End of explanation
from placefieldviz import hmmplacefieldposviz
num_states = 45
ds = 0.0625 # bin spikes into 62.5 ms bins (theta-cycle inspired)
vth = 8 # units/sec velocity threshold for place fields
state_pos_b, peakorder_b, stateorder_b = hmmplacefieldposviz(num_states=num_states, ds=ds, posdf=posdf, spikes=spikes, speed=speed, vth=vth, normalize=True, experiment='both')
state_pos_1, peakorder_1, stateorder_1 = hmmplacefieldposviz(num_states=num_states, ds=ds, posdf=posdf, spikes=spikes, speed=speed, vth=vth, normalize=True, experiment='first')
state_pos_2, peakorder_2, stateorder_2 = hmmplacefieldposviz(num_states=num_states, ds=ds, posdf=posdf, spikes=spikes, speed=speed, vth=vth, normalize=True, experiment='second')
fig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(17, 3))
ax1.matshow(state_pos_b[peakorder_b,:], interpolation='none', cmap='OrRd')
ax1.set_ylabel('state')
ax1.set_xticklabels([])
ax1.set_yticklabels([])
ax1.set_title('learned place fields BOTH; RUN > ' + str(vth) + '; m = ' + str(num_states) + '; ds = ' + str(ds), y=1.02)
ax1.plot([13, 35], [0, num_states], color='k', linestyle='dashed', linewidth=2)
ax1.plot([7, 41], [0, num_states], color='k', linestyle='dashed', linewidth=2)
ax1.axis('tight')
ax2.matshow(state_pos_1[peakorder_1,:], interpolation='none', cmap='OrRd')
ax2.set_ylabel('state')
ax2.set_xticklabels([])
ax2.set_yticklabels([])
ax2.set_title('learned place fields FIRST; RUN > ' + str(vth) + '; m = ' + str(num_states) + '; ds = ' + str(ds), y=1.02)
ax2.plot([13, 35], [0, num_states], color='gray', linestyle='dashed', linewidth=1)
ax2.plot([7, 41], [0, num_states], color='k', linestyle='dashed', linewidth=2)
ax2.axis('tight')
ax3.matshow(state_pos_2[peakorder_2,:], interpolation='none', cmap='OrRd')
ax3.set_ylabel('state')
ax3.set_xticklabels([])
ax3.set_yticklabels([])
ax3.set_title('learned place fields SECOND; RUN > ' + str(vth) + '; m = ' + str(num_states) + '; ds = ' + str(ds), y=1.02)
ax3.plot([13, 35], [0, num_states], color='k', linestyle='dashed', linewidth=2)
ax3.plot([7, 41], [0, num_states], color='gray', linestyle='dashed', linewidth=1)
ax3.axis('tight')
Explanation: Remarks: We clearly see the bimodal place fields when we use all of the data, and we see the unimodal place fields emerge as we focus on either the first, or the second half of the experiment.
Notice that the reward locations are more concentrated, but that the velocity (with fixed state progression) is roughly constant.
However, if we increase the number of states:
End of explanation
from placefieldviz import hmmplacefieldposviz
import matplotlib.patches as patches
num_states = 100
ds = 0.0625 # bin spikes into 62.5 ms bins (theta-cycle inspired)
vth = 8 # units/sec velocity threshold for place fields
state_pos_b, peakorder_b, stateorder_b = hmmplacefieldposviz(num_states=num_states, ds=ds, posdf=posdf, spikes=spikes, speed=speed, vth=vth, normalize=True, experiment='both')
state_pos_1, peakorder_1, stateorder_1 = hmmplacefieldposviz(num_states=num_states, ds=ds, posdf=posdf, spikes=spikes, speed=speed, vth=vth, normalize=True, experiment='first')
state_pos_2, peakorder_2, stateorder_2 = hmmplacefieldposviz(num_states=num_states, ds=ds, posdf=posdf, spikes=spikes, speed=speed, vth=vth, normalize=True, experiment='second')
fig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(17, 3))
ax1.matshow(state_pos_b[peakorder_b,:], interpolation='none', cmap='OrRd')
ax1.set_ylabel('state')
ax1.set_xticklabels([])
ax1.set_yticklabels([])
ax1.set_title('learned place fields BOTH; RUN > ' + str(vth) + '; m = ' + str(num_states) + '; ds = ' + str(ds), y=1.02)
ax1.plot([13, 35], [0, num_states], color='k', linestyle='dashed', linewidth=2)
ax1.plot([7, 41], [0, num_states], color='k', linestyle='dashed', linewidth=2)
ax1.axis('tight')
ax2.matshow(state_pos_1[peakorder_1,:], interpolation='none', cmap='OrRd')
ax2.set_ylabel('state')
ax2.set_xticklabels([])
ax2.set_yticklabels([])
ax2.set_title('learned place fields FIRST; RUN > ' + str(vth) + '; m = ' + str(num_states) + '; ds = ' + str(ds), y=1.02)
ax2.plot([13, 35], [0, num_states], color='gray', linestyle='dashed', linewidth=1)
ax2.plot([7, 41], [0, num_states], color='k', linestyle='dashed', linewidth=2)
ax2.axis('tight')
ax2.add_patch(
patches.Rectangle(
(-1, 0), # (x,y)
8, # width
num_states, # height
hatch='/',
facecolor='w',
alpha=0.5
)
)
ax2.add_patch(
patches.Rectangle(
(41, 0), # (x,y)
11, # width
num_states, # height
hatch='/',
facecolor='w',
alpha=0.5
)
)
ax3.matshow(state_pos_2[peakorder_2,:], interpolation='none', cmap='OrRd')
ax3.set_ylabel('state')
ax3.set_xticklabels([])
ax3.set_yticklabels([])
ax3.set_title('learned place fields SECOND; RUN > ' + str(vth) + '; m = ' + str(num_states) + '; ds = ' + str(ds), y=1.02)
ax3.plot([13, 35], [0, num_states], color='k', linestyle='dashed', linewidth=2)
ax3.plot([7, 41], [0, num_states], color='gray', linestyle='dashed', linewidth=1)
ax3.axis('tight')
ax3.add_patch(
patches.Rectangle(
(-1, 0), # (x,y)
14, # width
num_states, # height
hatch='/',
facecolor='w',
alpha=0.5
)
)
ax3.add_patch(
patches.Rectangle(
(35, 0), # (x,y)
15, # width
num_states, # height
hatch='/',
facecolor='w',
alpha=0.5
)
)
Explanation: then we start to see the emergence of the S-shaped place field progressions again, indicating that the reward locations are overexpressed by several different states.
This observation is even more pronounced if we increase the number of states further:
End of explanation
import matplotlib.patches as patches
fig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(17, 3))
ax1.matshow(state_pos_b[stateorder_b,:], interpolation='none', cmap='OrRd')
ax1.set_ylabel('state')
ax1.set_xticklabels([])
ax1.set_yticklabels([])
ax1.set_title('learned place fields BOTH; RUN > ' + str(vth) + '; m = ' + str(num_states) + '; ds = ' + str(ds), y=1.02)
ax1.plot([13, 35], [0, num_states], color='k', linestyle='dashed', linewidth=2)
ax1.plot([7, 41], [0, num_states], color='k', linestyle='dashed', linewidth=2)
ax1.axis('tight')
ax2.matshow(state_pos_1[stateorder_1,:], interpolation='none', cmap='OrRd')
ax2.set_ylabel('state')
ax2.set_xticklabels([])
ax2.set_yticklabels([])
ax2.set_title('learned place fields FIRST; RUN > ' + str(vth) + '; m = ' + str(num_states) + '; ds = ' + str(ds), y=1.02)
ax2.plot([13, 13], [0, num_states], color='gray', linestyle='dashed', linewidth=1)
ax2.plot([7, 7], [0, num_states], color='k', linestyle='dashed', linewidth=2)
ax2.plot([35, 35], [0, num_states], color='gray', linestyle='dashed', linewidth=1)
ax2.plot([41, 41], [0, num_states], color='k', linestyle='dashed', linewidth=2)
ax2.axis('tight')
ax2.add_patch(
patches.Rectangle(
(-1, 0), # (x,y)
8, # width
num_states, # height
hatch='/',
facecolor='w',
alpha=0.5
)
)
ax2.add_patch(
patches.Rectangle(
(41, 0), # (x,y)
11, # width
num_states, # height
hatch='/',
facecolor='w',
alpha=0.5
)
)
ax3.matshow(state_pos_2[stateorder_2,:], interpolation='none', cmap='OrRd')
ax3.set_ylabel('state')
ax3.set_xticklabels([])
ax3.set_yticklabels([])
ax3.set_title('learned place fields SECOND; RUN > ' + str(vth) + '; m = ' + str(num_states) + '; ds = ' + str(ds), y=1.02)
ax3.plot([13, 13], [0, num_states], color='k', linestyle='dashed', linewidth=2)
ax3.plot([7, 7], [0, num_states], color='gray', linestyle='dashed', linewidth=1)
ax3.plot([35, 35], [0, num_states], color='k', linestyle='dashed', linewidth=2)
ax3.plot([41, 41], [0, num_states], color='gray', linestyle='dashed', linewidth=1)
ax3.axis('tight')
ax3.add_patch(
patches.Rectangle(
(-1, 0), # (x,y)
14, # width
num_states, # height
hatch='/',
facecolor='w',
alpha=0.5
)
)
ax3.add_patch(
patches.Rectangle(
(35, 0), # (x,y)
15, # width
num_states, # height
hatch='/',
facecolor='w',
alpha=0.5
)
)
fig.suptitle('State ordering not by peak location, but by the state transition probability matrix', y=1.08, fontsize=14)
saveFigure('posterfigs/zigzag.pdf')
state_pos_b[state_pos_b < np.transpose(np.tile(state_pos_b.max(axis=1),[state_pos_b.shape[1],1]))] = 0
state_pos_b[state_pos_b == np.transpose(np.tile(state_pos_b.max(axis=1),[state_pos_b.shape[1],1]))] = 1
state_pos_1[state_pos_1 < np.transpose(np.tile(state_pos_1.max(axis=1),[state_pos_1.shape[1],1]))] = 0
state_pos_1[state_pos_1 == np.transpose(np.tile(state_pos_1.max(axis=1),[state_pos_1.shape[1],1]))] = 1
state_pos_2[state_pos_2 < np.transpose(np.tile(state_pos_2.max(axis=1),[state_pos_2.shape[1],1]))] = 0
state_pos_2[state_pos_2 == np.transpose(np.tile(state_pos_2.max(axis=1),[state_pos_2.shape[1],1]))] = 1
fig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(17, 3))
ax1.matshow(state_pos_b[peakorder_b,:], interpolation='none', cmap='OrRd')
ax1.set_ylabel('state')
ax1.set_xticklabels([])
ax1.set_yticklabels([])
ax1.set_title('learned place fields BOTH; RUN > ' + str(vth) + '; m = ' + str(num_states) + '; ds = ' + str(ds), y=1.02)
ax1.plot([13, 35], [0, num_states], color='k', linestyle='dashed', linewidth=2)
ax1.plot([7, 41], [0, num_states], color='k', linestyle='dashed', linewidth=2)
ax1.axis('tight')
ax2.matshow(state_pos_1[peakorder_1,:], interpolation='none', cmap='OrRd')
ax2.set_ylabel('state')
ax2.set_xticklabels([])
ax2.set_yticklabels([])
ax2.set_title('learned place fields FIRST; RUN > ' + str(vth) + '; m = ' + str(num_states) + '; ds = ' + str(ds), y=1.02)
ax2.plot([13, 35], [0, num_states], color='gray', linestyle='dashed', linewidth=1)
ax2.plot([7, 41], [0, num_states], color='k', linestyle='dashed', linewidth=2)
ax2.axis('tight')
ax3.matshow(state_pos_2[peakorder_2,:], interpolation='none', cmap='OrRd')
ax3.set_ylabel('state')
ax3.set_xticklabels([])
ax3.set_yticklabels([])
ax3.set_title('learned place fields SECOND; RUN > ' + str(vth) + '; m = ' + str(num_states) + '; ds = ' + str(ds), y=1.02)
ax3.plot([13, 35], [0, num_states], color='k', linestyle='dashed', linewidth=2)
ax3.plot([7, 41], [0, num_states], color='gray', linestyle='dashed', linewidth=1)
ax3.axis('tight')
Explanation: With enough expressiveness in the number of states, we see the S-shaped curve reappear, which suggests an overexpression of the reward locations, which is consistent with what we see with place cells in animals.
End of explanation
<END_TASK> |
15,921 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Basics of Marvin
In this notebook, you will learn the common core functionality across many of the Marvin Tools. This includes the basics of accessing and handling MaNGA data from different locations, as well as a beginners guide of interacting with data via the core tools of Cubes and Maps. Much of this information can be found in the Getting Started section of the Marvin documentation.
Table of Contents
Data Access Modes <br>
Interacting with Data Content <br>
Object Datamodels <br>
Extracting Data <br>
Accessing Associated Objects<br>
Quality and Target Flags<br>
NSA and DAPall Metadata<br>
Downloading Files<br>
Interacting with Spaxels<br>
<a id="common"></a>
Common Core Functionality
Marvin Tools provides the core functionality for accessing MaNGA data with Marvin. At the lowest level they are class wrappers around data products or elements (datacubes, maps, and spaxels). Their purpose is to provide a more natural way to interact with the data, unconstrained by specific data storage architectures such as files or databases, or by data location such as a laptop or remote server.
All Marvin Tools are built off a set of common core functionalities and features, and thus behave in very similar ways. There are four main classes provided in Marvin
Step1: You may see the info message No release version set. Setting default to DR15. The default data release Marvin uses is DR15. For this tutorial, we will be using DR15 data so there is no need to do anything further. In general, to change which release Marvin uses, we can use the Marvin config.setRelease method on the config object.
Step2: <a id="modes"></a>
Data Access Modes
Let's instantiate our first cube, for galaxy with plate-IFU designation 8485-1901. Each marvin tool takes a string id input and can be one of
Step3: Once instantiated, each marvin tool indicates a mode and a data-origin. Note that mode="local" and data_origin="file". This indicates that we have accessed a file from our local filesystem. Our cube now has a filename attribute indicating its location on disk.
Step4: What if we don't have the MaNGA data products for our target of interest? We can specify the plateifu or mangaid of the target and Marvin will access it remotely. Let's open a cube for 8485-1902.
Step5: Now note that mode="remote" and data_origin="api". This indicates the cube has been remotely loaded using the built-in Marvin API. By specifying an input plateifu or mangaid, marvin will always first try to access the data locally, and then fall back to remote access. You can also explicitly load a file using the filename keyword argument.
Step6: <a id="interaction"></a>
Interacting with Data Content
Regardless of how you load an object, the way we interact with all the tools, and the way they behave, is the same. They also all provide some quick access to basic metadata, like coordinates, header information, and WCS.
<a id="metadata"></a>
Step7: <a id="datamodel"></a>
Object Datamodels
Every Marvin object comes with a built-in datamodel, which shows us what extensions are available, how they are named, and what they contain.
Step8: The datamodel for a cube contains 3 datacube extensions and 2 spectral extensions. Let's look at them more closely.
Step9: This tells us that this cube has three associated 3D datacubes, flux, dispersion, and dispersion_prepixel, and two associated spectra, spectral_resolution and spectral_resolution_prepixel, as well as their associated units. The description attribute provides information on what the extension is.
Step10: The names of each of the datacubes and spectral extensions are available as object attributes and can be used to access the associated data quantity. Let's access the flux for our current cube.
Step11: The flux is represented as a 3D array with units. We can also access the inverse variance and the mask using flux.ivar and flux.mask, respectively. Marvin uses Astropy quantities to represent all internal multidimensional data, e.g. 3-d datacube and 2-d map arrays, or 1-d spectral data.
<a id="extract"></a>
Extracting data
All marvin arrays can be sliced to extract subsets of data. Slices of datacubes produce new datacubes. Slices of spectra produce new spectra. A single slice from a datacube extracts a spectrum. Each spectrum can be easily displayed with its plot attribute.
Step12: <a id="associates"></a>
Accessing Associated Objects
For a given target galaxy, there are many associated MaNGA Data Products from both the reduction and analysis pipelines. Marvin seamlessly connects these together so you don't have to access them individually. Let's access the DAP Maps objects associated with our cube. To do so we use the getMaps method on cube. All instances have getXXX methods that allow you to access associated data objects for a given target.
Step13: When accessing associated objects, Marvin also smartly determines whether to open the file locally or remotely grab it over the API. In our case, the mode is local and data_origin is file. DAP Maps have different bintypes. The default binning is HYB10, a hybrid binning scheme to S/N~10. To load a different bintype, use the bintype keyword argument in the cube.getMaps method.
A Maps behaves very similarly to a Cube and everything we have discussed above will still work. Instead of datacubes and spectra, a Maps object contains a set of 2D quantities called Map, each one of them representing a different property measured by the DAP. One can get a full list of all the properties available using the datamodel.
Step14: Note that some properties such as spx_skycoo have multiple channels (in this case the on-sky x and y coordinates). To access the individual property, simply combine the property name and the channel name with an underscore, i.e. spx_skycoo_on_sky_x. As before, we can get more information about a property using the description attribute.
Step15: More information about Marvin datamodels can be found here. See the DR15 Datamodel for a full description of the available datamodels for DR15.
We can retrieve the map associated to a specific property directly from the Maps instance. For example, let’s get the Hα emission line flux (fitted by a Gaussian). As Map quantities are similar to DataCube, we can easily plot the 2d map.
Step16: Note that the plot method returns the matplotlib Figure and Axes for the plot. We can use those to modify or save the plot. Marvin plotting routines try to select the best parameters, colour maps, and dynamic ranges. You can modify those by passing extra arguments to plot. You can learn more in the Map plotting section. We will talk about the Map class in detail in Working with Astropy Quantities and in Map.
Remember that all associated objects are connected together. They can be accessed either from within an object instance or loaded independently from the class itself. From within an instance, the core tools, Maps, RSS, and ModelCube are available using the getXXX methods, as well as additional auxillary tools and object data, such as Image.
Step17: <a id="flags"></a>
Quality and Target Flags and Maskbits
Each MaNGA target has associated quality assurance flags as well as targeting flags. Quality flags indicate the overall quality of the data and/or any problems with the data discovered during the reduction and analysis pipeline runs.
Step18: In this case the MANGA_DRP3QUAL maskbit does not have any bit activated, which means the data is safe to use. See the Maskbits section for more information on how to interact with Marvin Maskbits.
The targeting flags highlight the targeting bits set for this galaxy and tell you which sample the target belongs to.
Step19: Note that in this case the galaxy belongs to the secondary sample from the final target selection (SECONDARY_v1_2_0) as well as to the secondary samples from several commissioning target selections. The galaxy does not have any ancillary bit (manga_target3). A manga_target3 bit that is set indicates the target belongs to one of the MaNGA Ancillary programs.
<a id="mixin"></a>
NSA, DAPall, and Value-Added catalog data
For each target we can also access additional catalog data
Step20: SDSS members produce Value-Added Catalogs (VACS) of MaNGA products. These products can be incorporated into Marvin. Any incorporated VACS in Marvin are contained in the vacs attribute attached to each tool. More information on VACs in Marvin can be found here. Currently the MaNGA-HI VAC for DR15 has been contributed into Marvin. Let's take a look at it.
Step21: vacs is a container (list) of all names of the vacs contributed into Marvin. These names are dottable to access to vac data. These contributed custom products can be of any format chosen by the VAC owner, a Python class, a dictionary, an array, etc.
Step22: <a id='download'></a>
Downloading files
While Marvin allows you to access data remotely, frequently you will find that you want to download the file associated with an object so that you can access it more quickly in the future. We can do that using the download method attached to all marvin core tools. Let’s try to load a cube that we know we do not have on our hard drive, and download it.
Step23: Each tool contains a download method to download that individual file. If you want to download a list of targets in bulk, you can use the downloadList utility function. This function uses the built-in Python product sdss_access to download files from the Utah SAS. downloadList accepts as input a list of target ids, e.g. plateifus or mangaids. The dltype keyword indicates the type of file you wish to download. The default file is cube. Let's use the utility function to download a list of DAP MAPS files for three targets.
Step24: <a id="spaxels"></a>
Interacting with Spaxels
All of the core Marvin Galaxy Tools have the ability to extract one or more Spaxel objects. Spaxel objects are special Marvin objects that make it easier to deal with individual spaxels in galaxies and access their properties. See the Getting Started with Spaxels for more information on how to interact with Spaxels. We can either use the standard array slicing notation (0-indexed, origin of coordinates in the lower left corner of the array)
Step25: or we can use getSpaxel method, which accepts multiple arguments. Note that by default, (x, y) coordinates passed to getSpaxel are measured from the center of the array. | <ASSISTANT_TASK:>
Python Code:
from marvin.tools import Cube
Explanation: Basics of Marvin
In this notebook, you will learn the common core functionality across many of the Marvin Tools. This includes the basics of accessing and handling MaNGA data from different locations, as well as a beginner's guide to interacting with data via the core tools of Cubes and Maps. Much of this information can be found in the Getting Started section of the Marvin documentation.
Table of Contents
Data Access Modes <br>
Interacting with Data Content <br>
Object Datamodels <br>
Extracting Data <br>
Accessing Associated Objects<br>
Quality and Target Flags<br>
NSA and DAPall Metadata<br>
Downloading Files<br>
Interacting with Spaxels<br>
<a id="common"></a>
Common Core Functionality
Marvin Tools provides the core functionality for accessing MaNGA data with Marvin. At the lowest level they are class wrappers around data products or elements (datacubes, maps, and spaxels). Their purpose is to provide a more natural way to interact with the data, unconstrained by specific data storage architectures such as files or databases, or by data location such as a laptop or remote server.
All Marvin Tools are built off a set of common core functionalities and features, and thus behave in very similar ways. There are four main classes provided in Marvin: the Cube, the RSS, the Maps, and the ModelCube.
We will start by focusing on a Marvin Cube to demonstrate the common features and handling of Marvin Tools. Everything discussed here also applies to any of the other available Marvin Tools listed above.
Let's start by importing the Marvin Cube class.
End of explanation
from marvin import config
# set the release to DR15
config.setRelease("DR15")
Explanation: You may see the info message No release version set. Setting default to DR15. The default data release Marvin uses is DR15. For this tutorial, we will be using DR15 data so there is no need to do anything further. In general, to change which release Marvin uses, we can use the Marvin config.setRelease method on the config object.
End of explanation
# instantiate a cube for 8485-1901
cube = Cube('8485-1901')
cube
Explanation: <a id="modes"></a>
Data Access Modes
Let's instantiate our first cube, for galaxy with plate-IFU designation 8485-1901. Each marvin tool takes a string id input that can be one of: a string filepath and name, a plate-IFU designation, or a manga-ID designation. Marvin will attempt to identify the input format and properly load the target data.
End of explanation
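# Equivalent ways of pointing Marvin at the same target (sketch); the
# manga-ID 1-209232 for 8485-1901 is the value used in the Marvin docs.
cube_from_plateifu = Cube(plateifu='8485-1901')
cube_from_mangaid = Cube(mangaid='1-209232')
# cube_from_file = Cube(filename='/path/to/manga-8485-1901-LOGCUBE.fits.gz')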
# print the local filename
cube.filename
Explanation: Once instantiated, each marvin tool indicates a mode and a data-origin. Note that mode="local" and data_origin="file". This indicates that we have accessed a file from our local filesystem. Our cube now has a filename attribute indicating its location on disk.
End of explanation
remote_cube = Cube('8485-1902')
remote_cube
Explanation: What if we don't have the MaNGA data products for our target of interest? We can specify the plateifu or mangaid of the target and Marvin will access it remotely. Let's open a cube for 8485-1902.
End of explanation
# Explicitly load a cube from a file on disk
filename = '/Users/Brian/Work/sdss/sas/dr15/manga/spectro/redux/v2_4_3/8485/stack/manga-8485-1901-LOGCUBE.fits.gz'
cube = Cube(filename=filename)
cube
Explanation: Now note that mode="remote" and data_origin="api". This indicates the cube has been remotely loaded using the built-in Marvin API. By specifying an input plateifu or mangaid, marvin will always first try to access the data locally, and then fall back to remote access. You can also explicitly load a file using the filename keyword argument.
End of explanation
print('Target Coordinates:', cube.ra, cube.dec)
print('Header:')
# access the PRIMARY header for the current object
cube.header
# show the WCS for the current object
cube.wcs
Explanation: <a id="interaction"></a>
Interacting with Data Content
Regardless of how you load an object, the way we interact with all the tools, and the way they behave, is the same. They also all provide some quick access to basic metadata, like coordinates, header information, and WCS.
<a id="metadata"></a>
End of explanation
# look up the cube datamodel
datamodel = cube.datamodel
datamodel
Explanation: <a id="datamodel"></a>
Object Datamodels
Every Marvin object comes with a built-in datamodel, which shows us what extensions are available, how they are named, and what they contain.
End of explanation
# see what datacubes are available
print('Datacubes:')
datamodel.datacubes
# see what additional spectral extensions are available
print('Spectra:')
datamodel.spectra
Explanation: The datamodel for a cube contains 3 datacube extensions and 2 spectral extensions. Let's look at them more closely.
End of explanation
print('Flux description:', datamodel.datacubes.flux.description)
print('Spectral Resolution description:', datamodel.spectra.spectral_resolution.description)
Explanation: This tells us that this cube has three associated 3D datacubes, flux, dispersion, and dispersion_prepixel, and two associated spectra, spectral_resolution and spectral_resolution_prepixel, as well as their associated units. The description attribute provides information on what the extension is.
End of explanation
flux = cube.flux
flux
Explanation: The names of each of the datacubes and spectral extensions are available as object attributes and can be used to access the associated data quantity. Let's access the flux for our current cube.
End of explanation
# create a small datacube subset in a 10x10 pixel region around the center of the target
central_pixel = int(cube.flux.shape[1]/2)
lo = central_pixel - 5
hi = central_pixel + 5
subset = flux[:, lo:hi, lo:hi]
subset
# extract a single spectrum and display it
spectrum = flux[:, 16, 17]
spectrum
spectrum.plot(show_std=True)
Explanation: The flux is represented as a 3D array with units. We can also access the inverse variance and the mask using flux.ivar and flux.mask, respectively. Marvin uses Astropy quantities to represent all internal multidimensional data, e.g. 3-d datacube and 2-d map arrays, or 1-d spectral data.
<a id="extract"></a>
Extracting data
All marvin arrays can be sliced to extract subsets of data. Slices of datacubes produce new datacubes. Slices of spectra produce new spectra. A single slice from a datacube extracts a spectrum. Each spectrum can be easily displayed with its plot attribute.
End of explanation
# access the Maps object from our cube
maps = cube.getMaps()
maps
Explanation: <a id="associates"></a>
Accessing Associated Objects
For a given target galaxy, there are many associated MaNGA Data Products from both the reduction and analysis pipelines. Marvin seamlessly connects these together so you don't have to access them individually. Let's access the DAP Maps objects associated with our cube. To do so we use the getMaps method on cube. All instances have getXXX methods that allow you to access associated data objects for a given target.
End of explanation
# look at the datamodel for the maps
maps.datamodel
Explanation: When accessing associated objects, Marvin also smartly determines whether to open the file locally or remotely grab it over the API. In our case, the mode is local and data_origin is file. DAP Maps have different bintypes. The default binning is HYB10, a hybrid binning scheme to S/N~10. To load a different bintype, use the bintype keyword argument in the cube.getMaps method.
A Maps behaves very similarly to a Cube and everything we have discussed above will still work. Instead of datacubes and spectra, a Maps object contains a set of 2D quantities called Map, each one of them representing a different property measured by the DAP. One can get a full list of all the properties available using the datamodel.
End of explanation
print('spx_skycoo_on_sky_x: ', maps.datamodel.spx_skycoo_on_sky_x.description)
Explanation: Note that some properties such as spx_skycoo have multiple channels (in this case the on-sky x and y coordinates). To access the individual property, simply combine the property name and the channel name with an underscore, i.e. spx_skycoo_on_sky_x. As before, we can get more information about a property using the description attribute.
End of explanation
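# The same property can also be retrieved with a string key (sketch);
# this is equivalent to the attribute access used in the next cell.
ha_from_key = maps['emline_gflux_ha_6564']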
# grab the map of H-alpha emission line flux
ha = maps.emline_gflux_ha_6564
ha
# plot the H-alpha map
fig, ax = ha.plot()
Explanation: More information about Marvin datamodels can be found here. See the DR15 Datamodel for a full description of the available datamodels for DR15.
We can retrieve the map associated to a specific property directly from the Maps instance. For example, let’s get the Hα emission line flux (fitted by a Gaussian). As Map quantities are similar to DataCube, we can easily plot the 2d map.
End of explanation
# load the same Maps object by itself
from marvin.tools import Maps
my_maps = Maps('8485-1901')
my_maps
# get the stellar velocity map
stvel = my_maps.stellar_velocity
# load an auxiliary tool, the gri color image of the target galaxy using the Marvin Image tool
# get the Image of the galaxy and plot it
image = cube.getImage()
image.plot()
Explanation: Note that the plot method returns the matplotlib Figure and Axes for the plot. We can use those to modify or save the plot. Marvin plotting routines try to select the best parameters, colour maps, and dynamic ranges. You can modify those by passing extra arguments to plot. You can learn more in the Map plotting section. We will talk about the Map class in detail in Working with Astropy Quantities and in Map.
Remember that all associated objects are connected together. They can be accessed either from within an object instance or loaded independently from the class itself. From within an instance, the core tools, Maps, RSS, and ModelCube are available using the getXXX methods, as well as additional auxillary tools and object data, such as Image.
End of explanation
# look at the overall cube quality flags
cube.quality_flag
Explanation: <a id="flags"></a>
Quality and Target Flags and Maskbits
Each MaNGA target has associated quality assurance flags as well as targeting flags. Quality flags indicate the overall quality of the data and/or any problems with the data discovered during the reduction and analysis pipeline runs.
End of explanation
# look at the targeting flags for this galaxy
cube.target_flags
Explanation: In this case the MANGA_DRP3QUAL maskbit does not have any bit activated, which means the data is safe to use. See the Maskbits section for more information on how to interact with Marvin Maskbits.
The targeting flags highlight the targeting bits set for this galaxy and tell you which sample the target belongs to.
End of explanation
# look at the NSA properties for this galaxy from the cube instance
nsa = cube.nsa
nsa
# look at the DAP all properties from the maps instance
maps.dapall
Explanation: Note that in this case the galaxy belongs to the secondary sample from the final target selection (SECONDARY_v1_2_0) as well as to the secondary samples from several commissioning target selections. The galaxy does not have any ancillary bit (manga_target3). A manga_target3 bit that is set indicates the target belongs to one of the MaNGA Ancillary programs.
<a id="mixin"></a>
NSA, DAPall, and Value-Added catalog data
For each target we can also access additional catalog data: the associated parameters from the NASA Sloan Atlas, and the DAPall file. The NSA catalog contains measured global galaxy photometric and shape profile information, while the DAPall catalog contains aggregate statistics as measured by the MaNGA DAP pipeline. NSA properties are available on all Marvin objects under the nsa attribute. DAPall properties are available on all Marvin Maps and Modelcube objects.
End of explanation
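# Individual NSA parameters behave like dictionary entries with dottable
# names (sketch; 'z' and 'elpetro_ba' are assumed to be present for this target).
print('redshift:', nsa['z'])
print('axis ratio b/a:', nsa.elpetro_ba)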
# access the available VACs for this object
vacs = cube.vacs
vacs
Explanation: SDSS members produce Value-Added Catalogs (VACS) of MaNGA products. These products can be incorporated into Marvin. Any incorporated VACS in Marvin are contained in the vacs attribute attached to each tool. More information on VACs in Marvin can be found here. Currently the MaNGA-HI VAC for DR15 has been contributed into Marvin. Let's take a look at it.
End of explanation
# access the mangahi vac
hi = vacs.mangahi
print(hi)
print('data type:', type(hi))
# access any HI data available for 8485-1901
hi.data
Explanation: vacs is a container (list) of all names of the vacs contributed into Marvin. These names are dottable to access the vac data. These contributed custom products can be of any format chosen by the VAC owner, a Python class, a dictionary, an array, etc.
End of explanation
# Check our remote cube we loaded earlier
remote_cube
# download the remote cube
# remote_cube.download()
# loading the cube a second time loads it from the new file
new_cube = Cube('8485-1902')
new_cube
Explanation: <a id='download'></a>
Downloading files
While Marvin allows you to access data remotely, frequently you will find that you want to download the file associated with an object so that you can access it more quickly in the future. We can do that using the download method attached to all marvin core tools. Let’s try to load a cube that we know we do not have on our hard drive, and download it.
End of explanation
from marvin.utils.general import downloadList
# create a list of target ids
plateifus = ['8485-1901', '8485-1902', '7443-1901']
# download the DAP 2d MAPS files for these targets
# dllist = downloadList(plateifus, dltype='maps')
Explanation: Each tool contains a download method to download that individual file. If you want to download a list of targets in bulk, you can use the downloadList utility function. This function uses the built-in Python product sdss_access to download files from the Utah SAS. downloadList accepts as input a list of target ids, e.g. plateifus or mangaids. The dltype keyword indicates the type of file you wish to download. The default file is cube. Let's use the utility function to download a list of DAP MAPS files for three targets.
End of explanation
# extract the spaxel at array index x=10, y=15
spaxel = cube[15, 10]
spaxel
Explanation: <a id="spaxels"></a>
Interacting with Spaxels
All of the core Marvin Galaxy Tools have the ability to extract one or more Spaxel objects. Spaxel objects are special Marvin objects that make it easier to deal with individual spaxels in galaxies and access their properties. See the Getting Started with Spaxels page for more information on how to interact with Spaxels. We can either use the standard array slicing notation (0-indexed, origin of coordinates in the lower left corner of the array)
End of explanation
# get the spaxel at the center of the target galaxy
central_spaxel = cube.getSpaxel(x=0, y=0)
central_spaxel
# we can plot the spectrum of the spaxel
spaxel.flux.plot()
Explanation: or we can use getSpaxel method, which accepts multiple arguments. Note that by default, (x, y) coordinates passed to getSpaxel are measured from the center of the array.
End of explanation
<END_TASK> |
15,922 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
The big reset
So I went ahead and cleared the memory.
Step1: The mystery section remains the same.
Step2: All the blocks are empty.
Step3: The 'PresetStyle' settings are empty, too.
Step4: Each of the registry settings is completely blank.
Interesting things to note
Step5: The only difference seems to be two bytes in the mystery section, at offsets 0x07 and 0x08.
Perhaps this has to do with some kind of internal wear levelling or something.
Registration extension
Now that the memory has been cleared, we can hopefully figure out more about the registration settings.
Recording Bank 3, Button 2 as the following settings
Step6: I believe the only real way to get unrecorded settings is to reset the memory, which clears all the values to zero.
This means that the first byte which has a value of 01 for all recorded settings can indeed be used as a flag... along with the FF byte at offset 24, and any other setting that cannot be set to a value of zero, such as the Pitch Bend range, Reverb type, Chorus type, and panel Sustain.
Personally, I think it makes more sense for the first byte to act as the recorded flag, so I think I'll use that. | <ASSISTANT_TASK:>
Python Code:
import sys
sys.path.append('..')
import collections
import mido
from commons import dgxdump
from commons.dumpdata import messages, songdata, regdata, regvalues
old_syx_messages = mido.read_syx_file('../data/syxout5.syx')
clear_syx_messages = mido.read_syx_file('../data/clear_bulk.txt')
o_dump = dgxdump.DgxDump(old_syx_messages)
c_dump = dgxdump.DgxDump(clear_syx_messages)
# songs slices
songslices = collections.OrderedDict([
('songs', slice(0x00, 0x01)),
('mystery', slice(0x01, 0x15D)),
('tracks', slice(0x15D, 0x167)),
('durations', slice(0x167, 0x17B)),
('trackdurations', slice(0x17B, 0x1F3)),
('presetstyle', slice(0x1F3, 0x22F)),
('beginningblocks', slice(0x22F, 0x24D)),
('nextblocks', slice(0x24D, 0x2CF)),
('startmarker', slice(0x2CF, 0x2D5)),
('blockdata', slice(0x2D5, 0x106D5)),
('endmarker', slice(0x106D5, None)),
])
EXPECTED_SIZE = 0x106DB
PRESETSTYLE = b'PresetStyle\0'*5
MARKER = b'PK0001'
def hex_string(data):
return " ".join("{:02X}".format(b) for b in data)
def bin_string(data):
return " ".join("{:08b}".format(b) for b in data)
def line_hex(data, head=None, tail=0):
if head is None:
head = len(data)
tailstart = len(data) - tail
if tailstart <= head:
return (hex_string(data))
else:
return ("{} .. {}".format(hex_string(data[:head]), hex_string(data[tailstart:])))
def song_section(dump, section):
return dump.song_data.data[songslices[section]]
for sec in songslices:
print(sec)
print(line_hex(song_section(o_dump, sec), 32, 4))
print(line_hex(song_section(c_dump, sec), 32, 4))
song_section(o_dump, 'mystery') == song_section(c_dump, 'mystery')
Explanation: The big reset
So I went ahead and cleared the memory.
End of explanation
all(b==0 for b in song_section(c_dump, 'nextblocks'))
all(b==0 for b in song_section(c_dump, 'blockdata'))
Explanation: The mystery section remains the same.
End of explanation
bytes(song_section(c_dump, 'presetstyle'))
Explanation: All the blocks are empty.
End of explanation
print(line_hex(o_dump.reg_data.data, 32, 4))
print(line_hex(c_dump.reg_data.data, 32, 4))
for bank in range(1, 8+1):
for button in range(1, 2+1):
print(bank, button)
print(line_hex(o_dump.reg_data.settings.get_setting(bank, button).data))
print(line_hex(c_dump.reg_data.settings.get_setting(bank, button).data))
Explanation: The 'PresetStyle' settings are empty, too.
End of explanation
for x in range(2, 7):
!diff -qs ../data/backup_experiment/cb1.txt ../data/backup_experiment/cb{x}.txt
!diff -qs ../data/backup_experiment/cb1.txt ../data/clear_bulk.txt
c2_syx_messages = mido.read_syx_file('../data/backup_experiment/cb1.txt')
c2_dump = dgxdump.DgxDump(c2_syx_messages)
c_dump.song_data.data == c2_dump.song_data.data
c_dump.reg_data.data == c2_dump.reg_data.data
for sec in songslices:
c_sec = song_section(c_dump, sec)
c2_sec = song_section(c2_dump, sec)
if c_sec != c2_sec:
print(sec)
print(line_hex(c_sec, 32, 4))
print(line_hex(c2_sec, 32, 4))
for n, (a, b) in enumerate(zip(c_dump.song_data.data, c2_dump.song_data.data)):
if a != b:
print("{0:02X}: {1:02X} {2:02X} ({1:03d} {2:03d})".format(n, a, b))
Explanation: Each of the registry settings is completely blank.
Interesting things to note: the first byte is 0 instead of 1, which probably indicates that the setting is unused.
The bytes that were FF in each recorded setting are 00 here.
Investigating FUNCTION backup
According to the manual (page 49), the following settings can be saved to backup, i.e. persistent memory for startup bu holding the FUNCTION button:
User songs (These are saved when recorded anyway)
Style files (the ones loaded using SmartMedia)
Touch response (ON/OFF)
Registration memory
These function settings:
Tuning
Split point
Touch sensitivity
Style volume
Song volume
Metronome volume
Grade
Demo cancel
Language
Media Select
Panel Sustain.
These backup settings are also cleared with the rest of the memory.
The default values for these settings are as follows:
| setting | default |
|-------------------|--------------|
| Touch response | ON |
| Tuning | 000 |
| Split point | 54 (F#2) |
| Touch sensitivity | 2 (Medium) |
| Style volume | 100 |
| Song volume | 100 |
| Metronome volume | 100 |
| Grade | ON |
| Demo cancel | OFF |
| Language | English |
| Media Select | Flash Memory |
| Panel sustain | OFF |
As an experiment, I changed the values of the function settings:
| setting | new value |
|-------------------|--------------|
| Touch response | ON |
| Tuning | 057 |
| Split point | 112 (E7) |
| Touch sensitivity | 3 (Hard) |
| Style volume | 045 |
| Song volume | 079 |
| Metronome volume | 121 |
| Grade | OFF |
| Demo cancel | ON |
| Language | Japanese |
| Media Select | Smart Media |
| Panel sustain | ON |
and without making a backup:
- took a bulk dump. (cb1.txt),
- then made the backup, took another bulk dump, (cb2.txt),
- restarted with the new settings, took another (cb3.txt),
- reset everything to default without backup (cb4.txt),
- made a backup again and took another dump (cb5.txt),
- then restarted again (cb6.txt).
All of these files were identical to each other, which suggests that these backup settings are not stored in any part of the memory we can retrieve.
However, there is one interesting thing about these files: they differ from the dump I got immediately after resetting the memory (clear_bulk.txt).
End of explanation
r1_dump = dgxdump.DgxDump(mido.read_syx_file('../data/post_clear/1reg.syx'))
c2_dump.song_data.data == r1_dump.song_data.data
c2_dump.reg_data.data == r1_dump.reg_data.data
for bank in range(1, 8+1):
for button in range(1, 2+1):
if not all(x == 0 for x in r1_dump.reg_data.settings.get_setting(bank, button).data):
print(bank, button)
line_hex(r1_dump.reg_data.settings.get_setting(3, 2).data)
for bb in [(3, 2), (1, 1)]:
sets = r1_dump.reg_data.settings.get_setting(*bb)
print(line_hex(sets.data))
sets.print_settings()
sets.print_unusual()
Explanation: The only difference seems to be two bytes in the mystery section, at offsets 0x07 and 0x08.
Perhaps this has to do with some kind of internal wear levelling or something.
Registration extension
Now that the memory has been cleared, we can hopefully figure out more about the registration settings.
Recording Bank 3, Button 2 as the following settings:
| setting | value |
|------------------|-------|
| Style | 092 |
| Accompaniment | ON |
| Split point | 053 |
| Main A/B | A |
| Style vol | 050 |
| Main voice | 060 |
| Main Octave | -1 |
| Main Volume | 054 |
| Main Pan | 092 |
| Main Reverb | 078 |
| Main Chorus | 103 |
| Split | ON |
| Split voice | 003 |
| Split Octave | 0 |
| Split Volume | 108 |
| Split Pan | 064 |
| Split Reverb | 032 |
| Split Chorus | 127 |
| Dual | OFF |
| Dual voice | 201 |
| Dual Octave | +2 |
| Dual Volume | 095 |
| Dual Pan | 048 |
| Dual Reverb | 017 |
| Dual Chorus | 082 |
| Pitch bend range | 05 |
| Reverb type | --(Room) |
| Chorus type | --(Celeste) |
| Harmony | OFF |
| Harmony type | 06(Trill1/4) |
| Harmony volume | 085/---* |
| Transpose | +03 |
| Tempo | 080 |
| Panel Sustain | ON |
*This was set using a different Harmony type setting.
End of explanation
r2_dump = dgxdump.DgxDump(mido.read_syx_file('../data/post_clear/2reg.txt'))
sets = r2_dump.reg_data.settings.get_setting(2,2)
sets.print_settings()
sets.print_unusual()
Explanation: I believe the only real way to get unrecorded settings is to reset the memory, which clears all the values to zero.
This means that the first byte which has a value of 01 for all recorded settings can indeed be used as a flag... along with the FF byte at offset 24, and any other setting that cannot be set to a value of zero, such as the Pitch Bend range, Reverb type, Chorus type, and panel Sustain.
Personally, I think it makes more sense for the first byte to act as the recorded flag, so I think I'll use that.
End of explanation
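# A minimal sketch of the idea above: treat byte 0 of a registration setting as the
# "recorded" flag. The is_recorded helper is mine, not part of the existing dump-parsing code.
def is_recorded(setting):
    # byte 0 reads 0x00 after a memory clear and 0x01 for every recorded setting seen so far
    return setting.data[0] == 0x01

for bank in range(1, 8+1):
    for button in range(1, 2+1):
        if is_recorded(r2_dump.reg_data.settings.get_setting(bank, button)):
            print('recorded:', bank, button)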
<END_TASK> |
15,923 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Numpy Exercise 4
Imports
Step1: Complete graph Laplacian
In discrete mathematics a Graph is a set of vertices or nodes that are connected to each other by edges or lines. If those edges don't have directionality, the graph is said to be undirected. Graphs are used to model social and communications networks (Twitter, Facebook, Internet) as well as natural systems such as molecules.
A Complete Graph, $K_n$ on $n$ nodes has an edge that connects each node to every other node.
Here is $K_5$
Step3: The Laplacian Matrix is extremely important in graph theory and numerical analysis. It is defined as $L=D-A$, where $D$ is the degree matrix and $A$ is the adjacency matrix. For the purpose of this problem you don't need to understand the details of these matrices, although their definitions are relatively simple.
The degree matrix for $K_n$ is an $n \times n$ diagonal matrix with the value $n-1$ along the diagonal and zeros everywhere else. Write a function to compute the degree matrix for $K_n$ using NumPy.
Step5: The adjacency matrix for $K_n$ is an $n \times n$ matrix with zeros along the diagonal and ones everywhere else. Write a function to compute the adjacency matrix for $K_n$ using NumPy.
Step6: Use NumPy to explore the eigenvalues or spectrum of the Laplacian L of $K_n$. What patterns do you notice as $n$ changes? Create a conjecture about the general Laplace spectrum of $K_n$. | <ASSISTANT_TASK:>
Python Code:
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
Explanation: Numpy Exercise 4
Imports
End of explanation
import networkx as nx
K_5=nx.complete_graph(5)
nx.draw(K_5)
Explanation: Complete graph Laplacian
In discrete mathematics a Graph is a set of vertices or nodes that are connected to each other by edges or lines. If those edges don't have directionality, the graph is said to be undirected. Graphs are used to model social and communications networks (Twitter, Facebook, Internet) as well as natural systems such as molecules.
A Complete Graph, $K_n$ on $n$ nodes has an edge that connects each node to every other node.
Here is $K_5$:
End of explanation
def complete_deg(n):
Return the integer valued degree matrix D for the complete graph K_n.
f=np.zeros((n,n), dtype=int) #creates integer matrix of size nxn
t=np.diag(np.diag(f)+(n-1)) #makes the diagonal n-1 and zeros elsewhere
return t
D = complete_deg(5)
assert D.shape==(5,5)
assert D.dtype==np.dtype(int)
assert np.all(D.diagonal()==4*np.ones(5))
assert np.all(D-np.diag(D.diagonal())==np.zeros((5,5),dtype=int))
Explanation: The Laplacian Matrix is extremely important in graph theory and numerical analysis. It is defined as $L=D-A$, where $D$ is the degree matrix and $A$ is the adjacency matrix. For the purpose of this problem you don't need to understand the details of these matrices, although their definitions are relatively simple.
The degree matrix for $K_n$ is an $n \times n$ diagonal matrix with the value $n-1$ along the diagonal and zeros everywhere else. Write a function to compute the degree matrix for $K_n$ using NumPy.
End of explanation
def complete_adj(n):
Return the integer valued adjacency matrix A for the complete graph K_n.
x=np.zeros((n,n), dtype=int) #creates integer matrix of size nxn
o=np.ones((n,n), dtype=int) #makes the diagonal 0 and ones elsewhere
return o+np.diag(np.diag(x)+-1)
A = complete_adj(5)
assert A.shape==(5,5)
assert A.dtype==np.dtype(int)
assert np.all(A+np.eye(5,dtype=int)==np.ones((5,5),dtype=int))
Explanation: The adjacency matrix for $K_n$ is an $n \times n$ matrix with zeros along the diagonal and ones everywhere else. Write a function to compute the adjacency matrix for $K_n$ using NumPy.
End of explanation
d=complete_deg(5) #calling function
a=complete_adj(5) #calling function
L=d-a #finding difference
np.linalg.eigvals(L) #eigenvalue answer
Explanation: Use NumPy to explore the eigenvalues or spectrum of the Laplacian L of $K_n$. What patterns do you notice as $n$ changes? Create a conjecture about the general Laplace spectrum of $K_n$.
End of explanation
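# A quick sketch for the question above: compute the spectrum of L = D - A for several n.
# The pattern it illustrates: the Laplacian of K_n has eigenvalue 0 once and eigenvalue n
# repeated n-1 times.
for n in (3, 4, 5, 8):
    L_n = complete_deg(n) - complete_adj(n)
    print(n, np.round(np.linalg.eigvalsh(L_n), 6))  # eigvalsh: eigenvalues of a symmetric matrix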
<END_TASK> |
15,924 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
ES-DOC CMIP6 Model Properties - Ocean
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Family
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables
Is Required
Step9: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required
Step10: 2.2. Eos Functional Temp
Is Required
Step11: 2.3. Eos Functional Salt
Is Required
Step12: 2.4. Eos Functional Depth
Is Required
Step13: 2.5. Ocean Freezing Point
Is Required
Step14: 2.6. Ocean Specific Heat
Is Required
Step15: 2.7. Ocean Reference Density
Is Required
Step16: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required
Step17: 3.2. Type
Is Required
Step18: 3.3. Ocean Smoothing
Is Required
Step19: 3.4. Source
Is Required
Step20: 4. Key Properties --> Nonoceanic Waters
Non oceanic waters treatement in ocean
4.1. Isolated Seas
Is Required
Step21: 4.2. River Mouth
Is Required
Step22: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required
Step23: 5.2. Code Version
Is Required
Step24: 5.3. Code Languages
Is Required
Step25: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required
Step26: 6.2. Canonical Horizontal Resolution
Is Required
Step27: 6.3. Range Horizontal Resolution
Is Required
Step28: 6.4. Number Of Horizontal Gridpoints
Is Required
Step29: 6.5. Number Of Vertical Levels
Is Required
Step30: 6.6. Is Adaptive Grid
Is Required
Step31: 6.7. Thickness Level 1
Is Required
Step32: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required
Step33: 7.2. Global Mean Metrics Used
Is Required
Step34: 7.3. Regional Metrics Used
Is Required
Step35: 7.4. Trend Metrics Used
Is Required
Step36: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required
Step37: 8.2. Scheme
Is Required
Step38: 8.3. Consistency Properties
Is Required
Step39: 8.4. Corrected Conserved Prognostic Variables
Is Required
Step40: 8.5. Was Flux Correction Used
Is Required
Step41: 9. Grid
Ocean grid
9.1. Overview
Is Required
Step42: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required
Step43: 10.2. Partial Steps
Is Required
Step44: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required
Step45: 11.2. Staggering
Is Required
Step46: 11.3. Scheme
Is Required
Step47: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required
Step48: 12.2. Diurnal Cycle
Is Required
Step49: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required
Step50: 13.2. Time Step
Is Required
Step51: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required
Step52: 14.2. Scheme
Is Required
Step53: 14.3. Time Step
Is Required
Step54: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required
Step55: 15.2. Time Step
Is Required
Step56: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required
Step57: 17. Advection
Ocean advection
17.1. Overview
Is Required
Step58: 18. Advection --> Momentum
Properties of lateral momemtum advection scheme in ocean
18.1. Type
Is Required
Step59: 18.2. Scheme Name
Is Required
Step60: 18.3. ALE
Is Required
Step61: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required
Step62: 19.2. Flux Limiter
Is Required
Step63: 19.3. Effective Order
Is Required
Step64: 19.4. Name
Is Required
Step65: 19.5. Passive Tracers
Is Required
Step66: 19.6. Passive Tracers Advection
Is Required
Step67: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required
Step68: 20.2. Flux Limiter
Is Required
Step69: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required
Step70: 21.2. Scheme
Is Required
Step71: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required
Step72: 22.2. Order
Is Required
Step73: 22.3. Discretisation
Is Required
Step74: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momemtum scheme in the ocean
23.1. Type
Is Required
Step75: 23.2. Constant Coefficient
Is Required
Step76: 23.3. Variable Coefficient
Is Required
Step77: 23.4. Coeff Background
Is Required
Step78: 23.5. Coeff Backscatter
Is Required
Step79: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required
Step80: 24.2. Submesoscale Mixing
Is Required
Step81: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required
Step82: 25.2. Order
Is Required
Step83: 25.3. Discretisation
Is Required
Step84: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required
Step85: 26.2. Constant Coefficient
Is Required
Step86: 26.3. Variable Coefficient
Is Required
Step87: 26.4. Coeff Background
Is Required
Step88: 26.5. Coeff Backscatter
Is Required
Step89: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required
Step90: 27.2. Constant Val
Is Required
Step91: 27.3. Flux Type
Is Required
Step92: 27.4. Added Diffusivity
Is Required
Step93: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required
Step94: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required
Step95: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
*Properties of boundary layer (BL) mixing on tracers in the ocean *
30.1. Type
Is Required
Step96: 30.2. Closure Order
Is Required
Step97: 30.3. Constant
Is Required
Step98: 30.4. Background
Is Required
Step99: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
*Properties of boundary layer (BL) mixing on momentum in the ocean *
31.1. Type
Is Required
Step100: 31.2. Closure Order
Is Required
Step101: 31.3. Constant
Is Required
Step102: 31.4. Background
Is Required
Step103: 32. Vertical Physics --> Interior Mixing --> Details
*Properties of interior mixing in the ocean *
32.1. Convection Type
Is Required
Step104: 32.2. Tide Induced Mixing
Is Required
Step105: 32.3. Double Diffusion
Is Required
Step106: 32.4. Shear Mixing
Is Required
Step107: 33. Vertical Physics --> Interior Mixing --> Tracers
*Properties of interior mixing on tracers in the ocean *
33.1. Type
Is Required
Step108: 33.2. Constant
Is Required
Step109: 33.3. Profile
Is Required
Step110: 33.4. Background
Is Required
Step111: 34. Vertical Physics --> Interior Mixing --> Momentum
*Properties of interior mixing on momentum in the ocean *
34.1. Type
Is Required
Step112: 34.2. Constant
Is Required
Step113: 34.3. Profile
Is Required
Step114: 34.4. Background
Is Required
Step115: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required
Step116: 35.2. Scheme
Is Required
Step117: 35.3. Embeded Seaice
Is Required
Step118: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required
Step119: 36.2. Type Of Bbl
Is Required
Step120: 36.3. Lateral Mixing Coef
Is Required
Step121: 36.4. Sill Overflow
Is Required
Step122: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required
Step123: 37.2. Surface Pressure
Is Required
Step124: 37.3. Momentum Flux Correction
Is Required
Step125: 37.4. Tracers Flux Correction
Is Required
Step126: 37.5. Wave Effects
Is Required
Step127: 37.6. River Runoff Budget
Is Required
Step128: 37.7. Geothermal Heating
Is Required
Step129: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required
Step130: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required
Step131: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required
Step132: 40.2. Ocean Colour
Is Required
Step133: 40.3. Extinction Depth
Is Required
Step134: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmopshere
Is Required
Step135: 41.2. From Sea Ice
Is Required
Step136: 41.3. Forced Mode Restoring
Is Required | <ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'messy-consortium', 'sandbox-2', 'ocean')
Explanation: ES-DOC CMIP6 Model Properties - Ocean
MIP Era: CMIP6
Institute: MESSY-CONSORTIUM
Source ID: SANDBOX-2
Topic: Ocean
Sub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing.
Properties: 133 (101 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:10
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean model code (NEMO 3.6, MOM 5.0,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OGCM"
# "slab ocean"
# "mixed layer ocean"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Primitive equations"
# "Non-hydrostatic"
# "Boussinesq"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the ocean.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# "Salinity"
# "U-velocity"
# "V-velocity"
# "W-velocity"
# "SSH"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the ocean component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Wright, 1997"
# "Mc Dougall et al."
# "Jackett et al. 2006"
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# TODO - please enter value(s)
Explanation: 2.2. Eos Functional Temp
Is Required: TRUE Type: ENUM Cardinality: 1.1
Temperature used in EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Practical salinity Sp"
# "Absolute salinity Sa"
# TODO - please enter value(s)
Explanation: 2.3. Eos Functional Salt
Is Required: TRUE Type: ENUM Cardinality: 1.1
Salinity used in EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pressure (dbars)"
# "Depth (meters)"
# TODO - please enter value(s)
Explanation: 2.4. Eos Functional Depth
Is Required: TRUE Type: ENUM Cardinality: 1.1
Depth or pressure used in EOS for sea water ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2.5. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.6. Ocean Specific Heat
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Specific heat in ocean (cpocean) in J/(kg K)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.7. Ocean Reference Density
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Boussinesq reference density (rhozero) in kg / m3
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Present day"
# "21000 years BP"
# "6000 years BP"
# "LGM"
# "Pliocene"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Reference date of bathymetry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.2. Type
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the bathymetry fixed in time in the ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Ocean Smoothing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any smoothing or hand editing of bathymetry in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.source')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Source
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe source of bathymetry in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Nonoceanic Waters
Non oceanic waters treatement in ocean
4.1. Isolated Seas
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how isolated seas is performed
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. River Mouth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how river mouth mixing or estuaries specific treatment is performed
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 50(Equator)-100km or 0.1-0.5 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.4. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.5. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.6. Is Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.7. Thickness Level 1
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Thickness of first surface ocean level (in meters)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Brief description of conservation methodology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Enstrophy"
# "Salt"
# "Volume of ocean"
# "Momentum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in the ocean by the numerical schemes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Consistency Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Any additional consistency properties (energy conversion, pressure gradient discretisation, ...)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Corrected Conserved Prognostic Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Set of variables which are conserved by more than the numerical scheme alone.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.5. Was Flux Correction Used
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does conservation involve flux correction ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Grid
Ocean grid
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of grid in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Z-coordinate"
# "Z*-coordinate"
# "S-coordinate"
# "Isopycnic - sigma 0"
# "Isopycnic - sigma 2"
# "Isopycnic - sigma 4"
# "Isopycnic - other"
# "Hybrid / Z+S"
# "Hybrid / Z+isopycnic"
# "Hybrid / other"
# "Pressure referenced (P)"
# "P*"
# "Z**"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical coordinates in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 10.2. Partial Steps
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Using partial steps with Z or Z* vertical coordinate in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Lat-lon"
# "Rotated north pole"
# "Two north poles (ORCA-style)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa E-grid"
# "N/a"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Staggering
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal grid staggering type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite difference"
# "Finite volumes"
# "Finite elements"
# "Unstructured grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of time stepping in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Via coupling"
# "Specific treatment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Diurnal Cycle
Is Required: TRUE Type: ENUM Cardinality: 1.1
Diurnal cycle type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracers time stepping scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Tracers time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Preconditioned conjugate gradient"
# "Sub cyling"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.3. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Baroclinic time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "split explicit"
# "implicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time splitting method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.2. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Barotropic time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Details of vertical time stepping in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17. Advection
Ocean advection
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of advection in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flux form"
# "Vector form"
# TODO - please enter value(s)
Explanation: 18. Advection --> Momentum
Properties of lateral momemtum advection scheme in ocean
18.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of lateral momemtum advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Scheme Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean momemtum advection scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.ALE')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 18.3. ALE
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Using ALE for vertical advection ? (if vertical coordinates are sigma)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Order of lateral tracer advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 19.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for lateral tracer advection scheme in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19.3. Effective Order
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Effective order of limited lateral tracer advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.4. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ideal age"
# "CFC 11"
# "CFC 12"
# "SF6"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19.5. Passive Tracers
Is Required: FALSE Type: ENUM Cardinality: 0.N
Passive tracers advected
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.6. Passive Tracers Advection
Is Required: FALSE Type: STRING Cardinality: 0.1
Is advection of passive tracers different than active ? if so, describe.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 20.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for vertical tracer advection scheme in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lateral physics in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Eddy active"
# "Eddy admitting"
# TODO - please enter value(s)
Explanation: 21.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transient eddy representation in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics momemtum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics momemtum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics momemtum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momentum scheme in the ocean
23.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics momentum eddy viscosity coeff type in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 23.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy viscosity coeff in lateral physics momentum scheme (in m2/s)
End of explanation
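Numeric (INTEGER/FLOAT) properties such as the constant coefficient above take an unquoted number. The 20000 m2/s below is an arbitrary placeholder, not a recommended value:
# Hypothetical completed cell (commented out):
# DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
# DOC.set_value(20000)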
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy viscosity coeff in lateral physics momentum scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Coeff Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe background eddy viscosity coeff in lateral physics momentum scheme (give values in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy viscosity coeff in lateral physics momentum scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a mesoscale closure in the lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 24.2. Submesoscale Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a submesoscale mixing parameterisation (i.e Fox-Kemper) in the lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Properties of eddy diffusivity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics tracers eddy diffusivity coeff type in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy diffusivity coeff in lateral physics tracers scheme (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy diffusivity coeff in lateral physics tracers scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26.4. Coeff Background
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Describe background eddy diffusivity coeff in lateral physics tracers scheme (give values in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 26.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy diffusivity coeff in lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "GM"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EIV in lateral physics tracers in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 27.2. Constant Val
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If EIV scheme for tracers is constant, specify coefficient value (M2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.3. Flux Type
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV flux (advective or skew)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.4. Added Diffusivity
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV added diffusivity (constant, flow dependent or none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vertical physics in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there Langmuir cells mixing in upper ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
*Properties of boundary layer (BL) mixing on tracers in the ocean*
30.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for tracers in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of tracers, specify the order of closure (0, 1, 2.5, 3)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of tracers, specify the coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of tracers coefficient (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
*Properties of boundary layer (BL) mixing on momentum in the ocean*
31.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for momentum in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 31.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of momentum, specify the order of closure (0, 1, 2.5, 3)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 31.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of momentum, specify the coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 31.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of momentum coefficient (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Non-penetrative convective adjustment"
# "Enhanced vertical diffusion"
# "Included in turbulence closure"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32. Vertical Physics --> Interior Mixing --> Details
*Properties of interior mixing in the ocean*
32.1. Convection Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical convection in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.2. Tide Induced Mixing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how tide induced mixing is modelled (barotropic, baroclinic, none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.3. Double Diffusion
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there double diffusion
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.4. Shear Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there interior shear mixing
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33. Vertical Physics --> Interior Mixing --> Tracers
*Properties of interior mixing on tracers in the ocean*
33.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for tracers in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 33.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of tracers, specify the coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for tracers (i.e is NOT constant) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of tracers coefficient (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 34. Vertical Physics --> Interior Mixing --> Momentum
*Properties of interior mixing on momentum in the ocean*
34.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for momentum in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 34.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of momentum, specify the coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for momentum (i.e is NOT constant) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of momentum coefficient (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of free surface in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear implicit"
# "Linear filtered"
# "Linear semi-explicit"
# "Non-linear implicit"
# "Non-linear filtered"
# "Non-linear semi-explicit"
# "Fully explicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 35.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Free surface scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 35.3. Embeded Seaice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the sea-ice embedded in the ocean model (instead of levitating) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of bottom boundary layer in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diffusive"
# "Acvective"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.2. Type Of Bbl
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of bottom boundary layer in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 36.3. Lateral Mixing Coef
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36.4. Sill Overflow
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any specific treatment of sill overflows
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of boundary forcing in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.2. Surface Pressure
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.3. Momentum Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.4. Tracers Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.5. Wave Effects
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how wave effects are modelled at ocean surface.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.6. River Runoff Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how river runoff from land surface is routed to ocean and any global adjustment done.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.7. Geothermal Heating
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how geothermal heating is present at ocean bottom.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Non-linear"
# "Non-linear (drag function of speed of tides)"
# "Constant drag coefficient"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum bottom friction in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Free-slip"
# "No-slip"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum lateral friction in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "1 extinction depth"
# "2 extinction depth"
# "3 extinction depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of sunlight penetration scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 40.2. Ocean Colour
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the ocean sunlight penetration scheme ocean colour dependent ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 40.3. Extinction Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe and list extinction depths for sunlight penetration scheme (if applicable).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmopshere
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from atmos in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Real salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41.2. From Sea Ice
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from sea-ice in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 41.3. Forced Mode Restoring
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of surface salinity restoring in forced mode (OMIP)
End of explanation
<END_TASK> |
15,925 | <SYSTEM_TASK:>
Given the following text problem statement, write Python code to implement the functionality described below in problem statement
<END_TASK>
<USER_TASK:>
Problem:
Given a distance matrix with similarities between various fruits:
Python Code:
import numpy as np
import pandas as pd
import sklearn.cluster
simM = load_data()
model = sklearn.cluster.AgglomerativeClustering(affinity='precomputed', n_clusters=2, linkage='complete').fit(simM)
cluster_labels = model.labels_
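# The labels above are one integer (0 or 1) per fruit, in the same row order as
# simM. An optional, illustrative way to inspect the resulting grouping:
print(cluster_labels)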
<END_TASK> |
15,926 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Vertex SDK
Step1: Restart the Kernel
Once you've installed the Vertex SDK and Google cloud-storage, you need to restart the notebook kernel so it can find the packages.
Step2: Before you begin
GPU run-time
Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime > Change Runtime Type > GPU
Set up your GCP project
The following steps are required, regardless of your notebook environment.
Select or create a GCP project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the AutoML APIs and Compute Engine APIs.
Google Cloud SDK is already installed in AutoML Notebooks.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note
Step3: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for AutoML. We recommend, when possible, choosing the region closest to you.
Currently project resources must be in the us-central1 region to use this API.
Step4: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session and append it to the names of the resources created in this tutorial.
Step5: Authenticate your GCP account
If you are using AutoML Notebooks, your environment is already
authenticated. Skip this step.
Note
Step6: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
This tutorial is designed to use training data that is in a public Cloud Storage bucket and a local Cloud Storage bucket for your batch predictions. You may alternatively use your own training data that you have stored in a local Cloud Storage bucket.
Set the name of your Cloud Storage bucket below. It must be unique across all Cloud Storage buckets.
Step7: Only if your bucket doesn't already exist
Step8: Finally, validate access to your Cloud Storage bucket by examining its contents
Step9: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
Step10: AutoML constants
Set up the following constants for AutoML
Step11: Clients
We use the Google APIs Client Library for Python to call the AI Platform Training and Prediction API (ml.googleapis.com) without manually constructing HTTP requests.
Step12: Prepare a trainer script
Package assembly
Step13: Task.py contents
Step14: Store training script on your Cloud Storage bucket
Step15: Train a model
projects.jobs.create
Request
Step16: Example output
Step17: Response
Step18: Example output
Step19: projects.jobs.get
Call
Step20: Response
Step21: Example output
Step22: Serving function for trained model (image data)
Step23: Example output
Step24: Example output
Step25: Example output
Step26: Response
Step27: Example output
Step28: projects.jobs.get
Call
Step29: Response
Step30: Example output
Step31: Example output
Step32: Example output
Step33: Response
Step34: Example output
Step35: projects.models.versions.create
Request
Step36: Example output
Step37: Response
Step38: Example output
Step39: Prepare input for online prediction
Step40: projects.predict
Request
Step41: Example output
Step42: Response
Step43: Example output
Step44: Response
Step45: Example output | <ASSISTANT_TASK:>
Python Code:
! pip3 install google-cloud-storage
Explanation: Vertex SDK: Train & deploy a TensorFlow model with hosted runtimes (aka pre-built containers)
Installation
Install the Google cloud-storage library as well.
End of explanation
import os
if not os.getenv("AUTORUN"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
Explanation: Restart the Kernel
Once you've installed the Vertex SDK and Google cloud-storage, you need to restart the notebook kernel so it can find the packages.
End of explanation
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
Explanation: Before you begin
GPU run-time
Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime > Change Runtime Type > GPU
Set up your GCP project
The following steps are required, regardless of your notebook environment.
Select or create a GCP project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the AutoML APIs and Compute Engine APIs.
Google Cloud SDK is already installed in AutoML Notebooks.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.
End of explanation
REGION = "us-central1" # @param {type: "string"}
Explanation: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported forAutoML. We recommend when possible, to choose the region closest to you.
Currently project resources must be in the us-central1 region to use this API.
End of explanation
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session and append it to the names of the resources created in this tutorial.
End of explanation
import os
import sys
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your Google Cloud account. This provides access
# to your Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# If on AutoML, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this tutorial in a notebook locally, replace the string
# below with the path to your service account key and run this cell to
# authenticate your Google Cloud account.
else:
%env GOOGLE_APPLICATION_CREDENTIALS your_path_to_credentials.json
# Log in to your account on Google Cloud
! gcloud auth login
Explanation: Authenticate your GCP account
If you are using AutoML Notebooks, your environment is already
authenticated. Skip this step.
Note: If you are on an AutoML notebook and run the cell, the cell knows to skip executing the authentication steps.
End of explanation
BUCKET_NAME = "[your-bucket-name]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "[your-bucket-name]":
BUCKET_NAME = PROJECT_ID + "aip-" + TIMESTAMP
Explanation: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
This tutorial is designed to use training data that is in a public Cloud Storage bucket and a local Cloud Storage bucket for your batch predictions. You may alternatively use your own training data that you have stored in a local Cloud Storage bucket.
Set the name of your Cloud Storage bucket below. It must be unique across all Cloud Storage buckets.
End of explanation
! gsutil mb -l $REGION gs://$BUCKET_NAME
Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
End of explanation
! gsutil ls -al gs://$BUCKET_NAME
Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents:
End of explanation
import json
import time
from google.protobuf.json_format import MessageToJson
from google.protobuf.struct_pb2 import Value
from googleapiclient import discovery, errors
Explanation: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
End of explanation
# AutoML location root path for your dataset, model and endpoint resources
PARENT = "projects/" + PROJECT_ID
Explanation: AutoML constants
Set up the following constants for AutoML:
PARENT: The AutoML location root path for dataset, model and endpoint resources.
End of explanation
cloudml = discovery.build("ml", "v1")
Explanation: Clients
We use the Google APIs Client Library for Python to call the AI Platform Training and Prediction API (ml.googleapis.com) without manually constructing HTTP requests.
End of explanation
! rm -rf cifar
! mkdir cifar
! touch cifar/README.md
setup_cfg = "[egg_info]\n\
tag_build =\n\
tag_date = 0"
! echo "$setup_cfg" > cifar/setup.cfg
setup_py = "import setuptools\n\
# Requires TensorFlow Datasets\n\
setuptools.setup(\n\
install_requires=[\n\
'tensorflow_datasets==1.3.0',\n\
],\n\
packages=setuptools.find_packages())"
! echo "$setup_py" > cifar/setup.py
pkg_info = "Metadata-Version: 1.0\n\
Name: Custom Training CIFAR-10\n\
Version: 0.0.0\n\
Summary: Demonstration training script\n\
Home-page: www.google.com\n\
Author: Google\n\
Author-email: aferlitsch@google.com\n\
License: Public\n\
Description: Demo\n\
Platform: Vertex AI"
! echo "$pkg_info" > cifar/PKG-INFO
! mkdir cifar/trainer
! touch cifar/trainer/__init__.py
Explanation: Prepare a trainer script
Package assembly
End of explanation
%%writefile cifar/trainer/task.py
import tensorflow_datasets as tfds
import tensorflow as tf
from tensorflow.python.client import device_lib
import argparse
import os
import sys
tfds.disable_progress_bar()
parser = argparse.ArgumentParser()
parser.add_argument('--model-dir', dest='model_dir',
default='/tmp/saved_model', type=str, help='Model dir.')
parser.add_argument('--lr', dest='lr',
default=0.01, type=float,
help='Learning rate.')
parser.add_argument('--epochs', dest='epochs',
default=10, type=int,
help='Number of epochs.')
parser.add_argument('--steps', dest='steps',
default=200, type=int,
help='Number of steps per epoch.')
parser.add_argument('--distribute', dest='distribute', type=str, default='single',
help='distributed training strategy')
args = parser.parse_args()
print('Python Version = {}'.format(sys.version))
print('TensorFlow Version = {}'.format(tf.__version__))
print('TF_CONFIG = {}'.format(os.environ.get('TF_CONFIG', 'Not found')))
print('DEVICES', device_lib.list_local_devices())
if args.distribute == 'single':
if tf.test.is_gpu_available():
strategy = tf.distribute.OneDeviceStrategy(device="/gpu:0")
else:
strategy = tf.distribute.OneDeviceStrategy(device="/cpu:0")
elif args.distribute == 'mirror':
strategy = tf.distribute.MirroredStrategy()
elif args.distribute == 'multi':
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
print('num_replicas_in_sync = {}'.format(strategy.num_replicas_in_sync))
BUFFER_SIZE = 10000
BATCH_SIZE = 64
def make_datasets_unbatched():
def scale(image, label):
image = tf.cast(image, tf.float32)
image /= 255.0
return image, label
datasets, info = tfds.load(name='cifar10',
with_info=True,
as_supervised=True)
return datasets['train'].map(scale).cache().shuffle(BUFFER_SIZE).repeat()
def build_and_compile_cnn_model():
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(32, 32, 3)),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Conv2D(32, 3, activation='relu'),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(
loss=tf.keras.losses.sparse_categorical_crossentropy,
optimizer=tf.keras.optimizers.SGD(learning_rate=args.lr),
metrics=['accuracy'])
return model
NUM_WORKERS = strategy.num_replicas_in_sync
GLOBAL_BATCH_SIZE = BATCH_SIZE * NUM_WORKERS
train_dataset = make_datasets_unbatched().batch(GLOBAL_BATCH_SIZE)
with strategy.scope():
model = build_and_compile_cnn_model()
model.fit(x=train_dataset, epochs=args.epochs, steps_per_epoch=args.steps)
model.save(args.model_dir)
Explanation: Task.py contents
End of explanation
! rm -f cifar.tar cifar.tar.gz
! tar cvf cifar.tar cifar
! gzip cifar.tar
! gsutil cp cifar.tar.gz gs://$BUCKET_NAME/trainer_cifar.tar.gz
Explanation: Store training script on your Cloud Storage bucket
End of explanation
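As an optional sanity check (not part of the original flow), confirm the packaged trainer actually landed in the bucket before submitting the training job:
! gsutil ls -l gs://$BUCKET_NAME/trainer_cifar.tar.gz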
JOB_NAME = "custom_job_TF_" + TIMESTAMP
TRAINING_INPUTS = {
"scaleTier": "CUSTOM",
"masterType": "n1-standard-4",
"masterConfig": {"acceleratorConfig": {"count": "1", "type": "NVIDIA_TESLA_K80"}},
"packageUris": ["gs://" + BUCKET_NAME + "/trainer_cifar.tar.gz"],
"pythonModule": "trainer.task",
"args": [
"--model-dir=" + "gs://{}/{}".format(BUCKET_NAME, JOB_NAME),
"--epochs=" + str(20),
"--steps=" + str(100),
"--distribute=" + "single",
],
"region": REGION,
"runtimeVersion": "2.1",
"pythonVersion": "3.7",
}
body = {"jobId": JOB_NAME, "trainingInput": TRAINING_INPUTS}
request = cloudml.projects().jobs().create(parent=PARENT)
request.body = json.loads(json.dumps(body, indent=2))
print(json.dumps(json.loads(request.to_json()), indent=2))
request = cloudml.projects().jobs().create(parent=PARENT, body=body)
Explanation: Train a model
projects.jobs.create
Request
End of explanation
response = request.execute()
Explanation: Example output:
{
"uri": "https://ml.googleapis.com/v1/projects/migration-ucaip-training/jobs?alt=json",
"method": "POST",
"body": {
"jobId": "custom_job_TF_20210325211532",
"trainingInput": {
"scaleTier": "CUSTOM",
"masterType": "n1-standard-4",
"masterConfig": {
"acceleratorConfig": {
"count": "1",
"type": "NVIDIA_TESLA_K80"
}
},
"packageUris": [
"gs://migration-ucaip-trainingaip-20210325211532/trainer_cifar.tar.gz"
],
"pythonModule": "trainer.task",
"args": [
"--model-dir=gs://migration-ucaip-trainingaip-20210325211532/custom_job_TF_20210325211532",
"--epochs=20",
"--steps=100",
"--distribute=single"
],
"region": "us-central1",
"runtimeVersion": "2.1",
"pythonVersion": "3.7"
}
},
"headers": {
"accept": "application/json",
"accept-encoding": "gzip, deflate",
"user-agent": "(gzip)",
"x-goog-api-client": "gdcl/1.12.8 gl-python/3.7.8"
},
"methodId": "ml.projects.jobs.create",
"resumable": null,
"response_callbacks": [],
"_in_error_state": false,
"body_size": 0,
"resumable_uri": null,
"resumable_progress": 0
}
Call
End of explanation
print(json.dumps(response, indent=2))
Explanation: Response
End of explanation
# The full unique ID for the custom training job
custom_training_id = f'{PARENT}/jobs/{response["jobId"]}'
# The short numeric ID for the custom training job
custom_training_short_id = response["jobId"]
print(custom_training_id)
Explanation: Example output:
{
"jobId": "custom_job_TF_20210325211532",
"trainingInput": {
"scaleTier": "CUSTOM",
"masterType": "n1-standard-4",
"packageUris": [
"gs://migration-ucaip-trainingaip-20210325211532/trainer_cifar.tar.gz"
],
"pythonModule": "trainer.task",
"args": [
"--model-dir=gs://migration-ucaip-trainingaip-20210325211532/custom_job_TF_20210325211532",
"--epochs=20",
"--steps=100",
"--distribute=single"
],
"region": "us-central1",
"runtimeVersion": "2.1",
"pythonVersion": "3.7",
"masterConfig": {
"acceleratorConfig": {
"count": "1",
"type": "NVIDIA_TESLA_K80"
}
}
},
"createTime": "2021-03-25T21:15:40Z",
"state": "QUEUED",
"trainingOutput": {},
"etag": "dH4whflp8Fg="
}
End of explanation
request = cloudml.projects().jobs().get(name=custom_training_id)
response = request.execute()
Explanation: projects.jobs.get
Call
End of explanation
print(json.dumps(response, indent=2))
Explanation: Response
End of explanation
while True:
response = cloudml.projects().jobs().get(name=custom_training_id).execute()
if response["state"] != "SUCCEEDED":
print("Training job has not completed:", response["state"])
if response["state"] == "FAILED":
break
else:
break
time.sleep(20)
# model artifact output directory on Google Cloud Storage
model_artifact_dir = response["trainingInput"]["args"][0].split("=")[-1]
print("artifact location " + model_artifact_dir)
Explanation: Example output:
{
"jobId": "custom_job_TF_20210325211532",
"trainingInput": {
"scaleTier": "CUSTOM",
"masterType": "n1-standard-4",
"packageUris": [
"gs://migration-ucaip-trainingaip-20210325211532/trainer_cifar.tar.gz"
],
"pythonModule": "trainer.task",
"args": [
"--model-dir=gs://migration-ucaip-trainingaip-20210325211532/custom_job_TF_20210325211532",
"--epochs=20",
"--steps=100",
"--distribute=single"
],
"region": "us-central1",
"runtimeVersion": "2.1",
"pythonVersion": "3.7",
"masterConfig": {
"acceleratorConfig": {
"count": "1",
"type": "NVIDIA_TESLA_K80"
}
}
},
"createTime": "2021-03-25T21:15:40Z",
"state": "PREPARING",
"trainingOutput": {},
"etag": "eLnYfClHtKU="
}
End of explanation
import tensorflow as tf
model = tf.keras.models.load_model(model_artifact_dir)
CONCRETE_INPUT = "numpy_inputs"
def _preprocess(bytes_input):
decoded = tf.io.decode_jpeg(bytes_input, channels=3)
decoded = tf.image.convert_image_dtype(decoded, tf.float32)
resized = tf.image.resize(decoded, size=(32, 32))
rescale = tf.cast(resized / 255.0, tf.float32)
return rescale
@tf.function(input_signature=[tf.TensorSpec([None], tf.string)])
def preprocess_fn(bytes_inputs):
decoded_images = tf.map_fn(
_preprocess, bytes_inputs, dtype=tf.float32, back_prop=False
)
return {
CONCRETE_INPUT: decoded_images
} # User needs to make sure the key matches model's input
m_call = tf.function(model.call).get_concrete_function(
[tf.TensorSpec(shape=[None, 32, 32, 3], dtype=tf.float32, name=CONCRETE_INPUT)]
)
@tf.function(
input_signature=[tf.TensorSpec([None], tf.string), tf.TensorSpec([None], tf.string)]
)
def serving_fn(bytes_inputs, key):
images = preprocess_fn(bytes_inputs)
prob = m_call(**images)
return {"prediction": prob, "key": key}
tf.saved_model.save(
model,
model_artifact_dir,
signatures={
"serving_default": serving_fn,
},
)
loaded = tf.saved_model.load(model_artifact_dir)
tensors_specs = list(loaded.signatures["serving_default"].structured_input_signature)
print("Tensors specs:", tensors_specs)
input_name = [v for k, v in tensors_specs[1].items() if k != "key"][0].name
print("Bytes input tensor name:", input_name)
Explanation: Serving function for trained model (image data)
End of explanation
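Before deploying, it can be worth confirming that the re-exported signature exposes the expected inputs and outputs. This optional, illustrative check only uses the loaded object created above:
serving_sig = loaded.signatures["serving_default"]
print("Output spec:", serving_sig.structured_outputs)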
import base64
import json
import cv2
import numpy as np
import tensorflow as tf
(_, _), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
test_image_1, test_label_1 = x_test[0], y_test[0]
test_image_2, test_label_2 = x_test[1], y_test[1]
cv2.imwrite("tmp1.jpg", (test_image_1 * 255).astype(np.uint8))
cv2.imwrite("tmp2.jpg", (test_image_2 * 255).astype(np.uint8))
gcs_input_uri = "gs://" + BUCKET_NAME + "/" + "test.json"
with tf.io.gfile.GFile(gcs_input_uri, "w") as f:
for img in ["tmp1.jpg", "tmp2.jpg"]:
bytes = tf.io.read_file(img)
b64str = base64.b64encode(bytes.numpy()).decode("utf-8")
f.write(json.dumps({"key": img, input_name: {"b64": b64str}}) + "\n")
! gsutil cat $gcs_input_uri
Explanation: Example output:
Tensors specs: [(), {'bytes_inputs': TensorSpec(shape=(None,), dtype=tf.string, name='bytes_inputs'), 'key': TensorSpec(shape=(None,), dtype=tf.string, name='key')}]
Bytes input tensor name: bytes_inputs
Make batch predictions
Prepare files for batch prediction
End of explanation
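An optional, purely illustrative check before submitting the batch job: read the JSONL file back and confirm that every line parses and carries both expected keys (key and the bytes-input tensor name discovered earlier).
import json

import tensorflow as tf

# gcs_input_uri and input_name were defined in earlier cells.
with tf.io.gfile.GFile(gcs_input_uri) as f:
    for line in f:
        record = json.loads(line)
        assert "key" in record and input_name in record
print("Batch input file looks well-formed.")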
body = {
"jobId": "custom_job_TF_pred_" + TIMESTAMP,
"prediction_input": {
"input_paths": gcs_input_uri,
"output_path": "gs://" + f"{BUCKET_NAME}/batch_output/",
"data_format": "JSON",
"runtime_version": "2.1",
"uri": model_artifact_dir,
"region": "us-central1",
},
}
request = (
cloudml.projects()
.jobs()
.create(
parent=PARENT,
)
)
request.body = json.loads(json.dumps(body, indent=2))
print(json.dumps(json.loads(request.to_json()), indent=2))
request = cloudml.projects().jobs().create(parent=PARENT, body=body)
Explanation: Example output:
{"key": "tmp1.jpg", "bytes_inputs": {"b64": "/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAIBAQEBAQIBAQECAgICAgQDAgICAgUEBAMEBgUGBgYFBgYGBwkIBgcJBwYGCAsICQoKCgoKBggLDAsKDAkKCgr/2wBDAQICAgICAgUDAwUKBwYHCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgr/wAARCAAgACADASIAAhEBAxEB/8QAHwAAAQUBAQEBAQEAAAAAAAAAAAECAwQFBgcICQoL/8QAtRAAAgEDAwIEAwUFBAQAAAF9AQIDAAQRBRIhMUEGE1FhByJxFDKBkaEII0KxwRVS0fAkM2JyggkKFhcYGRolJicoKSo0NTY3ODk6Q0RFRkdISUpTVFVWV1hZWmNkZWZnaGlqc3R1dnd4eXqDhIWGh4iJipKTlJWWl5iZmqKjpKWmp6ipqrKztLW2t7i5usLDxMXGx8jJytLT1NXW19jZ2uHi4+Tl5ufo6erx8vP09fb3+Pn6/8QAHwEAAwEBAQEBAQEBAQAAAAAAAAECAwQFBgcICQoL/8QAtREAAgECBAQDBAcFBAQAAQJ3AAECAxEEBSExBhJBUQdhcRMiMoEIFEKRobHBCSMzUvAVYnLRChYkNOEl8RcYGRomJygpKjU2Nzg5OkNERUZHSElKU1RVVldYWVpjZGVmZ2hpanN0dXZ3eHl6goOEhYaHiImKkpOUlZaXmJmaoqOkpaanqKmqsrO0tba3uLm6wsPExcbHyMnK0tPU1dbX2Nna4uPk5ebn6Onq8vP09fb3+Pn6/9oADAMBAAIRAxEAPwD6E1zw/qemaZY669mkdtqsTPZTpMH85Y3KMcKeOR36444NZGj2/ibWPHaaPeHSLXRbq3jSw1O7u3V3u9zb0ZAh+QIFO4EkliCBjnwv9lfxtrviTxBbW974le/0nQ/h5ohms7m4b92bhVlkEfPIDuwJ6gyADgCuWh1fxP8As6/tGad8H5PiRrHjW6tNd1O/iXUr5Z7mx0uSZlinHODiRQCqrgGTGPmwPyqfClGlnM6Em3TSi/N3Wtnto015H6y+MK08kp14QSqScle6tFxel0+6aZ9d6/rvhXwH4407wWtq+uSXth9pa5jcwKUBIbyxzkL0Ock8nHQV2x0NtN0Gw8a6PDOunXc3liO5GZIGxwG6YBxx1x0zkV4L8Xfij4k8X/Gr4V+HdJtDpdgui3GoajJBAXlkuGvNoUEDcD5MYyuN3zEnpX0B4Q+Iunafdap8OPFCG/sL+PzLkGNgbQB1O7Jxh1JOCOvHXNfUYrh/LPqMo0oKDgvdl10117nzGD4izR5hGdWcp8zs4+umisflx8DNXi/Z/wDHviPTfiP4g+x2WieFtV03U5r9miLw2ilonTIySWijZCB6Yr2X4R/tQT/tC/s56f8AGn4C/AvxTrXiq7jksW1G78NxRlNiRxIrzO5EwiVHAePAfeoO1lIrqv2pf2Xz+1t+z3feC9E1GLSvE2paQtraa1cISXiEqu9tKVydrbMZ5Kkg8jIr234a/Bq7+EngjQPAng3wzB/ZOl6ZFa2tpp/yeWiqFB2Hq2ASeuTz15r9ixHBa+vSp1JXpxXuy6vyfpbXuz8jocUyWCVSirTb1j09V95e+E3hnwXr8dn8QPjLaSWZBguP+EcudKSW6gnSMfLHOrcQh2djCSAxY5BxkzfEDx1H4n8ZyvpEC2WnMAwighMe8hvl3gZyQCB15K5xWNq3iKbVNVk8MW91NZzxLllkt9jL2z0I/DrXCeG47T4seNL3wN4c1nULKPTY2GoX8YYNcSkfKisxwis2ASMnk9AK7f8AiHuQ47CulWlKzfM7S5W+vRfgZQ47zvA4qNako3irK8eZLpfVn//Z"}}
{"key": "tmp2.jpg", "bytes_inputs": {"b64": "/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAIBAQEBAQIBAQECAgICAgQDAgICAgUEBAMEBgUGBgYFBgYGBwkIBgcJBwYGCAsICQoKCgoKBggLDAsKDAkKCgr/2wBDAQICAgICAgUDAwUKBwYHCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgr/wAARCAAgACADASIAAhEBAxEB/8QAHwAAAQUBAQEBAQEAAAAAAAAAAAECAwQFBgcICQoL/8QAtRAAAgEDAwIEAwUFBAQAAAF9AQIDAAQRBRIhMUEGE1FhByJxFDKBkaEII0KxwRVS0fAkM2JyggkKFhcYGRolJicoKSo0NTY3ODk6Q0RFRkdISUpTVFVWV1hZWmNkZWZnaGlqc3R1dnd4eXqDhIWGh4iJipKTlJWWl5iZmqKjpKWmp6ipqrKztLW2t7i5usLDxMXGx8jJytLT1NXW19jZ2uHi4+Tl5ufo6erx8vP09fb3+Pn6/8QAHwEAAwEBAQEBAQEBAQAAAAAAAAECAwQFBgcICQoL/8QAtREAAgECBAQDBAcFBAQAAQJ3AAECAxEEBSExBhJBUQdhcRMiMoEIFEKRobHBCSMzUvAVYnLRChYkNOEl8RcYGRomJygpKjU2Nzg5OkNERUZHSElKU1RVVldYWVpjZGVmZ2hpanN0dXZ3eHl6goOEhYaHiImKkpOUlZaXmJmaoqOkpaanqKmqsrO0tba3uLm6wsPExcbHyMnK0tPU1dbX2Nna4uPk5ebn6Onq8vP09fb3+Pn6/9oADAMBAAIRAxEAPwD8V5UubRQlxlSvDAtyD6dadbW91fK8lrFI6o6KzrnALHCj8cH8jX3J+1V+wR8adOjsrDxR8EPhzohsg13qKfD+zddRWBF2u0sR42AnIzjJAAzzXmnwh/Yk+D3jX4Q6h478cftgaX4Al/tR4f8AhHdf0eRruVI+IpdkbFiWLsAqgnrXZLBVFWcI6/gc0MVSlSU2eZaX+zdr954Nv/EEt7FNeWyrJHZ2moRn93tYsTuwcg7OBz19q8sa7AUEMf8AvqvoHwX+yz8Vb74gXtn4M+Euq/EbSYpV+y6vf2txptrMOAz+XIysR0xu9M4qf9pn9mf4jJoNprJ+BGgeCn0mHZfQ2OqRl793fAZUDkkAbcd8k1pUw1OUE6e/bf8AEVOs1JqT3P19/aT/AOCMf7RH7Qfx5134zeNf2z7S18Q+PkSWWDSb6406BrSMFYrWNCCAsakDbnOSSeTXg+sf8G3viHwt49ez1jxdY6zqds1veTwT+MzBdqJWnWCYb0DhXe3n2sOGMD4J2HH7IfD3xnc/EPwl4Y8R6t458M28y+EL1NRh1nS3vGXV3a1+w3S4mjCwxxpdCaFSjTNLGRImwk+A6f8AAL9oH4gaX4+tf+Ckn7Vfw4+I2k3fiW6m+HOneFNPn0WDw9piTLLbuUiYGWZsCNYp/tMtqiSbL+b7RMrqvWxVDKamZ89BOg03Q9+deupOpBRotU1CM4OMak/aSUIxkouTbUjmllc0qic60XrGNldX/dtNr/n2+aS5r3XI3ytKz+Jof+CN2r6LYHU/ibqOo2iQzFmmn8eXLfugMbDhwMcdeprg/iV+zX+zx8O9Mu9f8NaRplw9oSr6g0sl0BgdBNMzZ+i9K+svi9P+yv8ADAnRfhl4MfxNdhSDe63fzS2sJHdYpHbfjtu/KvhL9ub4tarruhy2JvJMsdjJFGFj28gKqrgKo9B6VhlvEGMzfDxm8M6N+kpRlJeT5dE/mwoZDiMO+evVb8j/2Q=="}}
projects.jobs.create
Request
End of explanation
response = request.execute()
Explanation: Example output:
{
"uri": "https://ml.googleapis.com/v1/projects/migration-ucaip-training/jobs?alt=json",
"method": "POST",
"body": {
"jobId": "custom_job_TF_pred_20210325211532",
"prediction_input": {
"input_paths": "gs://migration-ucaip-trainingaip-20210325211532/test.json",
"output_path": "gs://migration-ucaip-trainingaip-20210325211532/batch_output/",
"data_format": "JSON",
"runtime_version": "2.1",
"uri": "gs://migration-ucaip-trainingaip-20210325211532/custom_job_TF_20210325211532",
"region": "us-central1"
}
},
"headers": {
"accept": "application/json",
"accept-encoding": "gzip, deflate",
"user-agent": "(gzip)",
"x-goog-api-client": "gdcl/1.12.8 gl-python/3.7.8"
},
"methodId": "ml.projects.jobs.create",
"resumable": null,
"response_callbacks": [],
"_in_error_state": false,
"body_size": 0,
"resumable_uri": null,
"resumable_progress": 0
}
Call
End of explanation
print(json.dumps(response, indent=2))
Explanation: Response
End of explanation
# The full unique ID for the batch prediction job
batch_job_id = PARENT + "/jobs/" + response["jobId"]
print(batch_job_id)
Explanation: Example output:
{
"jobId": "custom_job_TF_pred_20210325211532",
"predictionInput": {
"dataFormat": "JSON",
"inputPaths": [
"gs://migration-ucaip-trainingaip-20210325211532/test.json"
],
"outputPath": "gs://migration-ucaip-trainingaip-20210325211532/batch_output/",
"region": "us-central1",
"runtimeVersion": "2.1",
"uri": "gs://migration-ucaip-trainingaip-20210325211532/custom_job_TF_20210325211532",
"framework": "TENSORFLOW"
},
"createTime": "2021-03-25T21:34:56Z",
"state": "QUEUED",
"predictionOutput": {
"outputPath": "gs://migration-ucaip-trainingaip-20210325211532/batch_output/"
},
"etag": "QwNOFOfoKdY="
}
End of explanation
request = cloudml.projects().jobs().get(name=batch_job_id)
response = request.execute()
Explanation: projects.jobs.get
Call
End of explanation
print(json.dumps(response, indent=2))
Explanation: Response
End of explanation
while True:
response = request = cloudml.projects().jobs().get(name=batch_job_id).execute()
if response["state"] != "SUCCEEDED":
print("The job has not completed:", response["state"])
if response["state"] == "FAILED":
break
else:
folder = response["predictionInput"]["outputPath"][:-1]
! gsutil ls $folder/prediction*
! gsutil cat $folder/prediction*
break
time.sleep(60)
Explanation: Example output:
{
"jobId": "custom_job_TF_pred_20210325211532",
"predictionInput": {
"dataFormat": "JSON",
"inputPaths": [
"gs://migration-ucaip-trainingaip-20210325211532/test.json"
],
"outputPath": "gs://migration-ucaip-trainingaip-20210325211532/batch_output/",
"region": "us-central1",
"runtimeVersion": "2.1",
"uri": "gs://migration-ucaip-trainingaip-20210325211532/custom_job_TF_20210325211532",
"framework": "TENSORFLOW"
},
"createTime": "2021-03-25T21:34:56Z",
"state": "QUEUED",
"predictionOutput": {
"outputPath": "gs://migration-ucaip-trainingaip-20210325211532/batch_output/"
},
"etag": "NSbtn4XnbbU="
}
End of explanation
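The raw batch results are probability vectors. A short, illustrative post-processing step (not part of the original notebook) maps each vector to a CIFAR-10 class name; it assumes the standard Keras label order and that the batch job above reached SUCCEEDED, so that folder is defined:
import json

import numpy as np
import tensorflow as tf

CIFAR10_CLASSES = ["airplane", "automobile", "bird", "cat", "deer",
                   "dog", "frog", "horse", "ship", "truck"]
# Read every results shard written by the batch prediction job.
for results_file in tf.io.gfile.glob(folder + "/prediction.results*"):
    with tf.io.gfile.GFile(results_file) as f:
        for line in f:
            result = json.loads(line)
            idx = int(np.argmax(result["prediction"]))
            print(result["key"], "->", CIFAR10_CLASSES[idx])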
request = cloudml.projects().models().create(parent=PARENT)
request.body = json.loads(json.dumps({"name": "custom_job_TF_" + TIMESTAMP}, indent=2))
print(json.dumps(json.loads(request.to_json()), indent=2))
request = (
cloudml.projects()
.models()
.create(parent=PARENT, body={"name": "custom_job_TF_" + TIMESTAMP})
)
Explanation: Example output:
gs://migration-ucaip-trainingaip-20210325211532/batch_output/prediction.errors_stats-00000-of-00001
gs://migration-ucaip-trainingaip-20210325211532/batch_output/prediction.results-00000-of-00001
{"prediction": [0.033321816474199295, 0.052459586411714554, 0.1548144668340683, 0.11401787400245667, 0.17382358014583588, 0.09015274047851562, 0.19865882396697998, 0.10446511209011078, 0.029874442145228386, 0.048411525785923004], "key": "tmp1.jpg"}
{"prediction": [0.03346974775195122, 0.05255022272467613, 0.15449963510036469, 0.11388237029314041, 0.17408262193202972, 0.08989296853542328, 0.19814379513263702, 0.10520868003368378, 0.02989153563976288, 0.04837837815284729], "key": "tmp2.jpg"}
Make online predictions
Deploy the model
projects.models.create
Request
End of explanation
response = request.execute()
Explanation: Example output:
{
"uri": "https://ml.googleapis.com/v1/projects/migration-ucaip-training/models?alt=json",
"method": "POST",
"body": {
"name": "custom_job_TF_20210325211532"
},
"headers": {
"accept": "application/json",
"accept-encoding": "gzip, deflate",
"user-agent": "(gzip)",
"x-goog-api-client": "gdcl/1.12.8 gl-python/3.7.8"
},
"methodId": "ml.projects.models.create",
"resumable": null,
"response_callbacks": [],
"_in_error_state": false,
"body_size": 0,
"resumable_uri": null,
"resumable_progress": 0
}
Call
End of explanation
print(json.dumps(response, indent=2))
Explanation: Response
End of explanation
# The full unique ID for the training pipeline
model_id = response["name"]
# The short numeric ID for the training pipeline
model_short_name = model_id.split("/")[-1]
print(model_id)
Explanation: Example output:
{
"name": "projects/migration-ucaip-training/models/custom_job_TF_20210325211532",
"regions": [
"us-central1"
],
"etag": "fFH1QQbH3tA="
}
End of explanation
version = {
"name": "custom_job_TF_" + TIMESTAMP,
"deploymentUri": model_artifact_dir,
"runtimeVersion": "2.1",
"framework": "TENSORFLOW",
"pythonVersion": "3.7",
"machineType": "mls1-c1-m2",
}
request = (
cloudml.projects()
.models()
.versions()
.create(
parent=model_id,
)
)
request.body = json.loads(json.dumps(version, indent=2))
print(json.dumps(json.loads(request.to_json()), indent=2))
request = cloudml.projects().models().versions().create(parent=model_id, body=version)
Explanation: projects.models.versions.create
Request
End of explanation
response = request.execute()
Explanation: Example output:
{
"uri": "https://ml.googleapis.com/v1/projects/migration-ucaip-training/models/custom_job_TF_20210325211532/versions?alt=json",
"method": "POST",
"body": {
"name": "custom_job_TF_20210325211532",
"deploymentUri": "gs://migration-ucaip-trainingaip-20210325211532/custom_job_TF_20210325211532",
"runtimeVersion": "2.1",
"framework": "TENSORFLOW",
"pythonVersion": "3.7",
"machineType": "mls1-c1-m2"
},
"headers": {
"accept": "application/json",
"accept-encoding": "gzip, deflate",
"user-agent": "(gzip)",
"x-goog-api-client": "gdcl/1.12.8 gl-python/3.7.8"
},
"methodId": "ml.projects.models.versions.create",
"resumable": null,
"response_callbacks": [],
"_in_error_state": false,
"body_size": 0,
"resumable_uri": null,
"resumable_progress": 0
}
Call
End of explanation
print(json.dumps(response, indent=2))
Explanation: Response
End of explanation
# The full unique ID for the model version
model_version_name = response["metadata"]["version"]["name"]
print(model_version_name)
while True:
response = (
cloudml.projects().models().versions().get(name=model_version_name).execute()
)
if response["state"] == "READY":
print("Model version created.")
break
time.sleep(60)
Explanation: Example output:
{
"name": "projects/migration-ucaip-training/operations/create_custom_job_TF_20210325211532_custom_job_TF_20210325211532-1616708521927",
"metadata": {
"@type": "type.googleapis.com/google.cloud.ml.v1.OperationMetadata",
"createTime": "2021-03-25T21:42:02Z",
"operationType": "CREATE_VERSION",
"modelName": "projects/migration-ucaip-training/models/custom_job_TF_20210325211532",
"version": {
"name": "projects/migration-ucaip-training/models/custom_job_TF_20210325211532/versions/custom_job_TF_20210325211532",
"deploymentUri": "gs://migration-ucaip-trainingaip-20210325211532/custom_job_TF_20210325211532",
"createTime": "2021-03-25T21:42:01Z",
"runtimeVersion": "2.1",
"etag": "3vf44xGDtdw=",
"framework": "TENSORFLOW",
"machineType": "mls1-c1-m2",
"pythonVersion": "3.7"
}
}
}
End of explanation
import base64
import json
import cv2
import numpy as np
import tensorflow as tf
(_, _), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
test_image_1, test_label_1 = x_test[0], y_test[0]
test_image_2, test_label_2 = x_test[1], y_test[1]
cv2.imwrite("tmp1.jpg", (test_image_1 * 255).astype(np.uint8))
cv2.imwrite("tmp2.jpg", (test_image_2 * 255).astype(np.uint8))
Explanation: Prepare input for online prediction
End of explanation
instances_list = []
for img in ["tmp1.jpg", "tmp2.jpg"]:
bytes = tf.io.read_file(img)
b64str = base64.b64encode(bytes.numpy()).decode("utf-8")
instances_list.append({"key": img, input_name: {"b64": b64str}})
request = cloudml.projects().predict(name=model_version_name)
request.body = json.loads(json.dumps({"instances": instances_list}, indent=2))
print(json.dumps(json.loads(request.to_json()), indent=2))
request = cloudml.projects().predict(
name=model_version_name, body={"instances": instances_list}
)
Explanation: projects.predict
Request
End of explanation
response = request.execute()
Explanation: Example output:
{
"uri": "https://ml.googleapis.com/v1/projects/migration-ucaip-training/models/custom_job_TF_20210325211532/versions/custom_job_TF_20210325211532:predict?alt=json",
"method": "POST",
"body": {
"instances": [
{
"key": "tmp1.jpg",
"bytes_inputs": {
"b64": "/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAIBAQEBAQIBAQECAgICAgQDAgICAgUEBAMEBgUGBgYFBgYGBwkIBgcJBwYGCAsICQoKCgoKBggLDAsKDAkKCgr/2wBDAQICAgICAgUDAwUKBwYHCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgr/wAARCAAgACADASIAAhEBAxEB/8QAHwAAAQUBAQEBAQEAAAAAAAAAAAECAwQFBgcICQoL/8QAtRAAAgEDAwIEAwUFBAQAAAF9AQIDAAQRBRIhMUEGE1FhByJxFDKBkaEII0KxwRVS0fAkM2JyggkKFhcYGRolJicoKSo0NTY3ODk6Q0RFRkdISUpTVFVWV1hZWmNkZWZnaGlqc3R1dnd4eXqDhIWGh4iJipKTlJWWl5iZmqKjpKWmp6ipqrKztLW2t7i5usLDxMXGx8jJytLT1NXW19jZ2uHi4+Tl5ufo6erx8vP09fb3+Pn6/8QAHwEAAwEBAQEBAQEBAQAAAAAAAAECAwQFBgcICQoL/8QAtREAAgECBAQDBAcFBAQAAQJ3AAECAxEEBSExBhJBUQdhcRMiMoEIFEKRobHBCSMzUvAVYnLRChYkNOEl8RcYGRomJygpKjU2Nzg5OkNERUZHSElKU1RVVldYWVpjZGVmZ2hpanN0dXZ3eHl6goOEhYaHiImKkpOUlZaXmJmaoqOkpaanqKmqsrO0tba3uLm6wsPExcbHyMnK0tPU1dbX2Nna4uPk5ebn6Onq8vP09fb3+Pn6/9oADAMBAAIRAxEAPwD6E1zw/qemaZY669mkdtqsTPZTpMH85Y3KMcKeOR36444NZGj2/ibWPHaaPeHSLXRbq3jSw1O7u3V3u9zb0ZAh+QIFO4EkliCBjnwv9lfxtrviTxBbW974le/0nQ/h5ohms7m4b92bhVlkEfPIDuwJ6gyADgCuWh1fxP8As6/tGad8H5PiRrHjW6tNd1O/iXUr5Z7mx0uSZlinHODiRQCqrgGTGPmwPyqfClGlnM6Em3TSi/N3Wtnto015H6y+MK08kp14QSqScle6tFxel0+6aZ9d6/rvhXwH4407wWtq+uSXth9pa5jcwKUBIbyxzkL0Ock8nHQV2x0NtN0Gw8a6PDOunXc3liO5GZIGxwG6YBxx1x0zkV4L8Xfij4k8X/Gr4V+HdJtDpdgui3GoajJBAXlkuGvNoUEDcD5MYyuN3zEnpX0B4Q+Iunafdap8OPFCG/sL+PzLkGNgbQB1O7Jxh1JOCOvHXNfUYrh/LPqMo0oKDgvdl10117nzGD4izR5hGdWcp8zs4+umisflx8DNXi/Z/wDHviPTfiP4g+x2WieFtV03U5r9miLw2ilonTIySWijZCB6Yr2X4R/tQT/tC/s56f8AGn4C/AvxTrXiq7jksW1G78NxRlNiRxIrzO5EwiVHAePAfeoO1lIrqv2pf2Xz+1t+z3feC9E1GLSvE2paQtraa1cISXiEqu9tKVydrbMZ5Kkg8jIr234a/Bq7+EngjQPAng3wzB/ZOl6ZFa2tpp/yeWiqFB2Hq2ASeuTz15r9ixHBa+vSp1JXpxXuy6vyfpbXuz8jocUyWCVSirTb1j09V95e+E3hnwXr8dn8QPjLaSWZBguP+EcudKSW6gnSMfLHOrcQh2djCSAxY5BxkzfEDx1H4n8ZyvpEC2WnMAwighMe8hvl3gZyQCB15K5xWNq3iKbVNVk8MW91NZzxLllkt9jL2z0I/DrXCeG47T4seNL3wN4c1nULKPTY2GoX8YYNcSkfKisxwis2ASMnk9AK7f8AiHuQ47CulWlKzfM7S5W+vRfgZQ47zvA4qNako3irK8eZLpfVn//Z"
}
},
{
"key": "tmp2.jpg",
"bytes_inputs": {
"b64": "/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAIBAQEBAQIBAQECAgICAgQDAgICAgUEBAMEBgUGBgYFBgYGBwkIBgcJBwYGCAsICQoKCgoKBggLDAsKDAkKCgr/2wBDAQICAgICAgUDAwUKBwYHCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgr/wAARCAAgACADASIAAhEBAxEB/8QAHwAAAQUBAQEBAQEAAAAAAAAAAAECAwQFBgcICQoL/8QAtRAAAgEDAwIEAwUFBAQAAAF9AQIDAAQRBRIhMUEGE1FhByJxFDKBkaEII0KxwRVS0fAkM2JyggkKFhcYGRolJicoKSo0NTY3ODk6Q0RFRkdISUpTVFVWV1hZWmNkZWZnaGlqc3R1dnd4eXqDhIWGh4iJipKTlJWWl5iZmqKjpKWmp6ipqrKztLW2t7i5usLDxMXGx8jJytLT1NXW19jZ2uHi4+Tl5ufo6erx8vP09fb3+Pn6/8QAHwEAAwEBAQEBAQEBAQAAAAAAAAECAwQFBgcICQoL/8QAtREAAgECBAQDBAcFBAQAAQJ3AAECAxEEBSExBhJBUQdhcRMiMoEIFEKRobHBCSMzUvAVYnLRChYkNOEl8RcYGRomJygpKjU2Nzg5OkNERUZHSElKU1RVVldYWVpjZGVmZ2hpanN0dXZ3eHl6goOEhYaHiImKkpOUlZaXmJmaoqOkpaanqKmqsrO0tba3uLm6wsPExcbHyMnK0tPU1dbX2Nna4uPk5ebn6Onq8vP09fb3+Pn6/9oADAMBAAIRAxEAPwD8V5UubRQlxlSvDAtyD6dadbW91fK8lrFI6o6KzrnALHCj8cH8jX3J+1V+wR8adOjsrDxR8EPhzohsg13qKfD+zddRWBF2u0sR42AnIzjJAAzzXmnwh/Yk+D3jX4Q6h478cftgaX4Al/tR4f8AhHdf0eRruVI+IpdkbFiWLsAqgnrXZLBVFWcI6/gc0MVSlSU2eZaX+zdr954Nv/EEt7FNeWyrJHZ2moRn93tYsTuwcg7OBz19q8sa7AUEMf8AvqvoHwX+yz8Vb74gXtn4M+Euq/EbSYpV+y6vf2txptrMOAz+XIysR0xu9M4qf9pn9mf4jJoNprJ+BGgeCn0mHZfQ2OqRl793fAZUDkkAbcd8k1pUw1OUE6e/bf8AEVOs1JqT3P19/aT/AOCMf7RH7Qfx5134zeNf2z7S18Q+PkSWWDSb6406BrSMFYrWNCCAsakDbnOSSeTXg+sf8G3viHwt49ez1jxdY6zqds1veTwT+MzBdqJWnWCYb0DhXe3n2sOGMD4J2HH7IfD3xnc/EPwl4Y8R6t458M28y+EL1NRh1nS3vGXV3a1+w3S4mjCwxxpdCaFSjTNLGRImwk+A6f8AAL9oH4gaX4+tf+Ckn7Vfw4+I2k3fiW6m+HOneFNPn0WDw9piTLLbuUiYGWZsCNYp/tMtqiSbL+b7RMrqvWxVDKamZ89BOg03Q9+deupOpBRotU1CM4OMak/aSUIxkouTbUjmllc0qic60XrGNldX/dtNr/n2+aS5r3XI3ytKz+Jof+CN2r6LYHU/ibqOo2iQzFmmn8eXLfugMbDhwMcdeprg/iV+zX+zx8O9Mu9f8NaRplw9oSr6g0sl0BgdBNMzZ+i9K+svi9P+yv8ADAnRfhl4MfxNdhSDe63fzS2sJHdYpHbfjtu/KvhL9ub4tarruhy2JvJMsdjJFGFj28gKqrgKo9B6VhlvEGMzfDxm8M6N+kpRlJeT5dE/mwoZDiMO+evVb8j/2Q=="
}
}
]
},
"headers": {
"accept": "application/json",
"accept-encoding": "gzip, deflate",
"user-agent": "(gzip)",
"x-goog-api-client": "gdcl/1.12.8 gl-python/3.7.8"
},
"methodId": "ml.projects.predict",
"resumable": null,
"response_callbacks": [],
"_in_error_state": false,
"body_size": 0,
"resumable_uri": null,
"resumable_progress": 0
}
Call
End of explanation
print(json.dumps(response, indent=2))
Explanation: Response
End of explanation
request = cloudml.projects().models().versions().delete(name=model_version_name)
response = request.execute()
Explanation: Example output:
{
"predictions": [
{
"key": "tmp1.jpg",
"prediction": [
0.033321816474199295,
0.052459586411714554,
0.1548144668340683,
0.11401788890361786,
0.17382356524467468,
0.09015275537967682,
0.19865882396697998,
0.10446509718894958,
0.02987445704638958,
0.048411525785923004
]
},
{
"key": "tmp2.jpg",
"prediction": [
0.03346974775195122,
0.052550218999385834,
0.15449965000152588,
0.11388237029314041,
0.17408263683319092,
0.08989296108484268,
0.19814379513263702,
0.10520866513252258,
0.02989153563976288,
0.04837837815284729
]
}
]
}
projects.models.versions.delete
Call
End of explanation
print(json.dumps(response, indent=2))
Explanation: Response
End of explanation
delete_model = True
delete_bucket = True
# Delete the model using its AI Platform fully qualified identifier
try:
if delete_model:
cloudml.projects().models().delete(name=model_id).execute()
except Exception as e:
print(e)
if delete_bucket and "BUCKET_NAME" in globals():
! gsutil rm -r gs://$BUCKET_NAME
Explanation: Example output:
{
"name": "projects/migration-ucaip-training/operations/delete_custom_job_TF_20210325211532_custom_job_TF_20210325211532-1616708584436",
"metadata": {
"@type": "type.googleapis.com/google.cloud.ml.v1.OperationMetadata",
"createTime": "2021-03-25T21:43:04Z",
"operationType": "DELETE_VERSION",
"modelName": "projects/migration-ucaip-training/models/custom_job_TF_20210325211532",
"version": {
"name": "projects/migration-ucaip-training/models/custom_job_TF_20210325211532/versions/custom_job_TF_20210325211532",
"deploymentUri": "gs://migration-ucaip-trainingaip-20210325211532/custom_job_TF_20210325211532",
"createTime": "2021-03-25T21:42:01Z",
"runtimeVersion": "2.1",
"state": "READY",
"etag": "Nu2QJaCl6vw=",
"framework": "TENSORFLOW",
"machineType": "mls1-c1-m2",
"pythonVersion": "3.7"
}
}
}
Cleanup
End of explanation
<END_TASK> |
15,927 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
<p>
<img src="http
Step1: Stirling array $\mathcal{S}$
Step2: power function
Step3: inverse function
Step4: sqrt function
Step5: expt function
Step6: log function
Step7: sin function
Step8: cos function | <ASSISTANT_TASK:>
Python Code:
from sympy import *
from sympy.abc import n, i, N, x, lamda, phi, z, j, r, k, a, alpha
from commons import *
from matrix_functions import *
from sequences import *
import functions_catalog
init_printing()
Explanation: <p>
<img src="http://www.cerm.unifi.it/chianti/images/logo%20unifi_positivo.jpg"
alt="UniFI logo" style="float: left; width: 20%; height: 20%;">
<div align="right">
Massimo Nocentini<br>
<small>
<br>November 20, 2018: cleanup, `sin` and `cos` $g$ polys
<br>November 16 and 18, 2016: classic $g$ polys
</small>
</div>
</p>
<br>
<br>
<div align="center">
<b>Abstract</b><br>
Theory of matrix functions, with applications to Stirling array $\mathcal{S}$.
</div>
End of explanation
from sympy.functions.combinatorial.numbers import stirling
m=8
S2 = define(let=Symbol(r'\mathcal{{S}}_{{ {} }}'.format(m)),
be=Matrix(m, m, lambda n,k: stirling(n,k, kind=2)))
S2 # this version is the pure one
S2 = define(let=Symbol(r'\mathcal{{S}}_{{ {} }}'.format(m)),
be=Matrix(m, m, riordan_matrix_exponential(
riordan_matrix_by_convolution(d=Eq(Function('d')(z), exp(z)),
h=Eq(Function('h')(z), exp(z)-1),
dim=m))))
S2
inspect(S2.rhs)
production_matrix(S2.rhs)
eigendata = spectrum(S2)
eigendata
data, eigenvals, multiplicities = eigendata.rhs
Phi_poly = Phi_poly_ctor(deg=m-1)
Phi_poly
Phi_polynomials = component_polynomials(eigendata, early_eigenvals_subs=True)
Phi_polynomials
cmatrices = component_matrices(S2, Phi_polynomials)
cmatrices
Explanation: Stirling array $\mathcal{S}$
End of explanation
f_power, g_power, G_power = functions_catalog.power(eigendata, Phi_polynomials)
S2_power = G_power(S2)
S2_power
define(S2_power.lhs, S2_power.rhs.applyfunc(factor)) # factored
S2_power.rhs[:,0]
assert (S2.rhs**r).applyfunc(simplify) == S2_power.rhs
inspect(S2_power.rhs)
production_matrix(S2_power.rhs).applyfunc(factor)
Explanation: power function
End of explanation
f_inverse, g_inverse, G_inverse = functions_catalog.inverse(eigendata, Phi_polynomials)
S2_inverse = G_inverse(S2)
S2_inverse, G_inverse(S2_inverse)
inspect(S2_inverse.rhs)
production_matrix(S2_inverse.rhs)
assert S2_inverse.rhs*S2.rhs == Matrix(m, m, identity_matrix())
assert S2_inverse.rhs == S2_power.rhs.subs({r:-1})
Explanation: inverse function
End of explanation
f_sqrt, g_sqrt, G_sqrt = functions_catalog.square_root(eigendata, Phi_polynomials)
S2_sqrt = G_sqrt(S2)
S2_sqrt
inspect(S2_sqrt.rhs)
production_matrix(S2_sqrt.rhs)
inspect(S2_sqrt.rhs)
production_matrix(S2_sqrt.rhs, exp=False)
assert S2_sqrt.rhs == S2.rhs**(S(1)/2)
assert S2_sqrt.rhs == S2_power.rhs.subs({r:S(1)/2})
Explanation: sqrt function
End of explanation
f_exp, g_exp, G_exp = functions_catalog.exp(eigendata, Phi_polynomials)
S2_exp = G_exp(S2)
S2_exp
define(S2_exp.lhs, S2_exp.rhs.applyfunc(factor))
S2_exp1 = define(let=Subs(S2_exp.lhs, alpha, 1), be=S2_exp.rhs.subs({alpha:1}))
S2_exp1
inspect(S2_exp.rhs)
inspect(S2_exp1.rhs)
Explanation: expt function
End of explanation
f_log, g_log, G_log = functions_catalog.log(eigendata, Phi_polynomials)
S2_log = G_log(S2)
S2_log
inspect(S2_log.rhs[1:,:-1])
production_matrix(S2_log.rhs[1:,:-1])
Explanation: log function
End of explanation
f_sin, g_sin, G_sin = functions_catalog.sin(eigendata, Phi_polynomials)
S2_sin = G_sin(S2)
S2_sin
Explanation: sin function
End of explanation
f_cos, g_cos, G_cos = functions_catalog.cos(eigendata, Phi_polynomials)
S2_cos = G_cos(S2)
S2_cos
assert (S2_sin.rhs**2 + S2_cos.rhs**2).applyfunc(trigsimp) == Matrix(m, m, identity_matrix())
Explanation: cos function
End of explanation
<END_TASK> |
15,928 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Sets & Frozen Sets
A set in Python is an unordered collection of unique and immutable objects, although the set itself is mutable. A frozen set is the same as a set except that the set itself is immutable. Since the set is a collection of unordered elements, indexing is not applicable.
Constructing sets
Step1: Adding, Updating, and Removing elements from a set
Step2: Set Membership & Length
Step3: Set Intersection, Disjoint, Union, and Difference
Step4: Subsets and supersets
A.issubset(B) returns True if A is a subset of B. A <= B is another way of writing it.<br>
A.issuperset(B) returns True if A is a superset of B. A >= B is another way of writing it.
"<" is used to check if a set is a proper subset of another set. A proper subset B of the set A means that all the elements of B are in A but A has at least one more element that is not in B. Formally
Step5: Symmetric difference
The symmetric difference of two sets is the set of elements which are in one of either set, but not in both.
Step6: Printing, Copying, & Clearing a set | <ASSISTANT_TASK:>
Python Code:
# from a list
a = set([1,2,3,4])
a
# using curly braces
a = {1,2,3,4}
a
# using a tuple
a = set((1,2,3,4))
a
# start with an empty set and add elements to it
a = set()
a.add('hello')
a
# be careful in assigning a string as an element to a set. If assigned as below, it'll be broken up and only unique
# characters are added to the set
a = set('this is a string')
a
# this will do the same thing
a = set(('string'))
# if you want to assign a string as an element then do
a = set()
a.add('string')
a
# or
a = set(('string',))
a
# or
a = set(['string'])
a
# or
a = {'string'}
a
# this assignment will not work since one element of the set is a list, which is a mutable object.
# A set's elements should be immutable
a = {"a", (1,2), [3,4]}
# uniquness of the elements of a set
a = {1,2,3,3,4,4,7,8,-1,3,4} # we contains only one copy of a repeated element
a
Explanation: Sets & Frozen Sets
A set in Python is an unordered collection of unique and immutable objects, although the set itself is mutable. A frozen set is the same as a set except that the set itself is immutable. Since the set is a collection of unordered elements, indexing is not applicable.
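Frozen sets are mentioned here but not demonstrated in the code, so a minimal sketch follows (the names are only illustrative):
# a frozenset is built from any iterable and cannot be changed afterwards
fs = frozenset([1, 2, 2, 3])
print fs                      # frozenset([1, 2, 3]) - duplicates are dropped
# fs.add(4) would raise AttributeError because frozensets are immutable
# being hashable, frozensets can be elements of other sets or dictionary keys
print {fs, frozenset('ab')}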
Constructing sets
End of explanation
A = {1,2,3,4}
A.add('string')
A
# update a set
A.update([26, 12, 9, 14])
print A
# A.discard(x) will remove x from the set. If x is not in the set, nothing happens.
A = {'a', 'b', 'c', 'd', 'e'}
A.discard('b')
print "A after discarding the element 'b'", A
A.discard('f')
print "A after discarding the element 'f' (which does not belong to A)", A
# A.remove(x) works like A.discard(x) but if x is not in the set A, a KeyError exception is thrown
A = {'a', 'b', 'c', 'd', 'e'}
A.remove('b')
print "A after A.remove('b')", A
A.remove('f') # a KeyError is thrown
print "A after A.remove('f') ('f' does not belong to A)", A
Explanation: Adding, Updating, and Removing elements from a set
End of explanation
A = {'1', '2', '3', 1, 2, 3}
print "Is '1' in A?", '1' in A
print "Is '4' not in A?", '4' not in A
print "Is 5 in A", 5 in A
print "A is %s elements long"%len(A)
Explanation: Set Membership & Length
End of explanation
# A.intersection(B) or A&B returns a set with the common elements between A and B.
A = {"a","b","c","d","e"}
B = {"c","d","e","f","g"}
print "The intersection of A and B is ", A.intersection(B)
print "The intersection of A and B using A&B notation is ", A&B
# get the common elements of more than 2 sets
A = set([1,2,3,4,5])
B = set([4,5,6,7,8])
C = set([4,5,1,8,9])
print "First way to get the intersection of 3 sets ", set.intersection(A, B, C)
print "Second wat to get the intersection of 3 sets ", A&B&C
# A.isdisjoint(B) returns True if the two sets have a null intersection, otherwise it returns False
A = {'a', 'b', 'c', 'd', 'e', 'f'}
B = {'c', 'd', 'e'}
C = {'g', 'h', 'i', 'j'}
print "A and B have no common elements? ", A.isdisjoint(B)
print "A and C have no common elements? ", A.isdisjoint(C)
# A.union(B) or A | B returns a set of the merged elements of A and B
# the same element in both sets is counted once of course
A = set([1,2,3])
B = set([2,3,4])
print "The merging of A and B is ", A.union(B)
print "The merging of A and B using another notation is ", A | B
# A.difference(B) or A-B returns a set of the difference between A and B.
# This can be chained to include many sets
A = {'a', 'b', 'c', 'd', 'e'}
B = {'c', 'd', 'g', 'h'}
C = {'d', 'b', 'f'}
print "The elements that are in A but not in B", A.difference(B)
print "The elemets that are in A but not in B or C", A.difference(B).difference(C)
print "The elements that are in A but not in B", A-B
print "The elemets that are in A but not in B or C", A-B-C
# A.difference_update(B) or A = A - B finds the difference between A and B and assigns it back to A.
# In other words, it removes from A the elements that are in B and stores the result back in A.
A = {'a', 'b', 'c', 'd', 'e'}
B = {'c', 'd', 'g', 'h'}
A.difference_update(B)
print "A after A.difference_update(B)", A
A = {'a', 'b', 'c', 'd', 'e'}
B = {'c', 'd', 'g', 'h'}
A = A - B
print "A after A=A-B", A
Explanation: Set Intersection, Disjoint, Union, and Difference
End of explanation
A = {'a', 'b', 'c', 'd', 'e', 'f'}
B = {'c', 'd', 'e'}
print "Is A a subset of B? ", A.issubset(B)
print "Is A a superset of B? ", A.issuperset(B)
print "Is B a subset of A? ", B.issubset(A)
print "Is B a superset of A? ", B.issuperset(A)
# using the other notation
print "Is A a subset of B? ", A <= B
print "Is A a superset of B? ", A >= B
print "Is B a subset of A? ", B <= A
print "Is B a superset of A? ", B >= A
print "Is B a superset of B? ", B >= B
print "Is B a subset of B? ", B <= B
print "Is A a proper subset of B? ", A < B
print "Is B a proper subset of A? ", B < A
print "Is A a proper superset of B? ", A > B
print "Is B a proper superset of A? ", B > A
print "Is B a proper subset of B? ", B < B
print "Is B a proper superset of B? ", B > B
# pop() removes and returns an arbitrary set element. The method raises a KeyError if the set is empty
x = {"a","b","c","d","e"}
x.pop()
x.pop()
Explanation: Subsets and supersets
A.issubset(B) returns True if A is a subset of B. A <= B is another way of writing it.<br>
A.issuperset(B) returns True if A is a superset of B. A >= B is another way of writing it.
"<" is used to check if a set is a proper subset of another set. A proper subset B of the set A means that all the elements of B are in A but A has at least one more element that is not in B. Formally: a proper subset of a set A is a subset of A that is not equal to A.
">" is used to check if a set is a proper superset of another set. A set A that is a proper superset of B means that all elements of B are in A but A contains at least one element that is not in B. Formally: A proper superset of a set A is a superset of A that is not equal to A.
End of explanation
A = set([1, 2, 3])
B = set([3, 4, 5])
print "Symmetric diffrence between A and B ", A.symmetric_difference(B)
print "Symmetric difference between A and B using A ^ B is ", A ^ B
A.symmetric_difference_update(B) # this assigns the results of symmetric_difference to A
print "After symmetric_difference_update A is ", A
Explanation: Symmetric difference
The symmetric difference of two sets is the set of elements which are in one of either set, but not in both.
End of explanation
# print the elements of the set
A = set("James")
for n in A:
print n,
# print the elements of A on the same line
A = {'string', 1,2,3,4,5,3,'a',(1,2,3)}
for n in A:
print n,
# print the elements of A each on its own line
for n in A:
print n
# A.clear() clears the elements of A
A = {'string', 1,2,3,4,5,3,'a',(1,2,3)}
print "A before A.clear()", A
A.clear()
print "A after A.clear()", A
# A.copy() copies the contents of A into another set B, so clearing A won't affect the contents of B
A = {1,2,3,3,4,4,7,8,-1,3,4} # the set keeps only one copy of each repeated element
B = A.copy()
print "A is ", A
print "B after B=A.copy() is", B
A.clear()
print "A after A.clear()", A
print "B after A.clear()", B
# B = A makes B another reference to the same set object, so modifying A will also modify B
A = {1,2,3,3,4,4,7,8,-1,3,4} # the set keeps only one copy of each repeated element
B = A
print "A is ", A
print "B after B=A.copy() is", B
A.clear()
print "A after A.clear()", A
print "B after A.clear()", B
Explanation: Printing, Copying, & Clearing a set
End of explanation
<END_TASK> |
15,929 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Principal Component Analysis
by Rene Zhang and Max Margenot
Part of the Quantopian Lecture Series
Step1: We will introduce PCA with an image processing example. A grayscale digital image can be represented by a matrix, whose $(i,j)^{th}$ entry corresponds to the measurement of gray
scale at the $(i,j)^{th}$ pixel. The following gray-scale image has $200 \times 200$ pixels, though it can be changed on the fly. We store it in a matrix $\mathbf{X}$. The number of rows of the $\mathbf{X}$ is $200$, and the number of columns of $\mathbf{X}$ is $200$.
Step2: We generate a simple test image made of basic shapes: a rectangle, a triangle, and a circle.
Step3: Set each row as a variable, with observations in the columns. Denote the covariance matrix of $\mathbf{X}$ as $\mathbf{C}$, where the size of $\mathbf{C}$ is $m \times m$. $\mathbf{C}$ is a matrix whose $(i,j)^{th}$ entry is the covariance between the $i^{th}$ row and $j^{th}$ row of the matrix $\mathbf{X}$.
Step4: Performing principal component analysis decomposes the matrix $\mathbf{C}$ into
Step5: The function LA.eigh lists the eigenvalues from small to large in $P$. Let us change the order first to list them from largest to smallest and make sure that $\mathbf{L}\mathbf{P}\mathbf{L}^{\top}==\mathbf{C}$.
Step6: Here we plot all of the eigenvalues
Step7: The $i^{th}$ principal component is given as $i^{th}$ row of $\mathbf{V}$,
$$\mathbf{V} =\mathbf{L}^{\top} \mathbf{X}.$$
Step8: If we multiply both sides on the left by $\mathbf{L}$, we get the following
Step9: The proportion of total variance due to the $i^{th}$ principal component is given by the ratio $\frac{\lambda_i}{\lambda_1 + \lambda_2 + \dots + \lambda_m}.$ The sum of the proportions of total variance should be $1$. As we defined, $\lambda_i$ is the $i^{th}$ entry of $\mathbf{P}$,
$$\sum_{i}\frac{P_i}{\text{trace}(P)} = 1$$
where trace$(P)$ is the sum of the diagonal entries of $P$.
Step10: Recall the number of principal components is denoted as $k$. Let $k$ be $10, 20, 30, 60$ as examples and take a look at the corresponding approximated images.
Step11: The number of variables in $X$ is $200$. When reducing the dimension to $k=60$, which uses only 60 of the 200 principal components, the approximated image is close to the original one.
Moving forward, we do not have to do PCA by hand. Luckily, scikit-learn has an implementation that we can use. Next, let us show an example in quantitative finance using sklearn.
PCA on a Portfolio
Construct a portfolio with 10 stocks, IBM, MSFT, FB, T, INTC, ABX, NEM, AU, AEM, GFI. 5 of them are technology related and 5 of them are gold mining companies.
In this case, there are 10 variables (companies), and each column is a variable.
Step12: Notice that the grand bulk of the variance of the returns of these assets can be explained by the first two principal components.
Now we collect the first two principal components and plot their contributions.
Step13: From these principal components we can construct "statistical risk factors", similar to more conventional common risk factors. These should give us an idea of how much of the portfolio's returns comes from some unobservable statistical feature.
Step14: The factor returns here are an analogue to the principal component matrix $\mathbf{V}$ in the image processing example.
Step15: The factor exposures are an analogue to the eigenvector matrix $\mathbf{L}$ in the image processing example. | <ASSISTANT_TASK:>
Python Code:
from numpy import linalg as LA
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
Explanation: Principal Component Analysis
by Rene Zhang and Max Margenot
Part of the Quantopian Lecture Series:
www.quantopian.com/lectures
https://github.com/quantopian/research_public
Applications in many fields, such as image processing, bioinformatics, and quantitative finance, involve large-scale data. Both the size and complexity of this data can make the computations required for analysis practically infeasible. Principal Component Analysis (PCA) is a classical method for dimension reduction. It uses the first several principal components, statistical features that explain most of the variation of a $m \times n$ data matrix $\mathbf{X}$, to describe the large-scale data matrix $\mathbf{X}$ economically.
End of explanation
def generate_test_image(m,n):
X = np.zeros((m,n))
# generate a rectangle
X[25:80,25:80] = 1
# generate a triangle
for i in range(25, 80, 1):
X[i+80:160, 100+i-1] = 2
# generate a circle
for i in range(0,200,1):
for j in range(0,200,1):
if ((i - 135)*(i - 135) +(j - 53)*(j - 53) <= 900):
X[i, j] = 3
return X
X = generate_test_image(200,200)
Explanation: We will introduce PCA with an image processing example. A grayscale digital image can be represented by a matrix, whose $(i,j)^{th}$ entry corresponds to the measurement of gray
scale at the $(i,j)^{th}$ pixel. The following gray-scale image has $200 \times 200$ pixels, though it can be changed on the fly. We store it in a matrix $\mathbf{X}$. The number of rows of the $\mathbf{X}$ is $200$, and the number of columns of $\mathbf{X}$ is $200$.
End of explanation
imgplot = plt.imshow(X, cmap='gray')
plt.title('Original Test Image');
m = X.shape[0] # num of rows
n = X.shape[1] # num of columns
Explanation: We generate a simple test image made of basic shapes: a rectangle, a triangle, and a circle.
End of explanation
X = np.asarray(X, dtype=np.float64)
C = np.cov(X)
np.linalg.matrix_rank(C)
Explanation: Set each row as a variable, with observations in the columns. Denote the covariance matrix of $\mathbf{X}$ as $\mathbf{C}$, where the size of $\mathbf{C}$ is $m \times m$. $\mathbf{C}$ is a matrix whose $(i,j)^{th}$ entry is the covariance between the $i^{th}$ row and $j^{th}$ row of the matrix $\mathbf{X}$.
End of explanation
P, L = LA.eigh(C)
Explanation: Performing principal component analysis decomposes the matrix $\mathbf{C}$ into:
$$\mathbf{C} = \mathbf{L}\mathbf{P}\mathbf{L}^{\top},$$
where $\mathbf{P}$ is a diagonal matrix $\mathbf{P}=\text{diag}(\lambda_1,\lambda_2,\dots,\lambda_m)$, with $\lambda_1 \geq \lambda_2 \geq \dots \geq \lambda_m \geq 0$ being the eigenvalues of matrix $\mathbf{C}$. The matrix $\mathbf{L}$ is an orthogonal matrix consisting of the eigenvectors of matrix $\mathbf{C}$.
End of explanation
P = P[::-1]
L = L[:,::-1]
np.allclose(L.dot(np.diag(P)).dot(L.T), C)
Explanation: The function LA.eigh lists the eigenvalues from small to large in $P$. Let us change the order first to list them from largest to smallest and make sure that $\mathbf{L}\mathbf{P}\mathbf{L}^{\top}==\mathbf{C}$.
End of explanation
plt.semilogy(P, '-o')
plt.xlim([1, P.shape[0]])
plt.xlabel('eigenvalue index')
plt.ylabel('eigenvalue in a log scale')
plt.title('Eigenvalues of Covariance Matrix');
Explanation: Here we plot all of the eigenvalues:
End of explanation
V = L.T.dot(X)
V.shape
Explanation: The $i^{th}$ principal component is given as $i^{th}$ row of $\mathbf{V}$,
$$\mathbf{V} =\mathbf{L}^{\top} \mathbf{X}.$$
End of explanation
k = 200
X_tilde = L[:,0:k-1].dot(L[:,0:k-1].T).dot(X)
np.allclose(X_tilde, X)
plt.imshow(X_tilde, cmap='gray')
plt.title('Approximated Image with full rank');
Explanation: If we multiply both sides on the left by $\mathbf{L}$, we get the following:
$$\mathbf{L}\mathbf{L}^{\top} \mathbf{X}= \mathbf{L}\mathbf{V}.$$
The matrix $\mathbf{L}$ is the set of eigenvectors of a covariance matrix, so $\mathbf{L}\mathbf{L}^{\top} = \mathbf{I}$ and $\mathbf{L}\mathbf{L}^{\top}\mathbf{X} = \mathbf{X}$. The relationship among the matrices $\mathbf{X}$, $\mathbf{L}$, and $\mathbf{V}$ can be expressed as
$$\mathbf{X} = \mathbf{L}\mathbf{V}.$$
To approximate $\mathbf{X}$, we use the $k$ eigenvectors with the largest eigenvalues:
$$\mathbf{X} \approx \mathbf{L[:, 1:k]}\mathbf{L[:, 1:k]}^{\top} \mathbf{X}.$$
Denote the approximated $\mathbf{X}$ as $\tilde{\mathbf{X}} = \mathbf{L[:, 1:k]}\mathbf{L[:, 1:k]}^{\top} \mathbf{X}$. When $k = m$, $\tilde{\mathbf{X}}$ should be the same as $\mathbf{X}$.
End of explanation
(P/P.sum()).sum()
plt.plot((P/P.sum()).cumsum(), '-o')
plt.title('Cumulative Sum of the Proportion of Total Variance')
plt.xlabel('index')
plt.ylabel('Proportion');
Explanation: The proportion of total variance due to the $i^{th}$ principal component is given by the ratio $\frac{\lambda_i}{\lambda_1 + \lambda_2 + \dots + \lambda_m}.$ The sum of the proportions of total variance should be $1$. As we defined, $\lambda_i$ is the $i^{th}$ entry of $\mathbf{P}$,
$$\sum_{i}\frac{P_i}{\text{trace}(P)} = 1$$
where trace$(P)$ is the sum of the diagonal entries of $P$.
End of explanation
X_tilde_10 = L[:,0:10-1].dot(V[0:10-1,:])
X_tilde_20 = L[:,0:20-1].dot(V[0:20-1,:])
X_tilde_30 = L[:,0:30-1].dot(V[0:30-1,:])
X_tilde_60 = L[:,0:60-1].dot(V[0:60-1,:])
fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2, figsize=(12, 12))
ax1.imshow(X_tilde_10, cmap='gray')
ax1.set(title='Approximated Image with k = 10')
ax2.imshow(X_tilde_20, cmap='gray')
ax2.set(title='Approximated Image with k = 20')
ax3.imshow(X_tilde_30, cmap='gray')
ax3.set(title='Approximated Image with k = 30')
ax4.imshow(X_tilde_60, cmap='gray')
ax4.set(title='Approximated Image with k = 60');
Explanation: Recall the number of principal components is denoted as $k$. Let $k$ be $10, 20, 30, 60$ as examples and take a look at the corresponding approximated images.
End of explanation
symbol = ['IBM','MSFT', 'FB', 'T', 'INTC', 'ABX','NEM', 'AU', 'AEM', 'GFI']
start = "2015-09-01"
end = "2016-11-01"
portfolio_returns = get_pricing(symbol, start_date=start, end_date=end, fields="price").pct_change()[1:]
from sklearn.decomposition import PCA
num_pc = 2
X = np.asarray(portfolio_returns)
[n,m] = X.shape
print 'The number of timestamps is {}.'.format(n)
print 'The number of stocks is {}.'.format(m)
pca = PCA(n_components=num_pc) # number of principal components
pca.fit(X)
percentage = pca.explained_variance_ratio_
percentage_cum = np.cumsum(percentage)
print '{0:.2f}% of the variance is explained by the first 2 PCs'.format(percentage_cum[-1]*100)
pca_components = pca.components_
Explanation: The number of variables in $X$ is $200$. When reducing the dimension to $k=60$, which uses only 60 of the 200 principal components, the approximated image is close to the original one.
Moving forward, we do not have to do PCA by hand. Luckily, scikit-learn has an implementation that we can use. Next, let us show an example in quantitative finance using sklearn.
PCA on a Portfolio
Construct a portfolio with 10 stocks, IBM, MSFT, FB, T, INTC, ABX, NEM, AU, AEM, GFI. 5 of them are technology related and 5 of them are gold mining companies.
In this case, there are 10 variables (companies), and each column is a variable.
End of explanation
x = np.arange(1,len(percentage)+1,1)
plt.subplot(1, 2, 1)
plt.bar(x, percentage*100, align = "center")
plt.title('Contribution of principal components',fontsize = 16)
plt.xlabel('principal components',fontsize = 16)
plt.ylabel('percentage',fontsize = 16)
plt.xticks(x,fontsize = 16)
plt.yticks(fontsize = 16)
plt.xlim([0, num_pc+1])
plt.subplot(1, 2, 2)
plt.plot(x, percentage_cum*100,'ro-')
plt.xlabel('principal components',fontsize = 16)
plt.ylabel('percentage',fontsize = 16)
plt.title('Cumulative contribution of principal components',fontsize = 16)
plt.xticks(x,fontsize = 16)
plt.yticks(fontsize = 16)
plt.xlim([1, num_pc])
plt.ylim([50,100]);
Explanation: Notice that the grand bulk of the variance of the returns of these assets can be explained by the first two principal components.
Now we collect the first two principal components and plot their contributions.
End of explanation
factor_returns = X.dot(pca_components.T)
factor_returns = pd.DataFrame(columns=["factor 1", "factor 2"],
index=portfolio_returns.index,
data=factor_returns)
factor_returns.head()
Explanation: From these principal components we can construct "statistical risk factors", similar to more conventional common risk factors. These should give us an idea of how much of the portfolio's returns comes from some unobservable statistical feature.
End of explanation
factor_exposures = pd.DataFrame(index=["factor 1", "factor 2"],
columns=portfolio_returns.columns,
data = pca.components_).T
factor_exposures
Explanation: The factor returns here are an analogue to the principal component matrix $\mathbf{V}$ in the image processing example.
End of explanation
labels = factor_exposures.index
data = factor_exposures.values
plt.subplots_adjust(bottom = 0.1)
plt.scatter(
data[:, 0], data[:, 1], marker='o', s=300, c='m',
cmap=plt.get_cmap('Spectral'))
plt.title('Scatter Plot of Coefficients of PC1 and PC2')
plt.xlabel('factor exposure of PC1')
plt.ylabel('factor exposure of PC2')
for label, x, y in zip(labels, data[:, 0], data[:, 1]):
plt.annotate(
label,
xy=(x, y), xytext=(-20, 20),
textcoords='offset points', ha='right', va='bottom',
bbox=dict(boxstyle='round,pad=0.5', fc='yellow', alpha=0.5),
arrowprops=dict(arrowstyle = '->', connectionstyle='arc3,rad=0')
);
Explanation: The factor exposures are an analogue to the eigenvector matrix $\mathbf{L}$ in the image processing example.
End of explanation
<END_TASK> |
15,930 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Face Generation
In this project, you'll use generative adversarial networks to generate new images of faces.
Get the Data
You'll be using two datasets in this project
Step3: Explore the Data
MNIST
As you're aware, the MNIST dataset contains images of handwritten digits. You can view the first number of examples by changing show_n_images.
Step5: CelebA
The CelebFaces Attributes Dataset (CelebA) dataset contains over 200,000 celebrity images with annotations. Since you're going to be generating faces, you won't need the annotations. You can view the first number of examples by changing show_n_images.
Step7: Preprocess the Data
Since the project's main focus is on building the GANs, we'll preprocess the data for you. The values of the MNIST and CelebA dataset will be in the range of -0.5 to 0.5 of 28x28 dimensional images. The CelebA images will be cropped to remove parts of the image that don't include a face, then resized down to 28x28.
The MNIST images are black and white images with a single [color channel](https
Step10: Input
Implement the model_inputs function to create TF Placeholders for the Neural Network. It should create the following placeholders
Step13: Discriminator
Implement discriminator to create a discriminator neural network that discriminates on images. This function should be able to reuse the variables in the neural network. Use tf.variable_scope with a scope name of "discriminator" to allow the variables to be reused. The function should return a tuple of (tensor output of the discriminator, tensor logits of the discriminator).
Step16: Generator
Implement generator to generate an image using z. This function should be able to reuse the variables in the neural network. Use tf.variable_scope with a scope name of "generator" to allow the variables to be reused. The function should return the generated 28 x 28 x out_channel_dim images.
Step19: Loss
Implement model_loss to build the GANs for training and calculate the loss. The function should return a tuple of (discriminator loss, generator loss). Use the following functions you implemented
Step22: Optimization
Implement model_opt to create the optimization operations for the GANs. Use tf.trainable_variables to get all the trainable variables. Filter the variables with names that are in the discriminator and generator scope names. The function should return a tuple of (discriminator training operation, generator training operation).
Step25: Neural Network Training
Show Output
Use this function to show the current output of the generator during training. It will help you determine how well the GANs is training.
Step27: Train
Implement train to build and train the GANs. Use the following functions you implemented
Step29: MNIST
Test your GANs architecture on MNIST. After 2 epochs, the GANs should be able to generate images that look like handwritten digits. Make sure the loss of the generator is lower than the loss of the discriminator or close to 0.
Step31: CelebA
Run your GANs on CelebA. It will take around 20 minutes on the average GPU to run one epoch. You can run the whole epoch or stop when it starts to generate realistic faces. | <ASSISTANT_TASK:>
Python Code:
data_dir = './data'
# FloydHub - Use with data ID "R5KrjnANiKVhLWAkpXhNBe"
#data_dir = '/input'
DON'T MODIFY ANYTHING IN THIS CELL
import helper
helper.download_extract('mnist', data_dir)
helper.download_extract('celeba', data_dir)
Explanation: Face Generation
In this project, you'll use generative adversarial networks to generate new images of faces.
Get the Data
You'll be using two datasets in this project:
- MNIST
- CelebA
Since the celebA dataset is complex and you're doing GANs in a project for the first time, we want you to test your neural network on MNIST before CelebA. Running the GANs on MNIST will allow you to see how well your model trains sooner.
If you're using FloydHub, set data_dir to "/input" and use the FloydHub data ID "R5KrjnANiKVhLWAkpXhNBe".
End of explanation
show_n_images = 25
DON'T MODIFY ANYTHING IN THIS CELL
%matplotlib inline
import os
from glob import glob
from matplotlib import pyplot
mnist_images = helper.get_batch(glob(os.path.join(data_dir, 'mnist/*.jpg'))[:show_n_images], 28, 28, 'L')
pyplot.imshow(helper.images_square_grid(mnist_images, 'L'), cmap='gray')
Explanation: Explore the Data
MNIST
As you're aware, the MNIST dataset contains images of handwritten digits. You can view the first number of examples by changing show_n_images.
End of explanation
show_n_images = 25
DON'T MODIFY ANYTHING IN THIS CELL
mnist_images = helper.get_batch(glob(os.path.join(data_dir, 'img_align_celeba/*.jpg'))[:show_n_images], 28, 28, 'RGB')
pyplot.imshow(helper.images_square_grid(mnist_images, 'RGB'))
Explanation: CelebA
The CelebFaces Attributes Dataset (CelebA) dataset contains over 200,000 celebrity images with annotations. Since you're going to be generating faces, you won't need the annotations. You can view the first number of examples by changing show_n_images.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer. You are using {}'.format(tf.__version__)
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
Explanation: Preprocess the Data
Since the project's main focus is on building the GANs, we'll preprocess the data for you. The values of the MNIST and CelebA dataset will be in the range of -0.5 to 0.5 of 28x28 dimensional images. The CelebA images will be cropped to remove parts of the image that don't include a face, then resized down to 28x28.
The MNIST images are black and white images with a single [color channel](https://en.wikipedia.org/wiki/Channel_(digital_image%29) while the CelebA images have [3 color channels (RGB color channel)](https://en.wikipedia.org/wiki/Channel_(digital_image%29#RGB_Images).
Build the Neural Network
You'll build the components necessary to build a GANs by implementing the following functions below:
- model_inputs
- discriminator
- generator
- model_loss
- model_opt
- train
Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
End of explanation
import problem_unittests as tests
def model_inputs(image_width, image_height, image_channels, z_dim):
Create the model inputs
:param image_width: The input image width
:param image_height: The input image height
:param image_channels: The number of image channels
:param z_dim: The dimension of Z
:return: Tuple of (tensor of real input images, tensor of z data, learning rate)
# TODO: Implement Function
inputs_real = tf.placeholder(tf.float32, (None, image_width, image_height, image_channels), name='input_real')
inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='input_z')
learning_rate = tf.placeholder(tf.float32, name='learning_rate')
return inputs_real, inputs_z, learning_rate
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_inputs(model_inputs)
Explanation: Input
Implement the model_inputs function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Real input images placeholder with rank 4 using image_width, image_height, and image_channels.
- Z input placeholder with rank 2 using z_dim.
- Learning rate placeholder with rank 0.
Return the placeholders in the following the tuple (tensor of real input images, tensor of z data)
End of explanation
def discriminator(images, reuse=False):
Create the discriminator network
:param image: Tensor of input image(s)
:param reuse: Boolean if the weights should be reused
:return: Tuple of (tensor output of the discriminator, tensor logits of the discriminator)
# TODO: Implement Function
alpha=0.2
# TODO: Implement Function
with tf.variable_scope('discriminator', reuse=reuse):
# input 28x28x3
x1 = tf.layers.conv2d(images, 64, 5, strides=1,padding='same')
relu1 = tf.maximum(alpha*x1, x1)
x2 = tf.layers.conv2d(relu1, 128, 5, strides=2, padding='same')
bn2 = tf.layers.batch_normalization(x2, training=True)
relu2 = tf.maximum(alpha * bn2, bn2)
x3 = tf.layers.conv2d(relu2, 256, 5, strides=2, padding='same')  # 7x7x256 so it matches the flatten below
bn3 = tf.layers.batch_normalization(x3, training=True)
relu3 = tf.maximum(alpha*bn3, bn3)
flat = tf.reshape(relu3, (-1, 7*7*256))
logits = tf.layers.dense(flat, 1)
out = tf.sigmoid(logits)
return out, logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_discriminator(discriminator, tf)
Explanation: Discriminator
Implement discriminator to create a discriminator neural network that discriminates on images. This function should be able to reuse the variables in the neural network. Use tf.variable_scope with a scope name of "discriminator" to allow the variables to be reused. The function should return a tuple of (tensor output of the discriminator, tensor logits of the discriminator).
End of explanation
def generator(z, out_channel_dim, is_train=True):
Create the generator network
:param z: Input z
:param out_channel_dim: The number of channels in the output image
:param is_train: Boolean if generator is being used for training
:return: The tensor output of the generator
# TODO: Implement Function
alpha = 0.2
# TODO: Implement Function
with tf.variable_scope('generator', reuse=not is_train):
x1 = tf.layers.dense(z, 7*7*512)
x1 = tf.reshape(x1, (-1, 7, 7, 512))
x1 = tf.layers.batch_normalization(x1, training=is_train)
x1 = tf.maximum(alpha*x1, x1)
x2 = tf.layers.conv2d_transpose(x1, 256, 5, strides = 1, padding='same')
x2 = tf.layers.batch_normalization(x2, training=is_train)
x2 = tf.maximum(alpha*x2, x2)
x3 = tf.layers.conv2d_transpose(x2, 128, 5, strides = 2, padding='same')
x3 = tf.layers.batch_normalization(x3, training=is_train)
x3 = tf.maximum(alpha*x3, x3)
logits = tf.layers.conv2d_transpose(x3, out_channel_dim, 5, strides = 2, padding='same')
out = tf.tanh(logits)
return out
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_generator(generator, tf)
Explanation: Generator
Implement generator to generate an image using z. This function should be able to reuse the variables in the neural network. Use tf.variable_scope with a scope name of "generator" to allow the variables to be reused. The function should return the generated 28 x 28 x out_channel_dim images.
End of explanation
def model_loss(input_real, input_z, out_channel_dim):
Get the loss for the discriminator and generator
:param input_real: Images from the real dataset
:param input_z: Z input
:param out_channel_dim: The number of channels in the output image
:return: A tuple of (discriminator loss, generator loss)
# TODO: Implement Function
g_model = generator(input_z, out_channel_dim, is_train=True)
d_model_real, d_logits_real = discriminator(input_real, reuse=False)
d_model_fake, d_logits_fake = discriminator(g_model, reuse=True)
d_loss_real = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, labels=tf.ones_like(d_model_real)))
d_loss_fake = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.zeros_like(d_model_fake)))
g_loss = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.ones_like(d_model_fake)))
d_loss = d_loss_real + d_loss_fake
return d_loss, g_loss
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_loss(model_loss)
Explanation: Loss
Implement model_loss to build the GANs for training and calculate the loss. The function should return a tuple of (discriminator loss, generator loss). Use the following functions you implemented:
- discriminator(images, reuse=False)
- generator(z, out_channel_dim, is_train=True)
End of explanation
def model_opt(d_loss, g_loss, learning_rate, beta1):
Get optimization operations
:param d_loss: Discriminator loss Tensor
:param g_loss: Generator loss Tensor
:param learning_rate: Learning Rate Placeholder
:param beta1: The exponential decay rate for the 1st moment in the optimizer
:return: A tuple of (discriminator training operation, generator training operation)
# TODO: Implement Function
t_vars = tf.trainable_variables()
d_vars = [var for var in t_vars if var.name.startswith('discriminator')]
g_vars = [var for var in t_vars if var.name.startswith('generator')]
# Optimize
all_update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
g_update_ops = [var for var in all_update_ops if var.name.startswith('generator')]
d_update_ops = [var for var in all_update_ops if var.name.startswith('discriminator')]
with tf.control_dependencies(d_update_ops):
d_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(d_loss, var_list=d_vars)
with tf.control_dependencies(g_update_ops):
g_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(g_loss, var_list=g_vars)
return d_train_opt, g_train_opt
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_opt(model_opt, tf)
Explanation: Optimization
Implement model_opt to create the optimization operations for the GANs. Use tf.trainable_variables to get all the trainable variables. Filter the variables with names that are in the discriminator and generator scope names. The function should return a tuple of (discriminator training operation, generator training operation).
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
def show_generator_output(sess, n_images, input_z, out_channel_dim, image_mode):
Show example output for the generator
:param sess: TensorFlow session
:param n_images: Number of Images to display
:param input_z: Input Z Tensor
:param out_channel_dim: The number of channels in the output image
:param image_mode: The mode to use for images ("RGB" or "L")
cmap = None if image_mode == 'RGB' else 'gray'
z_dim = input_z.get_shape().as_list()[-1]
example_z = np.random.uniform(-1, 1, size=[n_images, z_dim])
samples = sess.run(
generator(input_z, out_channel_dim, False),
feed_dict={input_z: example_z})
images_grid = helper.images_square_grid(samples, image_mode)
pyplot.imshow(images_grid, cmap=cmap)
pyplot.show()
Explanation: Neural Network Training
Show Output
Use this function to show the current output of the generator during training. It will help you determine how well the GANs is training.
End of explanation
def train(epoch_count, batch_size, z_dim, learning_rate, beta1, get_batches, data_shape, data_image_mode):
Train the GAN
:param epoch_count: Number of epochs
:param batch_size: Batch Size
:param z_dim: Z dimension
:param learning_rate: Learning Rate
:param beta1: The exponential decay rate for the 1st moment in the optimizer
:param get_batches: Function to get batches
:param data_shape: Shape of the data
:param data_image_mode: The image mode to use for images ("RGB" or "L")
# TODO: Build Model
z_size = z_dim
steps = 0
input_real, input_z, lr_ = model_inputs(*data_shape[1:4], z_dim)  # lr_ placeholder is not fed; the learning_rate float goes straight to model_opt
d_loss, g_loss = model_loss(input_real, input_z, data_shape[3])
d_opt, g_opt = model_opt(d_loss, g_loss, learning_rate, beta1)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epoch_count):
for batch_images in get_batches(batch_size):
# TODO: Train Model
steps += 1
batch_images = 2*batch_images  # rescale from [-0.5, 0.5] to [-1, 1] to match the generator's tanh output
# Sample random noise for G
batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))
# Run optimizers
_ = sess.run(d_opt, feed_dict={input_real: batch_images, input_z: batch_z})
_ = sess.run(g_opt, feed_dict={input_z: batch_z})
if steps % 10 == 0:
# At the end of each epoch, get the losses and print them out
train_loss_d = d_loss.eval({input_z: batch_z, input_real: batch_images})
train_loss_g = g_loss.eval({input_z: batch_z})
print("Epoch {}/{}...".format(epoch_i+1, epoch_count),
"Discriminator Loss: {:.4f}...".format(train_loss_d),
"Generator Loss: {:.4f}".format(train_loss_g))
if steps % 100 == 0:
show_generator_output(sess, 6, input_z, data_shape[3], data_image_mode)
Explanation: Train
Implement train to build and train the GANs. Use the following functions you implemented:
- model_inputs(image_width, image_height, image_channels, z_dim)
- model_loss(input_real, input_z, out_channel_dim)
- model_opt(d_loss, g_loss, learning_rate, beta1)
Use the show_generator_output to show generator output while you train. Running show_generator_output for every batch will drastically increase training time and increase the size of the notebook. It's recommended to print the generator output every 100 batches.
End of explanation
batch_size = 32
z_dim = 100
learning_rate = 0.002
beta1 = 0.5
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
epochs = 2
mnist_dataset = helper.Dataset('mnist', glob(os.path.join(data_dir, 'mnist/*.jpg')))
with tf.Graph().as_default():
train(epochs, batch_size, z_dim, learning_rate, beta1, mnist_dataset.get_batches,
mnist_dataset.shape, mnist_dataset.image_mode)
Explanation: MNIST
Test your GANs architecture on MNIST. After 2 epochs, the GANs should be able to generate images that look like handwritten digits. Make sure the loss of the generator is lower than the loss of the discriminator or close to 0.
End of explanation
batch_size = 128
z_dim = 100
learning_rate = 0.002
beta1 = 0.5
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
epochs = 1
celeba_dataset = helper.Dataset('celeba', glob(os.path.join(data_dir, 'img_align_celeba/*.jpg')))
with tf.Graph().as_default():
train(epochs, batch_size, z_dim, learning_rate, beta1, celeba_dataset.get_batches,
celeba_dataset.shape, celeba_dataset.image_mode)
Explanation: CelebA
Run your GANs on CelebA. It will take around 20 minutes on the average GPU to run one epoch. You can run the whole epoch or stop when it starts to generate realistic faces.
End of explanation
<END_TASK> |
15,931 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Sentiment Analysis with an RNN
In this notebook, you'll implement a recurrent neural network that performs sentiment analysis. Using an RNN rather than a feedfoward network is more accurate since we can include information about the sequence of words. Here we'll use a dataset of movie reviews, accompanied by labels.
The architecture for this network is shown below.
<img src="assets/network_diagram.png" width=400px>
Here, we'll pass in words to an embedding layer. We need an embedding layer because we have tens of thousands of words, so we'll need a more efficient representation for our input data than one-hot encoded vectors. You should have seen this before from the word2vec lesson. You can actually train up an embedding with word2vec and use it here. But it's good enough to just have an embedding layer and let the network learn the embedding table on it's own.
From the embedding layer, the new representations will be passed to LSTM cells. These will add recurrent connections to the network so we can include information about the sequence of words in the data. Finally, the LSTM cells will go to a sigmoid output layer here. We're using the sigmoid because we're trying to predict if this text has positive or negative sentiment. The output layer will just be a single unit then, with a sigmoid activation function.
We don't care about the sigmoid outputs except for the very last one, we can ignore the rest. We'll calculate the cost from the output of the last step and the training label.
Step1: Data preprocessing
The first step when building a neural network model is getting your data into the proper form to feed into the network. Since we're using embedding layers, we'll need to encode each word with an integer. We'll also want to clean it up a bit.
You can see an example of the reviews data above. We'll want to get rid of those periods. Also, you might notice that the reviews are delimited with newlines \n. To deal with those, I'm going to split the text into each review using \n as the delimiter. Then I can combined all the reviews back together into one big string.
First, let's remove all punctuation. Then get all the text without the newlines and split it into individual words.
Step2: Encoding the words
The embedding lookup requires that we pass in integers to our network. The easiest way to do this is to create dictionaries that map the words in the vocabulary to integers. Then we can convert each of our reviews into integers so they can be passed into the network.
Exercise
Step3: Encoding the labels
Our labels are "positive" or "negative". To use these labels in our network, we need to convert them to 0 and 1.
Exercise
Step4: If you built labels correctly, you should see the next output.
Step5: Okay, a couple issues here. We seem to have one review with zero length. And, the maximum review length is way too many steps for our RNN. Let's truncate to 200 steps. For reviews shorter than 200, we'll pad with 0s. For reviews longer than 200, we can truncate them to the first 200 characters.
Exercise
Step6: Exercise
Step7: If you build features correctly, it should look like that cell output below.
Step8: Training, Validation, Test
With our data in nice shape, we'll split it into training, validation, and test sets.
Exercise
Step9: With train, validation, and text fractions of 0.8, 0.1, 0.1, the final shapes should look like
Step10: For the network itself, we'll be passing in our 200 element long review vectors. Each batch will be batch_size vectors. We'll also be using dropout on the LSTM layer, so we'll make a placeholder for the keep probability.
Exercise
Step11: Embedding
Now we'll add an embedding layer. We need to do this because there are 74000 words in our vocabulary. It is massively inefficient to one-hot encode our classes here. You should remember dealing with this problem from the word2vec lesson. Instead of one-hot encoding, we can have an embedding layer and use that layer as a lookup table. You could train an embedding layer using word2vec, then load it here. But, it's fine to just make a new layer and let the network learn the weights.
Exercise
Step12: LSTM cell
<img src="assets/network_diagram.png" width=400px>
Next, we'll create our LSTM cells to use in the recurrent network (TensorFlow documentation). Here we are just defining what the cells look like. This isn't actually building the graph, just defining the type of cells we want in our graph.
To create a basic LSTM cell for the graph, you'll want to use tf.contrib.rnn.BasicLSTMCell. Looking at the function documentation
Step13: RNN forward pass
<img src="assets/network_diagram.png" width=400px>
Now we need to actually run the data through the RNN nodes. You can use tf.nn.dynamic_rnn to do this. You'd pass in the RNN cell you created (our multiple layered LSTM cell for instance), and the inputs to the network.
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, initial_state=initial_state)
Above I created an initial state, initial_state, to pass to the RNN. This is the cell state that is passed between the hidden layers in successive time steps. tf.nn.dynamic_rnn takes care of most of the work for us. We pass in our cell and the input to the cell, then it does the unrolling and everything else for us. It returns outputs for each time step and the final_state of the hidden layer.
Exercise
Step14: Output
We only care about the final output, we'll be using that as our sentiment prediction. So we need to grab the last output with outputs[
Step15: Validation accuracy
Here we can add a few nodes to calculate the accuracy which we'll use in the validation pass.
Step16: Batching
This is a simple function for returning batches from our data. First it removes data such that we only have full batches. Then it iterates through the x and y arrays and returns slices out of those arrays with size [batch_size].
Step17: Training
Below is the typical training code. If you want to do this yourself, feel free to delete all this code and implement it yourself. Before you run this, make sure the checkpoints directory exists.
Step18: Testing
Step26: Test accuracy | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import tensorflow as tf
with open('../sentiment-network/reviews.txt', 'r') as f:
reviews = f.read()
with open('../sentiment-network/labels.txt', 'r') as f:
labels_orig = f.read()
reviews[:2000]
Explanation: Sentiment Analysis with an RNN
In this notebook, you'll implement a recurrent neural network that performs sentiment analysis. Using an RNN rather than a feedforward network is more accurate since we can include information about the sequence of words. Here we'll use a dataset of movie reviews, accompanied by labels.
The architecture for this network is shown below.
<img src="assets/network_diagram.png" width=400px>
Here, we'll pass in words to an embedding layer. We need an embedding layer because we have tens of thousands of words, so we'll need a more efficient representation for our input data than one-hot encoded vectors. You should have seen this before from the word2vec lesson. You can actually train up an embedding with word2vec and use it here. But it's good enough to just have an embedding layer and let the network learn the embedding table on its own.
From the embedding layer, the new representations will be passed to LSTM cells. These will add recurrent connections to the network so we can include information about the sequence of words in the data. Finally, the LSTM cells will go to a sigmoid output layer here. We're using the sigmoid because we're trying to predict if this text has positive or negative sentiment. The output layer will just be a single unit then, with a sigmoid activation function.
We don't care about the sigmoid outputs except for the very last one, we can ignore the rest. We'll calculate the cost from the output of the last step and the training label.
End of explanation
from string import punctuation
all_text = ''.join([c for c in reviews if c not in punctuation])
reviews = all_text.split('\n')
all_text = ' '.join(reviews)
words = all_text.split()
all_text[:2000]
words[:100]
Explanation: Data preprocessing
The first step when building a neural network model is getting your data into the proper form to feed into the network. Since we're using embedding layers, we'll need to encode each word with an integer. We'll also want to clean it up a bit.
You can see an example of the reviews data above. We'll want to get rid of those periods. Also, you might notice that the reviews are delimited with newlines \n. To deal with those, I'm going to split the text into each review using \n as the delimiter. Then I can combine all the reviews back together into one big string.
First, let's remove all punctuation. Then get all the text without the newlines and split it into individual words.
End of explanation
import pickle
with open('../embeddings/word_embeddings.pkl', 'rb') as f:
[vocab_to_int, embed_mat] = pickle.load(f)
embed_mat.shape
bak = embed_mat[vocab_to_int['the'],:]
# Reorganize so that 0 can be the empty string.
embed_mat = np.concatenate((np.random.uniform(-1,1, (1,embed_mat.shape[1])),
embed_mat),
axis=0)
vocab_to_int = {k:v+1 for k,v in vocab_to_int.items()}
# embed_mat = embed_mat.copy()
# embed_mat.resize((embed_mat.shape[0]+1, embed_mat.shape[1]))
# embed_mat[-1,:] = embed_mat[0]
# embed_mat[0,:] = np.random.uniform(-1,1, (1,embed_mat.shape[1]))
# embed_mat.shape
vocab_to_int[''] = 0
assert(all(bak == embed_mat[vocab_to_int['the'],:]))
[k for k,v in vocab_to_int.items() if v == 0]
embed_mat[vocab_to_int['stupid'],:]
non_words = set(['','.','\n'])
extra_words = set([w for w in set(words) if w not in vocab_to_int and w not in non_words])
new_vocab = [(word, index) for index,word in enumerate(extra_words, len(vocab_to_int))]
embed_mat = np.concatenate(
(embed_mat,
np.random.uniform(-1,1, (len(extra_words), embed_mat.shape[1]))),
axis=0)
print("added {} extra words".format(len(extra_words)))
vocab_to_int.update(new_vocab)
del extra_words
del new_vocab
37807/63641
reviews_ints = [[vocab_to_int[word] for word in review.split(' ') if word not in non_words] for review in reviews]
set([word for word in set(words) if word not in vocab_to_int])
len(vocab_to_int)
Explanation: Encoding the words
The embedding lookup requires that we pass in integers to our network. The easiest way to do this is to create dictionaries that map the words in the vocabulary to integers. Then we can convert each of our reviews into integers so they can be passed into the network.
Exercise: Now you're going to encode the words with integers. Build a dictionary that maps words to integers. Later we're going to pad our input vectors with zeros, so make sure the integers start at 1, not 0.
Also, convert the reviews to integers and store the reviews in a new list called reviews_ints.
End of explanation
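# A from-scratch sketch of the exercise as stated above (building the vocabulary from the
# corpus itself instead of loading pre-trained embeddings, as this notebook does). The most
# frequent word gets integer 1 so that 0 stays free for padding; the name
# vocab_to_int_scratch is an assumption so it does not clobber the dictionary loaded above.
from collections import Counter
word_counts = Counter(words)
sorted_vocab = sorted(word_counts, key=word_counts.get, reverse=True)
vocab_to_int_scratch = {word: ii for ii, word in enumerate(sorted_vocab, 1)}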
# Convert labels to 1s and 0s for 'positive' and 'negative'
labels = np.array([(0 if l == 'negative' else 1) for l in labels_orig.split('\n')])
Explanation: Encoding the labels
Our labels are "positive" or "negative". To use these labels in our network, we need to convert them to 0 and 1.
Exercise: Convert labels from positive and negative to 1 and 0, respectively.
End of explanation
from collections import Counter
review_lens = Counter([len(x) for x in reviews_ints])
print("Zero-length reviews: {}".format(review_lens[0]))
print("Maximum review length: {}".format(max(review_lens)))
Explanation: If you built labels correctly, you should see the next output.
End of explanation
x = [1,2,3]
x[:10]
# Filter out that review with 0 length
new_values = [(review_ints[:200], label) for review_ints,label
in zip(reviews_ints, labels)
if len(review_ints) > 0]
reviews_ints, labels = zip(*new_values)
Explanation: Okay, a couple issues here. We seem to have one review with zero length. And, the maximum review length is way too many steps for our RNN. Let's truncate to 200 steps. For reviews shorter than 200, we'll pad with 0s. For reviews longer than 200, we can truncate them to the first 200 words.
Exercise: First, remove the review with zero length from the reviews_ints list.
End of explanation
seq_len = 200
features = np.array([([0] * (seq_len-len(review))) + review for review in reviews_ints])
labels = np.array(labels)
Explanation: Exercise: Now, create an array features that contains the data we'll pass to the network. The data should come from reviews_ints, since we want to feed integers to the network. Each row should be 200 elements long. For reviews shorter than 200 words, left pad with 0s. That is, if the review is ['best', 'movie', 'ever'], [117, 18, 128] as integers, the row will look like [0, 0, 0, ..., 0, 117, 18, 128]. For reviews longer than 200, use only the first 200 words as the feature vector.
This isn't trivial and there are a bunch of ways to do this. But, if you're going to be building your own deep learning networks, you're going to have to get used to preparing your data.
End of explanation
review = reviews_ints[0]
len(review)
features[:10,:]
Explanation: If you build features correctly, it should look like the cell output below.
End of explanation
split_frac = 0.8
split_tv = int(features.shape[0] * split_frac)
split_vt = int(round(features.shape[0] * (1-split_frac) / 2)) + split_tv
train_x = features[:split_tv,:]
train_y = labels[:split_tv]   # align with train_x (was labels[:split_vt], which also swallowed the validation rows)
val_x = features[split_tv:split_vt,:]
val_y = labels[split_tv:split_vt]
test_x = features[split_vt:,:]
test_y = labels[split_vt: ]
print("\t\t\tFeature Shapes:")
print("Train set: \t\t{}".format(train_x.shape),
"\nValidation set: \t{}".format(val_x.shape),
"\nTest set: \t\t{}".format(test_x.shape))
Explanation: Training, Validation, Test
With our data in nice shape, we'll split it into training, validation, and test sets.
Exercise: Create the training, validation, and test sets here. You'll need to create sets for the features and the labels, train_x and train_y for example. Define a split fraction, split_frac as the fraction of data to keep in the training set. Usually this is set to 0.8 or 0.9. The rest of the data will be split in half to create the validation and testing data.
End of explanation
#run_number = 7
if 'run_number' in locals():
run_number += 1
else:
run_number = 1
run_number
lstm_size = 512
lstm_layers = 1
batch_size = 500
learning_rate = 0.001
Explanation: With train, validation, and test fractions of 0.8, 0.1, 0.1, the final shapes should look like:
Feature Shapes:
Train set: (20000, 200)
Validation set: (2500, 200)
Test set: (2500, 200)
Build the graph
Here, we'll build the graph. First up, defining the hyperparameters.
lstm_size: Number of units in the hidden layers in the LSTM cells. Usually larger is better performance wise. Common values are 128, 256, 512, etc.
lstm_layers: Number of LSTM layers in the network. I'd start with 1, then add more if I'm underfitting.
batch_size: The number of reviews to feed the network in one training pass. Typically this should be set as high as you can go without running out of memory.
learning_rate: Learning rate
End of explanation
n_words = len(vocab_to_int)
# Create the graph object
graph = tf.Graph()
# Add nodes to the graph
with graph.as_default():
inputs_ = tf.placeholder(tf.int32, shape=(None,seq_len), name='inputs')
labels_ = tf.placeholder(tf.int32, shape=(None,1), name='labels')
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
n_words
Explanation: For the network itself, we'll be passing in our 200 element long review vectors. Each batch will be batch_size vectors. We'll also be using dropout on the LSTM layer, so we'll make a placeholder for the keep probability.
Exercise: Create the inputs_, labels_, and drop out keep_prob placeholders using tf.placeholder. labels_ needs to be two-dimensional to work with some functions later. Since keep_prob is a scalar (a 0-dimensional tensor), you shouldn't provide a size to tf.placeholder.
End of explanation
import os
os.makedirs('./logs/{}/val'.format(run_number), exist_ok=True)
word_order = {v:k for k,v in vocab_to_int.items()}
embedding_metadata_file = './logs/{}/val/metadata.tsv'.format(run_number)
with open(embedding_metadata_file, 'w') as f:
for i in range(len(word_order)):
f.write(word_order[i]+'\n')
projector_config = tf.contrib.tensorboard.plugins.projector.ProjectorConfig()
embedding_config = projector_config.embeddings.add()
embed_mat.dtype
# Size of the embedding vectors (number of units in the embedding layer)
if False:
embed_size = 300
with graph.as_default():
with tf.name_scope('embedding'):
embedding = tf.Variable(
tf.random_uniform((n_words,embed_size),
-1,1), name="word_embedding")
embedding_config.tensor_name = embedding.name
embedding_config.metadata_path = embedding_metadata_file
embed = tf.nn.embedding_lookup(embedding, inputs_)
tf.summary.histogram('embedding', embedding)
else:
embed_size = embed_mat.shape[1]
with graph.as_default():
with tf.name_scope('embedding'):
embedding = tf.Variable(embed_mat, name="word_embedding", dtype=tf.float32)
embedding_config.tensor_name = embedding.name
embedding_config.metadata_path = embedding_metadata_file
embed = tf.nn.embedding_lookup(embedding, inputs_)
tf.summary.histogram('embedding', embedding)
Explanation: Embedding
Now we'll add an embedding layer. We need to do this because there are 74000 words in our vocabulary. It is massively inefficient to one-hot encode our classes here. You should remember dealing with this problem from the word2vec lesson. Instead of one-hot encoding, we can have an embedding layer and use that layer as a lookup table. You could train an embedding layer using word2vec, then load it here. But, it's fine to just make a new layer and let the network learn the weights.
Exercise: Create the embedding lookup matrix as a tf.Variable. Use that embedding matrix to get the embedded vectors to pass to the LSTM cell with tf.nn.embedding_lookup. This function takes the embedding matrix and an input tensor, such as the review vectors. Then, it'll return another tensor with the embedded vectors. So, if the embedding layer has 200 units, the function will return a tensor with size [batch_size, 200].
End of explanation
with graph.as_default():
with tf.name_scope('LSTM'):
# Your basic LSTM cell
lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
# Add dropout to the cell
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
# Stack up multiple LSTM layers, for deep learning
cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)
# Getting an initial state of all zeros
initial_state = cell.zero_state(batch_size, tf.float32)
Explanation: LSTM cell
<img src="assets/network_diagram.png" width=400px>
Next, we'll create our LSTM cells to use in the recurrent network (TensorFlow documentation). Here we are just defining what the cells look like. This isn't actually building the graph, just defining the type of cells we want in our graph.
To create a basic LSTM cell for the graph, you'll want to use tf.contrib.rnn.BasicLSTMCell. Looking at the function documentation:
tf.contrib.rnn.BasicLSTMCell(num_units, forget_bias=1.0, input_size=None, state_is_tuple=True, activation=<function tanh at 0x109f1ef28>)
you can see it takes a parameter called num_units, the number of units in the cell, called lstm_size in this code. So then, you can write something like
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
to create an LSTM cell with num_units. Next, you can add dropout to the cell with tf.contrib.rnn.DropoutWrapper. This just wraps the cell in another cell, but with dropout added to the inputs and/or outputs. It's a really convenient way to make your network better with almost no effort! So you'd do something like
drop = tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob)
Most of the time, your network will have better performance with more layers. That's sort of the magic of deep learning: adding more layers allows the network to learn really complex relationships. Again, there is a simple way to create multiple layers of LSTM cells with tf.contrib.rnn.MultiRNNCell:
cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)
Here, [drop] * lstm_layers creates a list of cells (drop) that is lstm_layers long. The MultiRNNCell wrapper builds this into multiple layers of RNN cells, one for each cell in the list.
So the final cell you're using in the network is actually multiple (or just one) LSTM cells with dropout. But it all works the same from an architectural viewpoint, just a more complicated graph in the cell.
Exercise: Below, use tf.contrib.rnn.BasicLSTMCell to create an LSTM cell. Then, add drop out to it with tf.contrib.rnn.DropoutWrapper. Finally, create multiple LSTM layers with tf.contrib.rnn.MultiRNNCell.
Here is a tutorial on building RNNs that will help you out.
End of explanation
with graph.as_default():
with tf.name_scope('LSTM'):
outputs, final_state = tf.nn.dynamic_rnn(cell, embed, initial_state=initial_state)
Explanation: RNN forward pass
<img src="assets/network_diagram.png" width=400px>
Now we need to actually run the data through the RNN nodes. You can use tf.nn.dynamic_rnn to do this. You'd pass in the RNN cell you created (our multiple layered LSTM cell for instance), and the inputs to the network.
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, initial_state=initial_state)
Above I created an initial state, initial_state, to pass to the RNN. This is the cell state that is passed between the hidden layers in successive time steps. tf.nn.dynamic_rnn takes care of most of the work for us. We pass in our cell and the input to the cell, then it does the unrolling and everything else for us. It returns outputs for each time step and the final_state of the hidden layer.
Exercise: Use tf.nn.dynamic_rnn to add the forward pass through the RNN. Remember that we're actually passing in vectors from the embedding layer, embed.
End of explanation
with graph.as_default():
with tf.name_scope('Prediction'):
predictions = tf.contrib.layers.fully_connected(outputs[:, -1], 1, activation_fn=tf.sigmoid)
with tf.name_scope('Loss'):
cost = tf.losses.mean_squared_error(labels_, predictions)
tf.summary.scalar('cost', cost)
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)
Explanation: Output
We only care about the final output; we'll be using that as our sentiment prediction. So we need to grab the last output with outputs[:, -1], then calculate the cost from that and labels_.
End of explanation
with graph.as_default():
with tf.name_scope('Accuracy'):
correct_pred = tf.equal(tf.cast(tf.round(predictions), tf.int32), labels_)
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
tf.summary.scalar('accuracy',accuracy)
Explanation: Validation accuracy
Here we can add a few nodes to calculate the accuracy which we'll use in the validation pass.
End of explanation
def get_batches(x, y, batch_size=100):
n_batches = len(x)//batch_size
x, y = x[:n_batches*batch_size], y[:n_batches*batch_size]
for ii in range(0, len(x), batch_size):
yield x[ii:ii+batch_size], y[ii:ii+batch_size]
Explanation: Batching
This is a simple function for returning batches from our data. First it removes data such that we only have full batches. Then it iterates through the x and y arrays and returns slices out of those arrays with size [batch_size].
End of explanation
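# A quick usage check of the batching helper above. train_x/train_y come from the split
# further up; the shapes printed below assume batch_size=500 and seq_len=200.
x_batch, y_batch = next(get_batches(train_x, train_y, batch_size))
print(x_batch.shape, y_batch.shape)   # expected roughly: (500, 200) (500,)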
train_y.mean()
epochs = 20
with graph.as_default():
merged = tf.summary.merge_all()
saver = tf.train.Saver()
with tf.Session(graph=graph) as sess:
sess.run(tf.global_variables_initializer())
train_writer = tf.summary.FileWriter('./logs/{}/train'.format(run_number), sess.graph)
val_writer = tf.summary.FileWriter('./logs/{}/val'.format(run_number))
tf.contrib.tensorboard.plugins.projector.visualize_embeddings(val_writer, embedding_config)
iteration = 1
for e in range(epochs):
state = sess.run(initial_state)
for ii, (x, y) in enumerate(get_batches(train_x, train_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 0.5,
initial_state: state}
summary, loss, state, _ = sess.run([merged, cost, final_state, optimizer],
feed_dict=feed)
train_writer.add_summary(summary, iteration)
if iteration%5==0:
print("Epoch: {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Train loss: {:.3f}".format(loss))
if iteration%25==0:
val_acc = []
val_state = sess.run(cell.zero_state(batch_size, tf.float32))
for x, y in get_batches(val_x, val_y, batch_size):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: val_state}
summary, batch_acc, val_state = sess.run([merged, accuracy, final_state],
feed_dict=feed)
val_acc.append(batch_acc)
val_writer.add_summary(summary, iteration)
saver.save(sess, './logs/{}/model.ckpt'.format(run_number), iteration)
print("Val acc: {:.3f}".format(np.mean(val_acc)))
iteration +=1
saver.save(sess, "checkpoints/sentiment.ckpt")
train_writer.flush()
val_writer.flush()
Explanation: Training
Below is the typical training code. If you want to do this yourself, feel free to delete all this code and implement it yourself. Before you run this, make sure the checkpoints directory exists.
End of explanation
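# The training loop above saves the final model to "checkpoints/"; a minimal sketch to make
# sure that directory exists before running it (re-importing os is harmless here):
import os
os.makedirs('checkpoints', exist_ok=True)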
test_acc = []
with tf.Session(graph=graph) as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
test_state = sess.run(cell.zero_state(batch_size, tf.float32))
for ii, (x, y) in enumerate(get_batches(test_x, test_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: test_state}
batch_acc, test_state = sess.run([accuracy, final_state], feed_dict=feed)
test_acc.append(batch_acc)
print("Test accuracy: {:.3f}".format(np.mean(test_acc)))
Explanation: Testing
End of explanation
print(features.shape)
def TestSomeText(text):
text = text.lower()
delete = ['.','!',',','"',"'",'\n']
for d in delete:
text = text.replace(d," ")
text_ints = [vocab_to_int[word] for word in text.split(' ') if word in vocab_to_int]
print(len(text_ints))
text_ints = text_ints[:seq_len]
#print(text_ints)
#text_features = np.zeros((batch_size,seq_len))
text_features = np.array([([0] * (seq_len-len(text_ints))) + text_ints] * batch_size)
#print(text_features)
#print(text_features.shape)
with tf.Session(graph=graph) as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
test_state = sess.run(cell.zero_state(batch_size, tf.float32))
for i in range(1):
feed = {inputs_: text_features,
labels_: [[0]]*batch_size,
keep_prob: 1,
initial_state: test_state}
pred, mycost, test_state = sess.run([predictions, accuracy, final_state], feed_dict=feed)
return pred[0,0]
TestSomeText("highly underrated movie")
#pred[0,0]
TestSomeText('overrated movie')
TestSomeText(I ve been looking forward to a viking film or TV series for many years
and when my wishes were finally granted, I was very worried that this production
was going to be total crap. After viewing the first two episodes I do not worry
about that anymore. Thank you, Odin
As a person of some historical knowledge of the viking era, I can point out numerous
flaws - but they don't ruin the story for me, so I will let them slip. Historical
accounts about those days are, after all, not entirely reliable.
Happy to see Travis Fimmel in a role that totally suits him. A physical and intense
character, with that spice of humor that is the viking trademark from the sagas.
Gabriel Byrne plays a stern leader, that made me think of him in "Prince of Jutland",
and Clive Standen seems like he's going to surprise us.
Been pondering the Game of Thrones comparison, since I love that show too, but in my
opinion Vikings has its own thing going on. Way fewer lead characters to begin with,
and also a more straight forward approach. Plenty of room for more series with this
high class!
Can I wish for more than the planned nine episodes, PLEASE!!!)
TestSomeText(vikings)
delete = ['.','!',',','"',"'",'\n']
for d in delete:
text = text.replace(d," ")
TestSomeText(Pirates of the Caribbean has always been a franchise that makes no attempt for Oscar worthy material but in its own way is massively enjoyable.
Pirates of the Caribbean: Dead Men Tell No Tales certainly embraces the aspects of the original movie while also incorporating new plot lines that fit in well with plots from the original story. With the introduction of Henry and Karina there is a new love interest that is provided to the audience that rivals that of Will and Elizabeth Turner's.
Henry Turner is portrayed as an almost exact copy of his father except just a teensy bit worse at sword fighting while Karina differs from the usual women as she remains just as important, if not more, as Henry as she guides the course towards Posiedon's trident.
Jack Sparrow is entertaining as always with his usual drunk characteristics. For those of you who are tired of Sparrow acting this way Don't SEE THE MOVIE Jack sparrow isn't going to change because it doesn't make sense for his character to suddenly change.
All together the movie was expertly written and expertly performed by the entire cast even Kiera Knightely who didn't manage to get one word throughout the whole movie. I know as a major fan of the Pirates of the Caribbean I can't wait to see what happens for the future of the franchise.
)
pred,mycost = TestSomeText(text)
pred[0,0]
TestSomeText(If your child is a fan of the Wimpy Kid series, they'll no doubt enjoy this one, it's entertaining and lowbrow enough to also appease the moodiest of teens and grumpiest adults.)
TestSomeText(At first I thought the film was going to be just a normal thriller but it turned out to be a thousand times better than I expected. The film is truly original and was so dark & sinister that gives the tensive mood also it is emotionally & psychologically thrilling, the whole movie is charged with pulse pounding suspense and seems like it's really happening. It's amazing that how they managed to make an 80 minute movie with just a guy in a phone booth but the full credit goes to Colin Farrell and Larry Cohen the writer not Joel Schumacher because he is a crappy director. Joel Schumacher's films are rubbish especially The Number 23, Phone Booth was shot in 10 days with a budget of $10 million so it wasn't a hard job to make it, that's why Joel doesn't get any credit but the cast & crew did a fantastic job. I also really liked the raspberry coloured shirt Colin was wearing and it was an excellent choice of clothing because the viewers are going to watch him throughout the whole film. When I first saw the movie I fell in love with it and I bought it on DVD the next day and I've seen it about 20 times and I'm still not fed up with it. Phone Booth is and always will be Colin Farrell's best film! Overall it is simply one of my favourite films and I even argued over my friend because he didn't like it.
)
delete = ['.','!',',','"',"'",'\n']
for d in delete:
text = text.replace(d," ")
text
pred,mycost = TestSomeText(text)
pred[0,0]
TestSomeText(There are few quality movies or series about the Vikings, and this series is outstanding and well worth waiting for. Not only is Vikings a series that is a joy to watch, it is also a series that is easy to recommend. I personally feel that the creator and producers did a fine job of giving the viewer quality material. Now, there are a few inconsistencies with the series, most notably would be the idea that Vikings had very little knowledge of other European countries and were amazed by these people across the big waters. In reality Vikings engaged in somewhat normal commercial activities with other Anglo-Saxons, so the idea that Vikings were as amazed as they seemed when they realize that other people were out there is not that realistic. However, it is this small inconsistency that goes a long way in holding the premise together. I simply love the series and would recommend it to anyone wanting to watch a quality show.)
delete = ['.','!',',','"',"'",'\n']
for d in delete:
text = text.replace(d," ")
pred,mycost = TestSomeText(text)
pred[0,0]
TestSomeText(This movie didn't feel any different from the other anime movies out there. Sure, the sibling dynamics were good, as well as the family values, the childhood memories and older brother anxiety. The main idea was interesting, with the new baby seeming rather like a boss sent into the family to spy on the parents and solve a big problem for his company. You can't help but identify with the older kid, especially if you have younger siblings. But eventually, the action was a bit main stream. The action scenes were not original and kind of boring. Other than that, the story became a little complicated when you start to think about what's real and what's not. The narration was good and the animation was nice, with the cute babies and puppies. So, 4 out of 10.
)
delete = ['.','!',',','"',"'",'\n']
for d in delete:
text = text.replace(d," ")
text
pred,mycost = TestSomeText(text)
pred[0,0]
TestSomeText('seriously awesome movie')
Explanation: Test accuracy: 0.748
Test accuracy: 0.784
End of explanation
<END_TASK> |
15,932 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Workshop – Introduction to Neural Networks using TensorFlow and Keras
http
Step1: How does Tensorflow Low Level API look like?
Step2: Interactive usage of Low Level API
Step3: Calling a TensorFlow Model deployed on Google Cloud ML Service
Step4: What can Keras do for us?
Abstracting from defining operations on tensors
Just define the network architecture and you are done
Keras compile the description of the model down to TensorFlow operations
Many other backends are supported
https | <ASSISTANT_TASK:>
Python Code:
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
%pylab inline
from distutils.version import StrictVersion
import sklearn
print(sklearn.__version__)
assert StrictVersion(sklearn.__version__ ) >= StrictVersion('0.18.1')
import tensorflow as tf
tf.logging.set_verbosity(tf.logging.ERROR)
print(tf.__version__)
assert StrictVersion(tf.__version__) >= StrictVersion('1.1.0')
import keras
print(keras.__version__)
assert StrictVersion(keras.__version__) >= StrictVersion('2.0.0')
import pandas as pd
print(pd.__version__)
assert StrictVersion(pd.__version__) >= StrictVersion('0.20.0')
Explanation: Workshop – Introduction to Neural Networks using TensorFlow and Keras
http://www.mcubed.london/sessions/workshop-introduction-to-neural-networks-using-tensorflow-and-keras/
<h1 style="color:red">Executing these notebooks on Azure is fine, however, for improved performance and reliability it is best to run it on a local machine or, ideally, on more powerful GPU hardware</h1>
<h2 style="color:red">You can download a complete library from Azure Notebooks as a zip</h2>
(local) installation goes like this:
Install Anaconda 3 for Python 3.6: https://www.anaconda.com/download/
Install TensorFlow for Anaconda: https://www.tensorflow.org/install/install_linux#InstallingAnaconda
e.g. like pip install --ignore-installed --upgrade https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.3.0-cp36-cp36m-linux_x86_64.whl
Install Keras using pip: pip install keras --upgrade
<h2 style="color:red">If all checks below pass and you can download the training material you are good to go, no matter what platform you are on</h2>
Notebook Basics
This is a Jupyter Notebook
You type commands which will be executed on a server (possibly remote)
A notebook saves all output of the commands executed, thus you can view it including results without executing it
Before using a notebook to type or edit commands make sure you execute all code
Do this by selecting Kernel, Restart & Run All in the menu
Tips
This is your personal notebook, add Python comments or markdown to take notes
Executing Cells
Everthing can be down using the Menu and Toolbar
Using the keyboard you can be faster, click on the Keyboard Symbol in the toolbar to learn about more short cuts
The most important shortcut is Shit+Enter which executes a cell and navigates to the next one
Preparations
Import and check required dependencies
End of explanation
# graph definition
matrix1 = tf.constant([[3., 3.]])
matrix2 = tf.constant([[2.],[2.]])
product = tf.matmul(matrix1, matrix2)
# launching the graph in a session
with tf.Session() as sess:
result = sess.run([product])
print(result)
Explanation: How does Tensorflow Low Level API look like?
End of explanation
sess = tf.InteractiveSession()
x = tf.Variable([1.0, 2.0])
a = tf.constant([3.0, 3.0])
# Initialize 'x' using the run() method of its initializer op.
x.initializer.run()
# Add an op to subtract 'a' from 'x'. Run it and print the result
sub = tf.subtract(x, a)
print(sub.eval())
# ==> [-2. -1.]
# Close the Session when we're done.
sess.close()
Explanation: Interactive usage of Low Level API
End of explanation
!cat sample_iris.json
# Example for iris, model exported as Tensorflow
# gsutil cp -R 1 gs://irisnn
# create model and version at https://console.cloud.google.com/mlengine
# in a DOS shell on local machine in this folder
# gcloud ml-engine predict --model=irisnn --json-instances=./sample_iris.json
# SCORES
# [0.9954029321670532, 0.004596732556819916, 3.3544753819114703e-07]
Explanation: Calling a TensorFlow Model deployed on Google Cloud ML Service
End of explanation
!curl -O https://raw.githubusercontent.com/DJCordhose/speed-limit-signs/master/data/speed-limit-signs.zip
from zipfile import ZipFile
zip = ZipFile(r'speed-limit-signs.zip')
zip.extractall('.')
# Calling a Keras based model to classify speed limit signs
!curl -H "Content-Type: application/json" -X GET -d '{"url": "https://github.com/DJCordhose/speed-limit-signs/raw/master/data/real-world/4/100-sky-cutoff-detail.jpg", "model": "default" }' http://ec2-52-43-39-37.us-west-2.compute.amazonaws.com:8888
Explanation: What can Keras do for us?
Abstracting from defining operations on tensors
Just define the network architecture and you are done
Keras compile the description of the model down to TensorFlow operations
Many other backends are supported
https://github.com/fchollet/keras#getting-started-30-seconds-to-keras
Our task for today
Train a system to classify speed limit signs:
* We have a pretty realistic example
* https://twitter.com/art_sobolev/status/907857395757481985?s=03: I don't think it's fine to only list experiments on toy datasets, and hide failures on more complicated cases.
* we need to find ways to keep our models simple to let them train in a few minutes
Getting the data
End of explanation
<END_TASK> |
15,933 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Air quality measured in the Châtelet station
Data source used (open data CSV files)
Step1: General information about the DataFrame
Step2: Analysis of fine particle (PM10) concentration | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
#%matplotlib notebook
import matplotlib
matplotlib.rcParams['figure.figsize'] = (9, 9)
import matplotlib.pyplot as plt   # plt is used below for fill_between and axis labels
import numpy as np                # conv_func below returns np.nan for missing readings
import pandas as pd
def conv_func(s):
s = s.replace('<', '')
if s == 'ND':
return np.nan
elif s.strip() == '':
return np.nan
else:
return float(s)
url = "https://data.iledefrance.fr/explore/dataset/qualite-de-lair-mesuree-dans-la-station-chatelet/download/?format=csv&timezone=Europe/Berlin&use_labels_for_header=true"
#dtype_dict = {'NO': np.float64,
# 'NO2': np.float64,
# 'PM10': np.float64,
# 'CO2': np.float64,
# 'TEMP': np.float64,
# 'HUMI': np.float64}
converter_dict = {'NO': conv_func,
'NO2': conv_func,
'PM10': conv_func,
'CO2': conv_func,
'TEMP': conv_func,
'HUMI': conv_func}
df = pd.read_csv(url,
#encoding='iso-8859-1',
index_col=0,
sep=';',
decimal=',',
parse_dates=["DATE/HEURE"],
#dtype=dtype_dict,
#na_values='ND',
converters=converter_dict)
df = df.sort_index()
Explanation: Air quality measured in the Châtelet station
Data source used (open data CSV files): https://data.iledefrance.fr/explore/dataset/qualite-de-lair-mesuree-dans-la-station-chatelet/
Retrieving the data
End of explanation
df.head()
df.columns
df.dtypes
df.index
Explanation: General information about the DataFrame
End of explanation
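# Since 'ND' and blank readings were converted to NaN by conv_func above, a quick check of
# the share of missing values per column before analysing PM10:
df.isnull().mean().sort_values(ascending=False)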
df.PM10.plot(figsize=(18,6));
df.PM10.resample('7D').mean().plot(figsize=(18,6));
df.PM10.rolling('7D').mean().plot(figsize=(18,6));
df.PM10.resample('1M').mean().plot(figsize=(18,6));
ts = df.PM10
# https://jakevdp.github.io/PythonDataScienceHandbook/03.11-working-with-time-series.html#Digging-into-the-data
ts_mean = ts.groupby(ts.index.time).mean()
ts_median = ts.groupby(ts.index.time).median()
ts_quartile_1 = ts.groupby(ts.index.time).quantile(0.25)
ts_quartile_3 = ts.groupby(ts.index.time).quantile(0.75)
ts_percentile_5 = ts.groupby(ts.index.time).quantile(0.05)
ts_percentile_95 = ts.groupby(ts.index.time).quantile(0.95)
ts_min = ts.groupby(ts.index.time).min()
ts_max = ts.groupby(ts.index.time).max()
color = "blue"
ax = ts_mean.plot(y='duration', figsize=(18, 12), color=color, label="mean", alpha=0.75)
ts_median.plot(ax=ax, color=color, label="median", style="--", alpha=0.75)
ts_quartile_1.plot(ax=ax, color=color, alpha=0.5, style="-.", label="1st quartile")
ts_quartile_3.plot(ax=ax, color=color, alpha=0.5, style="-.", label="3rd quartile")
ts_percentile_5.plot(ax=ax, color=color, alpha=0.25, style=":", label="5th percentile")
ts_percentile_95.plot(ax=ax, color=color, alpha=0.25, style=":", label="95th percentile")
ts_min.plot(ax=ax, color=color, alpha=0.2, style=":", label="min")
ts_max.plot(ax=ax, color=color, alpha=0.2, style=":", label="max")
plt.fill_between(ts_percentile_5.index, ts_percentile_5.values, ts_percentile_95.values, facecolor=color, alpha=0.1)
plt.fill_between(ts_quartile_1.index, ts_quartile_1.values, ts_quartile_3.values, facecolor=color, alpha=0.1)
ts = df.TEMP
ax2 = ax.twinx()
# https://jakevdp.github.io/PythonDataScienceHandbook/03.11-working-with-time-series.html#Digging-into-the-data
ts_mean = ts.groupby(ts.index.time).mean()
ts_median = ts.groupby(ts.index.time).median()
ts_quartile_1 = ts.groupby(ts.index.time).quantile(0.25)
ts_quartile_3 = ts.groupby(ts.index.time).quantile(0.75)
ts_percentile_5 = ts.groupby(ts.index.time).quantile(0.05)
ts_percentile_95 = ts.groupby(ts.index.time).quantile(0.95)
ts_min = ts.groupby(ts.index.time).min()
ts_max = ts.groupby(ts.index.time).max()
color = "red"
ax2 = ts_mean.plot(y='duration', figsize=(18, 12), color=color, label="mean", alpha=0.75)
ts_median.plot(ax=ax2, color=color, label="median", style="--", alpha=0.75)
ts_quartile_1.plot(ax=ax2, color=color, alpha=0.5, style="-.", label="1st quartile")
ts_quartile_3.plot(ax=ax2, color=color, alpha=0.5, style="-.", label="3rd quartile")
ts_percentile_5.plot(ax=ax2, color=color, alpha=0.25, style=":", label="5th percentile")
ts_percentile_95.plot(ax=ax2, color=color, alpha=0.25, style=":", label="95th percentile")
ts_min.plot(ax=ax2, color=color, alpha=0.2, style=":", label="min")
ts_max.plot(ax=ax2, color=color, alpha=0.2, style=":", label="max")
plt.fill_between(ts_percentile_5.index, ts_percentile_5.values, ts_percentile_95.values, facecolor=color, alpha=0.1)
plt.fill_between(ts_quartile_1.index, ts_quartile_1.values, ts_quartile_3.values, facecolor=color, alpha=0.1)
ax.legend(loc='upper left')
ax2.legend(loc='upper right');
ax.set_xlabel('Time')
ax.set_ylabel('PM10');
ax2.set_ylabel('Temperature');
ax = df.PM10.groupby(df.index.time).mean().plot(figsize=(18,6), color="blue")
ax.set_xlabel("Time")
ax2 = ax.twinx()
df.TEMP.groupby(df.index.time).mean().plot(ax=ax2, color="red")
ax.legend(loc='upper left')
ax2.legend(loc='upper right');
ax = df.PM10.groupby(df.index.weekday).mean().plot(figsize=(18,6), color="blue")
ax.set_xlabel("Weekday")
ax2 = ax.twinx()
df.TEMP.groupby(df.index.weekday).mean().plot(ax=ax2, color="red")
ax.legend(loc='upper left')
ax2.legend(loc='upper right');
ax = df.PM10.groupby(df.index.month).mean().plot(figsize=(18,6), color="blue")
ax.set_xlabel("Month")
ax2 = ax.twinx()
df.TEMP.groupby(df.index.month).mean().plot(ax=ax2, color="red")
ax.legend(loc='upper left')
ax2.legend(loc='upper right');
Explanation: Analysis of fine particle (PM10) concentration
End of explanation
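# A hedged follow-up: count days whose mean PM10 exceeds 50 (the usual EU daily limit value
# in µg/m³ — treat the exact threshold as an assumption here):
daily_mean = df.PM10.resample('1D').mean()
print((daily_mean > 50).sum(), "days above 50 µg/m³ out of", daily_mean.notnull().sum())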
<END_TASK> |
15,934 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Factorial HMM
Example synthetic data
Step1: Test out learned distribution inside of SMC
We'll compare it against a baseline of "bootstrap" SMC, which proposes from the transition dynamics of the individual HMMs.
Step2: Look at rate of path coalescence | <ASSISTANT_TASK:>
Python Code:
# Imports assumed from an earlier setup cell that is not shown here: the local helper
# modules factorial_hmm and cde, the usual scientific stack, and a USE_GPU flag
# (defined below as a plain assumption).
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import torch
import factorial_hmm
import cde
USE_GPU = torch.cuda.is_available()

devices = factorial_hmm.gen_devices()
T = 50
np.random.seed(20)
X, Y = factorial_hmm.gen_dataset(devices, T)
plt.figure(figsize=(15,3.5))
plt.plot(Y)
plt.figure(figsize=(15,10))
plt.imshow((X*devices).T, interpolation='None', aspect=1);
plt.yticks(np.arange(len(devices)), devices);
print len(devices), 2**len(devices)
trace_train = []
trace_validation = []
dist_est = cde.ConditionalBinaryMADE(len(devices)+1, len(devices), H=300, num_layers=4)
if USE_GPU:
dist_est.cuda()
dist_est.load_state_dict(torch.load('../saved/trained_hmm_params.rar'))
Explanation: Factorial HMM
Example synthetic data: 20 different "devices", each with different power consumptions, turning on and off following separate Markov models
End of explanation
X_hat_bootstrap, ancestry_bootstrap, ESS_bootstrap = \
factorial_hmm.run_smc(devices, Y, 500, factorial_hmm.baseline_proposal, verbose=False)
Y_hat_bootstrap = np.dot(X_hat_bootstrap, devices)
nn_proposal = factorial_hmm.make_nn_proposal(dist_est)
X_hat_nn, ancestry_nn, ESS_nn = \
factorial_hmm.run_smc(devices, Y, 500, nn_proposal, verbose=False)
Y_hat_nn = np.dot(X_hat_nn, devices)
plt.hist(ESS_bootstrap, histtype='stepfilled', linewidth=2, alpha=0.5, bins=20,edgeColor='k')
plt.hist(ESS_nn, histtype='stepfilled', linewidth=2, alpha=0.5, bins=20,edgeColor='k')
plt.xlim([0,plt.xlim()[1]])
plt.legend(['bootstrap', 'nnsmc'])
plt.title('Histogram of effective sample size of SMC filtering distribution');
plt.figure(figsize=(16,4))
plt.title('Ancestral paths for bootstrap proposals (blue) and nn (green)')
plt.plot(ancestry_bootstrap.T, color=sns.color_palette()[0]);
plt.plot(ancestry_nn.T, color=sns.color_palette()[1]);
plt.ylim(0,ancestry_nn.shape[0])
plt.xlim(0,T-1);
plt.figure(figsize=(14,3.25))
plt.plot(np.dot(X_hat_nn, devices).T, color=sns.color_palette()[1], alpha=0.1)
plt.plot(np.arange(len(Y)), Y,'k--')
plt.xlim([0,T-1])
plt.xlabel('Time step')
plt.ylabel('Total energy usage')
Explanation: Test out learned distribution inside of SMC
We'll compare it against a baseline of "bootstrap" SMC, which proposes from the transition dynamics of the individual HMMs.
End of explanation
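# For reference, a hedged sketch of how an effective sample size like the ESS returned by
# factorial_hmm.run_smc is conventionally computed from normalized particle weights
# (ESS = 1 / sum(w_i^2)); the actual implementation lives inside the helper module.
def ess_from_weights(weights):
    w = weights / weights.sum()
    return 1.0 / np.sum(w ** 2)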
ANC_PRIOR = []
ANC_NN = []
def count_uniques(ancestry):
K, T = ancestry.shape
counts = np.empty((T,), dtype=int)
for t in xrange(T):
counts[t] = len(np.unique(ancestry[:,t]))
return counts
def run_iter():
X,Y = factorial_hmm.gen_dataset(devices, T=30)
X_particles_baseline, ancestry_baseline, _ = \
factorial_hmm.run_smc(devices, Y, 100, factorial_hmm.baseline_proposal, verbose=False)
print "smc complete"
X_particles, ancestry_nnsmc, _ = \
factorial_hmm.run_smc(devices, Y, 500, nn_proposal, verbose=False)
print "nn complete"
ANC_PRIOR.append(count_uniques(ancestry_baseline))
ANC_NN.append(count_uniques(ancestry_nnsmc))
return X,Y
for i in xrange(10):
print "iteration", i+1
X_tmp, Y_tmp = run_iter()
plt.figure(figsize=(8,3.5))
plt.plot(np.arange(len(X_tmp)), np.mean(ANC_PRIOR, 0));
plt.plot(np.arange(len(X_tmp)), np.mean(ANC_NN, 0));
plt.legend(['Bootstrap SMC', 'NN-SMC'], loc='upper left')
pm = np.mean(ANC_PRIOR, 0)
psd = np.std(ANC_PRIOR, 0)
safe_lb = (pm - psd) * (pm - psd > 1.0) + (pm - psd <= 1.0)
plt.fill_between(np.arange(len(X_tmp)), safe_lb, pm+psd, alpha=0.25, color=sns.color_palette()[0]);
pm = np.mean(ANC_NN, 0)
psd = np.std(ANC_NN, 0)
plt.fill_between(np.arange(len(X_tmp)), pm-psd, pm+psd, alpha=0.25, color=sns.color_palette()[1]);
plt.semilogy();
plt.xlabel('Time step')
plt.ylabel('Surviving paths')
plt.ylim(1, 100)
plt.xlim(0, len(X_tmp)-1)
plt.tight_layout()
Explanation: Look at rate of path coalescence
End of explanation
<END_TASK> |
15,935 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Gaussian Process Regression in Pytorch
Thomas Viehmann, tv@lernapparat.de
Modelled after GPFlow Regression notebook by James Hensman
Step1: Let's have a regression example
Step2: Creating the model
Not adapted to the data yet...
Step3: Maximum-A-Posteriori
One commonly used approach to model selection is to maximize the marginal log likelihood. This is the "gp" equivalent of a maximum-likelihood estimate.
Step4: Hamiltonian Monte Carlo
We can go more Bayesian by putting a prior on the parameters and do Hamiltonian Monte Carlo to draw parameters.
Step5: Plotting simulated functions
(Note that the simulations are for the de-noised functions - i.e. without the noise contribution of the likelihood.)
Step6: Sparse Regression | <ASSISTANT_TASK:>
Python Code:
from matplotlib import pyplot
%matplotlib inline
import IPython
import torch
import numpy
import sys, os
sys.path.append(os.path.join(os.getcwd(),'..'))
pyplot.style.use('ggplot')
import candlegp
import candlegp.training.hmc
Explanation: Gaussian Process Regression in Pytorch
Thomas Viehmann, tv@lernapparat.de
Modelled after GPFlow Regression notebook by James Hensman
End of explanation
N = 12
X = torch.rand(N,1).double()
Y = (torch.sin(12*X) + 0.6*torch.cos(25*X) + torch.randn(N,1).double()*0.1+3.0).squeeze(1)
pyplot.figure()
pyplot.plot(X.numpy(), Y.numpy(), 'kx', mew=2)
Explanation: Let's have a regression example
End of explanation
k = candlegp.kernels.Matern52(1, lengthscales=torch.tensor([0.3], dtype=torch.double),
variance=torch.tensor([1.0], dtype=torch.double))
mean = candlegp.mean_functions.Linear(torch.tensor([1], dtype=torch.double), torch.tensor([0], dtype=torch.double))
m = candlegp.models.GPR(X, Y.unsqueeze(1), kern=k, mean_function=mean)
m.likelihood.variance.set(torch.tensor([0.01], dtype=torch.double))
m
xstar = torch.linspace(0,1,100).double()
mu, var = m.predict_y(xstar.unsqueeze(1))
cred_size = (var**0.5*2).squeeze(1)
mu = mu.squeeze(1)
pyplot.plot(xstar.numpy(),mu.data.numpy(),'b')
pyplot.fill_between(xstar.numpy(),mu.data.numpy()+cred_size.data.numpy(), mu.data.numpy()-cred_size.data.numpy(),facecolor='0.75')
pyplot.plot(X.numpy(), Y.numpy(), 'kx', mew=2)
Explanation: Creating the model
Not adapted to the data yet...
End of explanation
opt = torch.optim.LBFGS(m.parameters(), lr=1e-2, max_iter=40)
def eval_model():
obj = m()
opt.zero_grad()
obj.backward()
return obj
for i in range(50):
obj = m()
opt.zero_grad()
obj.backward()
opt.step(eval_model)
if i%5==0:
print(i,':',obj.item())
m
xstar = torch.linspace(0,1,100).double()
mu, var = m.predict_y(xstar.unsqueeze(1))
cred_size = (var**0.5*2).squeeze(1)
mu = mu.squeeze(1)
pyplot.plot(xstar.numpy(),mu.data.numpy(),'b')
pyplot.fill_between(xstar.numpy(),mu.data.numpy()+cred_size.data.numpy(), mu.data.numpy()-cred_size.data.numpy(),facecolor='0.75')
pyplot.plot(X.numpy(), Y.numpy(), 'kx', mew=2)
Explanation: Maximum-A-Posteriori
One commonly used approach to model selection is to maximize the marginal log likelihood. This is the "gp" equivalent of a maximum-likelihood estimate.
End of explanation
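# A hedged numpy sketch of the objective m() that LBFGS minimizes above: the negative log
# marginal likelihood of GP regression. candlegp computes this internally on torch tensors;
# the function below only spells out the standard formula for a generic kernel matrix K,
# targets y, and noise variance.
import numpy as np
def neg_log_marginal_likelihood(K, y, noise_var):
    n = len(y)
    Ky = K + noise_var * np.eye(n)
    alpha = np.linalg.solve(Ky, y)
    return 0.5 * y @ alpha + 0.5 * np.linalg.slogdet(Ky)[1] + 0.5 * n * np.log(2 * np.pi)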
k2 = candlegp.kernels.RBF(1, lengthscales=torch.tensor([0.3], dtype=torch.double),
variance=torch.tensor([1.0], dtype=torch.double))
mean2 = candlegp.mean_functions.Linear(torch.tensor([1], dtype=torch.double), torch.tensor([0], dtype=torch.double))
m2 = candlegp.models.GPR(X, Y.unsqueeze(1), kern=k2, mean_function=mean2)
m2.load_state_dict(m.state_dict())
dt = torch.double
m2.likelihood.variance.prior = candlegp.priors.Gamma(1.0,1.0, dtype=dt)
m2.kern.variance.prior = candlegp.priors.Gamma(1.0,1.0, dtype=dt)
m2.kern.lengthscales.prior = candlegp.priors.Gamma(1.0,1.0,dtype=dt)
m2.mean_function.A.prior = candlegp.priors.Gaussian(0.0,10.0, dtype=dt)
m2.mean_function.b.prior = candlegp.priors.Gaussian(0.0,10.0, dtype=dt)
print("likelihood with priors",m2().item())
m2
# res = candlegp.training.hmc.hmc_sample(m2,500,0.2,burn=50, thin=10)
res = candlegp.training.hmc.hmc_sample(m2,50,0.2,burn=50, thin=10)
pyplot.plot(res[0]); pyplot.title("likelihood");
for (n,p0),p,c in zip(m.named_parameters(),res[1:],['r','g','b','y','b']):
pyplot.plot(torch.stack(p).squeeze().numpy(), c=c, label=n)
pyplot.plot((0,len(p)),(p0.data.view(-1)[0],p0.data.view(-1)[0]), c=c)
pyplot.legend();
Explanation: Hamiltonian Monte Carlo
We can go more Bayesian by putting a prior on the parameters and running Hamiltonian Monte Carlo to draw parameters.
End of explanation
xstar = torch.linspace(0,1,100).double()
mc_params = torch.stack([torch.cat(p, dim=0).view(-1) for p in res[1:]], dim=1)
allsims = []
for ps in mc_params[:50]:
for mp, p in zip(m2.parameters(), ps):
with torch.no_grad():
mp.set(p)
allsims.append(m2.predict_f_samples(xstar.unsqueeze(1), 1).squeeze(0).t())
allsims = torch.cat(allsims, dim=0)
pyplot.plot(xstar.numpy(),allsims.data.numpy().T, 'b', lw=2, alpha=0.1)
mu, var = m.predict_y(xstar.unsqueeze(1))
cred_size = (var**0.5*2).squeeze(1)
mu = mu.squeeze(1)
pyplot.plot(xstar.numpy(),mu.data.numpy(),'b')
pyplot.fill_between(xstar.numpy(),mu.data.numpy()+cred_size.data.numpy(), mu.data.numpy()-cred_size.data.numpy(),facecolor='0.75')
pyplot.plot(X.numpy(), Y.numpy(), 'kx', mew=2)
Explanation: Plotting simulated functions
(Note that the simulations are for the de-noised functions - i.e. without the noise contribution of the likelihood.)
End of explanation
k3 = candlegp.kernels.RBF(1, lengthscales=torch.tensor([0.3], dtype=torch.double),
variance=torch.tensor([1.0], dtype=torch.double))
mean3 = candlegp.mean_functions.Linear(torch.tensor([1], dtype=torch.double), torch.tensor([0], dtype=torch.double))
m3 = candlegp.models.SGPR(X, Y.unsqueeze(1), k3, X[:7].clone(), mean_function=mean3)
m3.likelihood.variance.set(torch.tensor([0.01], dtype=torch.double))
m3
opt = torch.optim.LBFGS(m3.parameters(), lr=1e-2, max_iter=40)
def eval_model():
obj = m3()
opt.zero_grad()
obj.backward()
return obj
for i in range(50):
obj = m3()
opt.zero_grad()
obj.backward()
opt.step(eval_model)
if i%5==0:
print(i,':',obj.item())
m3
xstar = torch.linspace(0,1,100).double()
mu, var = m3.predict_y(xstar.unsqueeze(1))
cred_size = (var**0.5*2).squeeze(1)
mu = mu.squeeze(1)
pyplot.plot(xstar.numpy(),mu.data.numpy(),'b')
pyplot.fill_between(xstar.numpy(),mu.data.numpy()+cred_size.data.numpy(), mu.data.numpy()-cred_size.data.numpy(),facecolor='0.75')
pyplot.plot(X.numpy(), Y.numpy(), 'kx', mew=2)
pyplot.plot(m3.Z.data.numpy(), torch.zeros(m3.Z.size(0)).numpy(),'o')
Explanation: Sparse Regression
End of explanation
<END_TASK> |
15,936 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Basic Principles of Machine Learning
Here we'll dive into the basic principles of machine learning, and how to
utilize them via the Scikit-Learn API.
After briefly introducing scikit-learn's Estimator object, we'll cover supervised learning, including classification and regression problems, and unsupervised learning, including dimensionality reduction and clustering problems.
Step1: The Scikit-learn Estimator Object
Every algorithm is exposed in scikit-learn via an ''Estimator'' object. For instance a linear regression is implemented as so
Step2: Estimator parameters
Step3: Estimated Model parameters
Step4: The model found a line with a slope 2 and intercept 1, as we'd expect.
Supervised Learning
Step5: You can also do probabilistic predictions
Step6: Exercise
Use a different estimator on the same problem
Step7: As above, we can plot a line of best fit
Step8: Scikit-learn also has some more sophisticated models, which can respond to finer features in the data
Step9: Whether either of these is a "good" fit or not depends on a number of things; we'll discuss details of how to choose a model later in the tutorial.
Exercise
Explore the RandomForestRegressor object using IPython's help features (i.e. put a question mark after the object).
What arguments are available to RandomForestRegressor?
How does the above plot change if you change these arguments?
These class-level arguments are known as hyperparameters, and we will discuss later how you to select hyperparameters in the model validation section.
Unsupervised Learning
Step10: Clustering
Step11: Recap
Step12: A more useful way to look at the results is to view the confusion matrix, or the matrix showing the frequency of inputs and outputs
Step13: For each class, all 50 training samples are correctly identified. But this does not mean that our model is perfect! In particular, such a model generalizes extremely poorly to new data. We can simulate this by splitting our data into a training set and a testing set. Scikit-learn contains some convenient routines to do this
Step14: This paints a better picture of the true performance of our classifier
Step15: Original source on the scikit-learn website
Quick Application
Step16: Let's plot a few of these
Step17: Here the data is simply each pixel value within an 8x8 grid
Step18: So our data have 1797 samples in 64 dimensions.
Unsupervised Learning
Step19: We see here that the digits are fairly well-separated in the parameter space; this tells us that a supervised classification algorithm should perform fairly well. Let's give it a try.
Classification on Digits
Let's try a classification task on the digits. The first thing we'll want to do is split the digits into a training and testing sample
Step20: Let's use a simple logistic regression which (despite its confusing name) is a classification algorithm
Step21: We can check our classification accuracy by comparing the true values of the test set to the predictions
Step22: This single number doesn't tell us where we've gone wrong
Step23: We might also take a look at some of the outputs along with their predicted labels. We'll make the bad labels red | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
Explanation: Basic Principles of Machine Learning
Here we'll dive into the basic principles of machine learning, and how to
utilize them via the Scikit-Learn API.
After briefly introducing scikit-learn's Estimator object, we'll cover supervised learning, including classification and regression problems, and unsupervised learning, including dimensionality reduction and clustering problems.
End of explanation
from sklearn.linear_model import LinearRegression
Explanation: The Scikit-learn Estimator Object
Every algorithm is exposed in scikit-learn via an ''Estimator'' object. For instance, a linear regression is implemented as follows:
End of explanation
model = LinearRegression(normalize=True)
print(model.normalize)
print(model)
model2 = LinearRegression()
model?
Explanation: Estimator parameters: All the parameters of an estimator can be set when it is instantiated, and have suitable default values:
End of explanation
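A small hedged sketch (an addition to the original): every estimator also exposes get_params and set_params for inspecting and changing these hyperparameters after instantiation.
# inspect the current hyperparameters as a plain dict
print(model.get_params())
# change a hyperparameter in place; set_params returns the estimator itself
model.set_params(fit_intercept=True)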
x = np.arange(10)
y = 2 * x + 1
print(x)
print(y)
plt.plot(x, y, 'o');
# The input data for sklearn is 2D: (samples == 10 x features == 1)
X = x[:, np.newaxis]
print(X)
print(y)
# fit the model on our data
model.fit(X, y)
# underscore at the end indicates a fit parameter
print(model.coef_)
print(model.intercept_)
# residual error around fit
# (note: `residues_` was deprecated and later removed in newer scikit-learn releases)
model.residues_
model.score(X, y)
Explanation: Estimated Model parameters: When data is fit with an estimator, parameters are estimated from the data at hand. All the estimated parameters are attributes of the estimator object ending with an underscore:
End of explanation
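Once fitted, the model can also be applied to data it has not seen; a minimal sketch (the new x values below are made up purely for illustration):
# predictions for unseen inputs should fall close to y = 2x + 1
x_new = np.array([[10.0], [11.5]])
print(model.predict(x_new))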
from sklearn import neighbors, datasets
iris = datasets.load_iris()
X, y = iris.data, iris.target
# create the model
knn = neighbors.KNeighborsClassifier(n_neighbors=5)
# fit the model
knn.fit(X, y)
# What kind of iris has 3cm x 5cm sepal and 4cm x 2cm petal?
# call the "predict" method:
result = knn.predict([[5, 3, 4, 2],])
print(iris.target_names[result])
Explanation: The model found a line with a slope 2 and intercept 1, as we'd expect.
Supervised Learning: Classification and Regression
In Supervised Learning, we have a dataset consisting of both features and labels.
The task is to construct an estimator which is able to predict the label of an object
given the set of features. A relatively simple example is predicting the species of
an iris given a set of measurements of its flower.
Some more complicated examples are:
given a multicolor image of an object through a telescope, determine
whether that object is a star, a quasar, or a galaxy.
given a photograph of a person, identify the person in the photo.
given a list of movies a person has watched and their personal rating
of the movie, recommend a list of movies they would like
(So-called recommender systems: a famous example is the Netflix Prize).
What these tasks have in common is that there are one or more unknown
quantities associated with the object that need to be determined from other
observed quantities.
Supervised learning is further broken down into two categories, classification and regression.
In classification, the label is discrete, while in regression, the label is continuous. For example,
in astronomy, the task of determining whether an object is a star, a galaxy, or a quasar is a
classification problem: the label is from three distinct categories. On the other hand, we might
wish to estimate the age of an object based on such observations: this would be a regression problem,
because the label (age) is a continuous quantity.
Classification Example
K nearest neighbors (kNN) is one of the simplest learning strategies: given a new, unknown observation, look up in your reference database which ones have the closest features and assign the predominant class.
Let's try it out on our iris classification problem:
End of explanation
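As a quick hedged check (an addition, not in the original notebook), the classifier's score method reports its accuracy on whatever data we hand it; evaluated on the training data itself the number is optimistic, a point revisited in the model validation section.
# accuracy on the training set itself (an over-optimistic estimate)
print(knn.score(X, y))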
iris.target_names
knn.predict_proba([[5, 3, 4, 2],])
from fig_code import plot_iris_knn
plot_iris_knn()
Explanation: You can also do probabilistic predictions:
End of explanation
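The link between predict_proba and predict can be sketched in one line: the hard prediction is simply the class with the highest estimated probability (illustrative only).
probs = knn.predict_proba([[5, 3, 4, 2],])
# the argmax of the class probabilities matches the hard prediction above
print(iris.target_names[np.argmax(probs, axis=1)])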
# Create some simple data
import numpy as np
np.random.seed(0)
X = np.random.random(size=(20, 1))
y = 3 * X.squeeze() + 2 + np.random.randn(20)
plt.plot(X.squeeze(), y, 'o');
Explanation: Exercise
Use a different estimator on the same problem: sklearn.svm.SVC.
Note that you don't have to know what it is to use it; we're simply trying out the interface here.
If you finish early, try to create a plot similar to the one above with the SVC estimator (one possible sketch follows this explanation).
Regression Example
One of the simplest regression problems is fitting a line to data, which we saw above.
Scikit-learn also contains more sophisticated regression algorithms
End of explanation
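One possible sketch for the SVC exercise above; the parameters are left at reasonable defaults and this is only one of many valid choices, not the canonical answer.
from sklearn.svm import SVC
svc = SVC(kernel='rbf', probability=True)
svc.fit(iris.data, iris.target)  # fit on the iris data, not the regression toy data below
print(iris.target_names[svc.predict([[5, 3, 4, 2],])])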
model = LinearRegression()
model.fit(X, y)
# Plot the data and the model prediction
X_fit = np.linspace(0, 1, 100)[:, np.newaxis]
y_fit = model.predict(X_fit)
plt.plot(X.squeeze(), y, 'o')
plt.plot(X_fit.squeeze(), y_fit);
Explanation: As above, we can plot a line of best fit:
End of explanation
# Fit a Random Forest
from sklearn.ensemble import RandomForestRegressor
model = RandomForestRegressor(n_estimators=10, max_depth=5)
model.fit(X, y)
# Plot the data and the model prediction
X_fit = np.linspace(0, 1, 100)[:, np.newaxis]
y_fit = model.predict(X_fit)
plt.plot(X.squeeze(), y, 'o')
plt.plot(X_fit.squeeze(), y_fit);
Explanation: Scikit-learn also has some more sophisticated models, which can respond to finer features in the data:
End of explanation
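To preview the hyperparameter exercise that follows, here is a hedged sketch of how varying max_depth changes the random forest fit; the depth values are chosen arbitrarily for illustration.
for depth in [2, 10]:
    rf = RandomForestRegressor(n_estimators=10, max_depth=depth)
    rf.fit(X, y)
    plt.plot(X_fit.squeeze(), rf.predict(X_fit), label='max_depth=%d' % depth)
plt.plot(X.squeeze(), y, 'o')
plt.legend();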
X, y = iris.data, iris.target
from sklearn.decomposition import PCA
pca = PCA(n_components=0.95)
pca.fit(X)
X_reduced = pca.transform(X)
print("Reduced dataset shape:", X_reduced.shape)
# matplotlib.pyplot is already imported as plt at the top, so pylab is not needed
plt.scatter(X_reduced[:, 0], X_reduced[:, 1], #c=y,
            cmap='RdYlBu')
print("Meaning of the 2 components:")
for component in pca.components_:
print(" + ".join("%.3f x %s" % (value, name)
for value, name in zip(component,
iris.feature_names)))
Explanation: Whether either of these is a "good" fit or not depends on a number of things; we'll discuss details of how to choose a model later in the tutorial.
Exercise
Explore the RandomForestRegressor object using IPython's help features (i.e. put a question mark after the object).
What arguments are available to RandomForestRegressor?
How does the above plot change if you change these arguments?
These class-level arguments are known as hyperparameters, and we will discuss later how to select hyperparameters in the model validation section.
Unsupervised Learning: Dimensionality Reduction and Clustering
Unsupervised Learning addresses a different sort of problem. Here the data has no labels,
and we are interested in finding similarities between the objects in question. In a sense,
you can think of unsupervised learning as a means of discovering labels from the data itself.
Unsupervised learning comprises tasks such as dimensionality reduction, clustering, and
density estimation. For example, in the iris data discussed above, we can use unsupervised
methods to determine combinations of the measurements which best display the structure of the
data. As we'll see below, such a projection of the data can be used to visualize the
four-dimensional dataset in two dimensions. Some more involved unsupervised learning problems are:
given detailed observations of distant galaxies, determine which features or combinations of
features best summarize the information.
given a mixture of two sound sources (for example, a person talking over some music),
separate the two (this is called the blind source separation problem).
given a video, isolate a moving object and categorize in relation to other moving objects which have been seen.
Sometimes the two may even be combined: e.g. unsupervised learning can be used to find useful
features in heterogeneous data, and then these features can be used within a supervised
framework.
Dimensionality Reduction: PCA
Principal Component Analysis (PCA) is a dimension reduction technique that can find the combinations of variables that explain the most variance.
Consider the iris dataset. It cannot be visualized in a single 2D plot, as it has 4 features. We are going to extract 2 combinations of sepal and petal dimensions to visualize it:
End of explanation
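A useful hedged addition after fitting PCA is to check how much variance the retained components explain; explained_variance_ratio_ is the fitted attribute that holds this.
# fraction of the total variance captured by each retained component
print(pca.explained_variance_ratio_)
print("total variance explained:", pca.explained_variance_ratio_.sum())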
from sklearn.cluster import KMeans
k_means = KMeans(n_clusters=3, random_state=0) # Fixing the RNG in kmeans
k_means.fit(X)
y_pred = k_means.predict(X)
plt.scatter(X_reduced[:, 0], X_reduced[:, 1], c=y_pred,
cmap='RdYlBu');
Explanation: Clustering: K-means
Clustering groups together observations that are homogeneous with respect to a given criterion, finding ''clusters'' in the data.
Note that these clusters will uncover relevant hidden structure of the data only if the criterion used highlights it.
End of explanation
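Because the iris labels happen to be available, we can sketch a sanity check of the clustering against them; this is for illustration only, since in a genuinely unsupervised setting no labels would exist.
from sklearn.metrics import adjusted_rand_score
# 1.0 means the clusters match the true species exactly; values near 0 mean chance-level agreement
print(adjusted_rand_score(y, y_pred))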
from sklearn.neighbors import KNeighborsClassifier
X, y = iris.data, iris.target
clf = KNeighborsClassifier(n_neighbors=1)
clf.fit(X, y)
y_pred = clf.predict(X)
print(np.all(y == y_pred))
Explanation: Recap: Scikit-learn's estimator interface
Scikit-learn strives to have a uniform interface across all methods,
and we'll see examples of these below. Given a scikit-learn estimator
object named model, the following methods are available:
Available in all Estimators
model.fit() : fit training data. For supervised learning applications,
this accepts two arguments: the data X and the labels y (e.g. model.fit(X, y)).
For unsupervised learning applications, this accepts only a single argument,
the data X (e.g. model.fit(X)).
Available in supervised estimators
model.predict() : given a trained model, predict the label of a new set of data.
This method accepts one argument, the new data X_new (e.g. model.predict(X_new)),
and returns the learned label for each object in the array.
model.predict_proba() : For classification problems, some estimators also provide
this method, which returns the probability that a new observation has each categorical label.
In this case, the label with the highest probability is returned by model.predict().
model.score() : for classification or regression problems, most estimators implement
a score method. Scores are typically between 0 and 1, with a larger score indicating a better fit (for regression, the default R^2 score can even be negative for a very poor fit).
Available in unsupervised estimators
model.predict() : predict labels in clustering algorithms.
model.transform() : given an unsupervised model, transform new data into the new basis.
This also accepts one argument X_new, and returns the new representation of the data based
on the unsupervised model.
model.fit_transform() : some estimators implement this method,
which more efficiently performs a fit and a transform on the same input data.
Model Validation
An important piece of machine learning is model validation: that is, determining how well your model will generalize from the training data to future unlabeled data. Let's look at an example using the nearest neighbor classifier. This is a very simple classifier: it stores all the training data and, for any unknown quantity, returns the label of the closest training point.
With the iris data, it very easily returns the correct prediction for each of the input points:
End of explanation
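As a small sketch of the transform / fit_transform equivalence described in the recap (reusing PCA purely as an example; the variable names are made up):
from sklearn.decomposition import PCA
pca2 = PCA(n_components=2)
# fit followed by transform ...
a = pca2.fit(X).transform(X)
# ... should give (numerically) the same result as the single fit_transform call
b = pca2.fit_transform(X)
print(np.allclose(a, b))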
from sklearn.metrics import confusion_matrix
print(confusion_matrix(y, y_pred))
Explanation: A more useful way to look at the results is to view the confusion matrix, or the matrix showing the frequency of inputs and outputs:
End of explanation
from sklearn.model_selection import train_test_split
Xtrain, Xtest, ytrain, ytest = train_test_split(X, y, random_state=23)
clf.fit(Xtrain, ytrain)
ypred = clf.predict(Xtest)
print(confusion_matrix(ytest, ypred))
Xtest.shape[0] / len(X)
Explanation: For each class, all 50 training samples are correctly identified. But this does not mean that our model is perfect! In particular, such a model generalizes extremely poorly to new data. We can simulate this by splitting our data into a training set and a testing set. Scikit-learn contains some convenient routines to do this:
End of explanation
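A hedged complement to the confusion matrix is classification_report, which summarizes per-class precision and recall; it is added here only for illustration.
from sklearn.metrics import classification_report
print(classification_report(ytest, ypred, target_names=iris.target_names))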
from IPython.display import Image
Image("http://scikit-learn.org/dev/_static/ml_map.png")
Explanation: This paints a better picture of the true performance of our classifier: apparently there is some confusion between the second and third species, which we might anticipate given what we've seen of the data above.
This is why it's extremely important to use a train/test split when evaluating your models. We'll go into more depth on model evaluation later in this tutorial.
Flow Chart: How to Choose your Estimator
This is a flow chart created by scikit-learn super-contributor Andreas Mueller which gives a nice summary of which algorithms to choose in various situations. Keep it around as a handy reference!
End of explanation
from sklearn import datasets
digits = datasets.load_digits()
digits.images.shape
Explanation: Original source on the scikit-learn website
Quick Application: Optical Character Recognition
To demonstrate the above principles on a more interesting problem, let's consider OCR (Optical Character Recognition) – that is, recognizing hand-written digits.
In the wild, this problem involves both locating and identifying characters in an image. Here we'll take a shortcut and use scikit-learn's set of pre-formatted digits, which is built into the library.
Loading and visualizing the digits data
We'll use scikit-learn's data access interface and take a look at this data:
End of explanation
fig, axes = plt.subplots(10, 10, figsize=(8, 8))
fig.subplots_adjust(hspace=0.1, wspace=0.1)
for i, ax in enumerate(axes.flat):
ax.imshow(digits.images[i], cmap='binary', interpolation='nearest')
ax.text(0.05, 0.05, str(digits.target[i]),
transform=ax.transAxes, color='green')
ax.set_xticks([])
ax.set_yticks([])
Explanation: Let's plot a few of these:
End of explanation
# The images themselves
print(digits.images.shape)
print(digits.images[0])
# The data for use in our algorithms
print(digits.data.shape)
print(digits.data[0])
# The target label
print(digits.target)
Explanation: Here the data is simply each pixel value within an 8x8 grid:
End of explanation
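As a quick consistency sketch (an addition), the flat data array is simply the stack of flattened images, so the two representations carry the same information.
# each 8x8 image, flattened, matches the corresponding row of digits.data
print(np.allclose(digits.images.reshape((len(digits.images), -1)), digits.data))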
from sklearn.manifold import Isomap
iso = Isomap(n_components=2)
data_projected = iso.fit_transform(digits.data)
data_projected.shape
plt.scatter(data_projected[:, 0], data_projected[:, 1], c=digits.target,
edgecolor='none', alpha=0.5, cmap=plt.cm.get_cmap('nipy_spectral', 10));
plt.colorbar(label='digit label', ticks=range(10))
plt.clim(-0.5, 9.5)
Explanation: So our data have 1797 samples in 64 dimensions.
Unsupervised Learning: Dimensionality Reduction
We'd like to visualize our points within the 64-dimensional parameter space, but it's difficult to plot points in 64 dimensions!
Instead we'll reduce the dimensions to 2, using an unsupervised method.
Here, we'll make use of a manifold learning algorithm called Isomap, and transform the data to two dimensions.
End of explanation
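For comparison, and purely as an illustrative aside, a plain linear PCA projection of the same digits can be computed the same way; the nonlinear Isomap embedding above typically separates the classes more cleanly.
from sklearn.decomposition import PCA
digits_pca = PCA(n_components=2).fit_transform(digits.data)
plt.figure()
plt.scatter(digits_pca[:, 0], digits_pca[:, 1], c=digits.target,
            edgecolor='none', alpha=0.5, cmap=plt.cm.get_cmap('nipy_spectral', 10))
plt.colorbar(label='digit label', ticks=range(10));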
from sklearn.model_selection import train_test_split
Xtrain, Xtest, ytrain, ytest = train_test_split(digits.data, digits.target,
random_state=2)
print(Xtrain.shape, Xtest.shape)
Explanation: We see here that the digits are fairly well-separated in the parameter space; this tells us that a supervised classification algorithm should perform fairly well. Let's give it a try.
Classification on Digits
Let's try a classification task on the digits. The first thing we'll want to do is split the digits into a training and testing sample:
End of explanation
from sklearn.linear_model import LogisticRegression
clf = LogisticRegression(penalty='l2')
clf.fit(Xtrain, ytrain)
ypred = clf.predict(Xtest)
Explanation: Let's use a simple logistic regression which (despite its confusing name) is a classification algorithm:
End of explanation
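Depending on your scikit-learn version, the default solver may warn that it did not fully converge on the raw pixel values; a hedged variant that standardizes the features and raises the iteration limit is sketched below (optional, and not used in the rest of the tutorial).
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
clf_scaled = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf_scaled.fit(Xtrain, ytrain)
ypred_scaled = clf_scaled.predict(Xtest)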
from sklearn.metrics import accuracy_score
accuracy_score(ytest, ypred)
Explanation: We can check our classification accuracy by comparing the true values of the test set to the predictions:
End of explanation
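Equivalently, a one-line sketch: accuracy is just the mean of the element-wise comparison between predictions and true labels.
print(np.mean(ypred == ytest))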
from sklearn.metrics import confusion_matrix
print(confusion_matrix(ytest, ypred))
# note: np.log of zero-count cells yields -inf (and a runtime warning); np.log1p would avoid this
plt.imshow(np.log(confusion_matrix(ytest, ypred)),
           cmap='Blues', interpolation='nearest')
plt.grid(False)
plt.ylabel('true')
plt.xlabel('predicted');
Explanation: This single number doesn't tell us where we've gone wrong: one nice way to do this is to use the confusion matrix
End of explanation
fig, axes = plt.subplots(10, 10, figsize=(8, 8))
fig.subplots_adjust(hspace=0.1, wspace=0.1)
for i, ax in enumerate(axes.flat):
ax.imshow(Xtest[i].reshape(8, 8), cmap='binary')
ax.text(0.05, 0.05, str(ypred[i]),
transform=ax.transAxes,
color='green' if (ytest[i] == ypred[i]) else 'red')
ax.set_xticks([])
ax.set_yticks([])
Explanation: We might also take a look at some of the outputs along with their predicted labels. We'll make the bad labels red:
End of explanation
<END_TASK> |
15,937 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
This is functionally similar to the the other notebook. All the operations here have been vectorized. This results in much much faster code, but is also much unreadable. The vectorization also necessitated the replacement of the Gauss-Seidel smoother with under-relaxed Jacobi. That change has had some effect since GS is "twice as better" as Jacobi.
The Making of a Preconditioner ---Vectorized Version
This is a demonstration of a multigrid preconditioned krylov solver in python3. The code and more examples are present on github here. The problem solved is a Poisson equation on a rectangular domain with homogenous dirichlet boundary conditions. Finite difference with cell-centered discretization is used to get a second order accurate solution, that is further improved to 4th order using deferred correction.
The first step is a multigrid algorithm. This is the simplest 2D geometric multigrid solver.
1. Multigrid algorithm
We need some terminology before going further.
- Approximation
Step1: 1.1 Smoothing operator
This can be a certain number of Jacobi or a Gauss-Seidel iterations. Below is defined smoother that does under-relaxed Jacobi sweeps and returns the result along with the residual.
Step2: 1.2 Interpolation Operator
This operator takes values on a coarse grid and transfers them onto a fine grid. It is also called prolongation. The function below uses bilinear interpolation for this purpose. 'v' is on a coarse grid and we want to interpolate it on a fine grid and store it in v_f.
Step3: 1.3 Restriction
This is exactly the opposite of the interpolation. It takes values from the find grid and transfers them onto the coarse grid. It is kind of an averaging process. This is fundamentally different from interpolation. Each coarse grid point is surrounded by four fine grid points. So quite simply we take the value of the coarse point to be the average of 4 fine points. Here 'v' is the fine grid quantity and 'v_c' is the coarse grid quantity
Step4: 1.4 Bottom Solver
Note that we have looped over the coarse grid in both the cases above. It is easier to access the variables this way. The last part is the Bottom Solver. This must be something that gives us the exact/converged solution to what ever we feed it. What we feed to the bottom solver is the problem at the coarsest level. This has generally has very few points (e.g 2x2=4 in our case) and can be solved exactly by the smoother itself with few iterations. That is what we do here but, any other direct method can also be used. 50 Iterations are used here. If we coarsify to just one point, then just one iteration will solve it exactly.
1.5 V-cycle
Now that we have all the parts, we are ready to build our multigrid algorithm. First we will look at a V-cycle. It is self explanatory. It is a recursive function ,i.e., it calls itself. It takes as input an initial guess 'u', the rhs 'f', the number of multigrid levels 'num_levels' among other things. At each level the V cycle calls another V-cycle. At the lowest level the solving is exact.
Step5: Thats it! Now we can see it in action. We can use a problem with a known solution to test our code. The following functions set up a rhs for a problem with homogenous dirichlet BC on the unit square.
Step6: Let us set up the problem, discretization and solver details. The number of divisions along each dimension is given as a power of two function of the number of levels. In principle this is not required, but having it makes the inter-grid transfers easy.
The coarsest problem is going to have a 2-by-2 grid.
Step7: Now we can call the solver
Step8: True error is the difference of the approximation with the analytical solution. It is largely the discretization error. This what would be present when we solve the discrete equation with a direct/exact method like gaussian elimination. We see that true error stops reducing at the 5th cycle. The approximation is not getting any better after this point. So we can stop after 5 cycles. But, in general we dont know the true error. In practice we use the norm of the (relative) residual as a stopping criterion. As the cycles progress the floating point round-off error limit is reached and the residual also stops decreasing.
This was the multigrid V cycle. We can use this as preconditioner to a Krylov solver. But before we get to that let's complete the multigrid introduction by looking at the Full Multi-Grid algorithm. You can skip this section safely.
1.6 Full Multi-Grid
We started with a zero initial guess for the V-cycle. Presumably, if we had a better initial guess we would get better results. So we solve a coarse problem exactly and interpolate it onto the fine grid and use that as the initial guess for the V-cycle. The result of doing this recursively is the Full Multi-Grid(FMG) Algorithm. Unlike the V-cycle which was an iterative procedure, FMG is a direct solver. There is no successive improvement of the approximation. It straight away gives us an approximation that is within the discretization error. The FMG algorithm is given below.
Step9: Lets call the FMG solver for the same problem
Step10: It works wonderfully. The residual is large but the true error is within the discretization level. FMG is said to be scalable because the amount of work needed is linearly proportional to the the size of the problem. In big-O notation, FMG is $\mathcal{O}(N)$. Where N is the number of unknowns. Exact methods (Gaussian Elimination, LU decomposition ) are typically $\mathcal{O}(N^3)$
2. Stationary iterative methods as preconditioners
A preconditioner reduces the condition number of the coefficient matrix, thereby making it easier to solve. We dont explicitly need a matrix because we dont access the elements by index, coefficient matrix or preconditioner. What we do need is the action of the matrix on a vector. That is, we need only the matrix-vector product. The coefficient matrix can be defined as a function that takes in a vector and returns the matrix vector product.
Any stationary method has an iteration matrix associated with it. This is easily seen for Jacobi or GS methods. This iteration matrix can be used as a preconditioner. But we dont explicitly need it. The stationary iterative method for solving an equation can be written as a Richardson iteration. When the initial guess is set to zero and one iteration is performed, what you get is the action of the preconditioner on the RHS vector. That is, we get a preconditioner-vector product, which is what we want.
This allows us to use any blackbox stationary iterative method as a preconditioner
To repeat, if there is a stationary iterative method that you want to use as a preconditioner, set the initial guess to zero, set the RHS to the vector you want to multiply the preconditioner with and perform one iteration of the stationary method.
We can use the multigrid V-cycle as a preconditioner this way. We cant use FMG because it is not an iterative method.
The matrix as a function can be defined using LinearOperator from scipy.sparse.linalg. It gives us an object which works like a matrix in-so-far as the product with a vector is concerned. It can be used as a regular 2D numpy array in multiplication with a vector. This can be passed to CG(), GMRES() or BiCGStab() as a preconditioner.
Having a symmetric preconditioner would be nice because it will retain the symmetry if the original problem is symmetric and we can still use CG. If the preconditioner is not symmetric CG will not converge, and we would have to use a more general solver.
Below is the code for defining a V-Cycle preconditioner. The default is one V-cycle. In the V-cycle, the defaults are one pre-sweep, one post-sweep.
Step11: Let us define the Poisson matrix also as a LinearOperator
Step12: The nested function is required because "matvec" in LinearOperator takes only one argument-- the vector. But we require the grid details and boundary condition information to create the Poisson matrix. Now will use these to solve a problem. Unlike earlier where we used an analytical solution and RHS, we will start with a random vector which will be our exact solution, and multiply it with the Poisson matrix to get the Rhs vector for the problem. There is no analytical equation associated with the matrix equation.
The scipy sparse solve routines do not return the number of iterations performed. We can use this wrapper to get the number of iterations
Step13: Lets look at what happens with and without the preconditioner.
Step14: Without the preconditioner ~150 iterations were needed, where as with the V-cycle preconditioner the solution was obtained in far fewer iterations. Let's try with CG | <ASSISTANT_TASK:>
Python Code:
import numpy as np
Explanation: This is functionally similar to the the other notebook. All the operations here have been vectorized. This results in much much faster code, but is also much unreadable. The vectorization also necessitated the replacement of the Gauss-Seidel smoother with under-relaxed Jacobi. That change has had some effect since GS is "twice as better" as Jacobi.
The Making of a Preconditioner ---Vectorized Version
This is a demonstration of a multigrid preconditioned krylov solver in python3. The code and more examples are present on github here. The problem solved is a Poisson equation on a rectangular domain with homogenous dirichlet boundary conditions. Finite difference with cell-centered discretization is used to get a second order accurate solution, that is further improved to 4th order using deferred correction.
The first step is a multigrid algorithm. This is the simplest 2D geometric multigrid solver.
1. Multigrid algorithm
We need some terminology before going further.
- Approximation:
- Residual:
- Exact solution (of the discrete problem)
- Correction
This is a geometric multigrid algorithm, where a series of nested grids are used. There are four parts to a multigrid algorithm
- Smoothing Operator (a.k.a Relaxation)
- Restriction Operator
- Interpolation Operator (a.k.a Prolongation Operator)
- Bottom solver
We will define each of these in sequence. These operators act of different quantities that are stored at the cell center. We will get to exactly what later on. To begin import numpy.
End of explanation
def Jacrelax(nx,ny,u,f,iters=1):
'''
under-relaxed Jacobi iteration
'''
dx=1.0/nx; dy=1.0/ny
Ax=1.0/dx**2; Ay=1.0/dy**2
Ap=1.0/(2.0*(Ax+Ay))
#Dirichlet BC
u[ 0,:] = -u[ 1,:]
u[-1,:] = -u[-2,:]
u[:, 0] = -u[:, 1]
u[:,-1] = -u[:,-2]
for it in range(iters):
u[1:nx+1,1:ny+1] = 0.8*Ap*(Ax*(u[2:nx+2,1:ny+1] + u[0:nx,1:ny+1])
+ Ay*(u[1:nx+1,2:ny+2] + u[1:nx+1,0:ny])
- f[1:nx+1,1:ny+1])+0.2*u[1:nx+1,1:ny+1]
#Dirichlet BC
u[ 0,:] = -u[ 1,:]
u[-1,:] = -u[-2,:]
u[:, 0] = -u[:, 1]
u[:,-1] = -u[:,-2]
res=np.zeros([nx+2,ny+2])
res[1:nx+1,1:ny+1]=f[1:nx+1,1:ny+1]-(( Ax*(u[2:nx+2,1:ny+1]+u[0:nx,1:ny+1])
+ Ay*(u[1:nx+1,2:ny+2]+u[1:nx+1,0:ny])
- 2.0*(Ax+Ay)*u[1:nx+1,1:ny+1]))
return u,res
Explanation: 1.1 Smoothing operator
This can be a certain number of Jacobi or a Gauss-Seidel iterations. Below is defined smoother that does under-relaxed Jacobi sweeps and returns the result along with the residual.
End of explanation
def prolong(nx,ny,v):
'''
interpolate 'v' to the fine grid
'''
v_f=np.zeros([2*nx+2,2*ny+2])
v_f[1:2*nx:2 ,1:2*ny:2 ] = 0.5625*v[1:nx+1,1:ny+1]+0.1875*(v[0:nx ,1:ny+1]+v[1:nx+1,0:ny] )+0.0625*v[0:nx ,0:ny ]
v_f[2:2*nx+1:2,1:2*ny:2 ] = 0.5625*v[1:nx+1,1:ny+1]+0.1875*(v[2:nx+2,1:ny+1]+v[1:nx+1,0:ny] )+0.0625*v[2:nx+2,0:ny ]
v_f[1:2*nx:2 ,2:2*ny+1:2] = 0.5625*v[1:nx+1,1:ny+1]+0.1875*(v[0:nx ,1:ny+1]+v[1:nx+1,2:ny+2])+0.0625*v[0:nx ,2:ny+2]
v_f[2:2*nx+1:2,2:2*ny+1:2] = 0.5625*v[1:nx+1,1:ny+1]+0.1875*(v[2:nx+2,1:ny+1]+v[1:nx+1,2:ny+2])+0.0625*v[2:nx+2,2:ny+2]
return v_f
Explanation: 1.2 Interpolation Operator
This operator takes values on a coarse grid and transfers them onto a fine grid. It is also called prolongation. The function below uses bilinear interpolation for this purpose. 'v' is on a coarse grid and we want to interpolate it on a fine grid and store it in v_f.
End of explanation
def restrict(nx,ny,v):
'''
restrict 'v' to the coarser grid
'''
v_c=np.zeros([nx+2,ny+2])
v_c[1:nx+1,1:ny+1]=0.25*(v[1:2*nx:2,1:2*ny:2]+v[1:2*nx:2,2:2*ny+1:2]+v[2:2*nx+1:2,1:2*ny:2]+v[2:2*nx+1:2,2:2*ny+1:2])
return v_c
Explanation: 1.3 Restriction
This is exactly the opposite of the interpolation. It takes values from the find grid and transfers them onto the coarse grid. It is kind of an averaging process. This is fundamentally different from interpolation. Each coarse grid point is surrounded by four fine grid points. So quite simply we take the value of the coarse point to be the average of 4 fine points. Here 'v' is the fine grid quantity and 'v_c' is the coarse grid quantity
End of explanation
def V_cycle(nx,ny,num_levels,u,f,level=1):
if(level==num_levels):#bottom solve
u,res=Jacrelax(nx,ny,u,f,iters=50)
return u,res
#Step 1: Relax Au=f on this grid
u,res=Jacrelax(nx,ny,u,f,iters=1)
#Step 2: Restrict residual to coarse grid
res_c=restrict(nx//2,ny//2,res)
#Step 3:Solve A e_c=res_c on the coarse grid. (Recursively)
e_c=np.zeros_like(res_c)
e_c,res_c=V_cycle(nx//2,ny//2,num_levels,e_c,res_c,level+1)
#Step 4: Interpolate(prolong) e_c to fine grid and add to u
u+=prolong(nx//2,ny//2,e_c)
#Step 5: Relax Au=f on this grid
u,res=Jacrelax(nx,ny,u,f,iters=1)
return u,res
Explanation: 1.4 Bottom Solver
Note that we have looped over the coarse grid in both the cases above. It is easier to access the variables this way. The last part is the Bottom Solver. This must be something that gives us the exact/converged solution to what ever we feed it. What we feed to the bottom solver is the problem at the coarsest level. This has generally has very few points (e.g 2x2=4 in our case) and can be solved exactly by the smoother itself with few iterations. That is what we do here but, any other direct method can also be used. 50 Iterations are used here. If we coarsify to just one point, then just one iteration will solve it exactly.
1.5 V-cycle
Now that we have all the parts, we are ready to build our multigrid algorithm. First we will look at a V-cycle. It is self explanatory. It is a recursive function ,i.e., it calls itself. It takes as input an initial guess 'u', the rhs 'f', the number of multigrid levels 'num_levels' among other things. At each level the V cycle calls another V-cycle. At the lowest level the solving is exact.
End of explanation
#analytical solution
def Uann(x,y):
return (x**3-x)*(y**3-y)
#RHS corresponding to above
def source(x,y):
return 6*x*y*(x**2+ y**2 - 2)
Explanation: Thats it! Now we can see it in action. We can use a problem with a known solution to test our code. The following functions set up a rhs for a problem with homogenous dirichlet BC on the unit square.
End of explanation
#input
max_cycles = 30
nlevels = 6
NX = 2*2**(nlevels-1)
NY = 2*2**(nlevels-1)
tol = 1e-15
#the grid has one layer of ghost cellss
uann=np.zeros([NX+2,NY+2])#analytical solution
u =np.zeros([NX+2,NY+2])#approximation
f =np.zeros([NX+2,NY+2])#RHS
#calcualte the RHS and exact solution
DX=1.0/NX
DY=1.0/NY
xc=np.linspace(0.5*DX,1-0.5*DX,NX)
yc=np.linspace(0.5*DY,1-0.5*DY,NY)
XX,YY=np.meshgrid(xc,yc,indexing='ij')
uann[1:NX+1,1:NY+1]=Uann(XX,YY)
f[1:NX+1,1:NY+1] =source(XX,YY)
Explanation: Let us set up the problem, discretization and solver details. The number of divisions along each dimension is given as a power of two function of the number of levels. In principle this is not required, but having it makes the inter-grid transfers easy.
The coarsest problem is going to have a 2-by-2 grid.
End of explanation
print('mgd2d.py solver:')
print('NX:',NX,', NY:',NY,', tol:',tol,'levels: ',nlevels)
for it in range(1,max_cycles+1):
u,res=V_cycle(NX,NY,nlevels,u,f)
rtol=np.max(np.max(np.abs(res)))
if(rtol<tol):
break
error=uann[1:NX+1,1:NY+1]-u[1:NX+1,1:NY+1]
print(' cycle: ',it,', L_inf(res.)= ',rtol,',L_inf(true error): ',np.max(np.max(np.abs(error))))
error=uann[1:NX+1,1:NY+1]-u[1:NX+1,1:NY+1]
print('L_inf (true error): ',np.max(np.max(np.abs(error))))
Explanation: Now we can call the solver
End of explanation
def FMG(nx,ny,num_levels,f,nv=1,level=1):
if(level==num_levels):#bottom solve
u=np.zeros([nx+2,ny+2])
u,res=Jacrelax(nx,ny,u,f,iters=50)
return u,res
#Step 1: Restrict the rhs to a coarse grid
f_c=restrict(nx//2,ny//2,f)
#Step 2: Solve the coarse grid problem using FMG
u_c,_=FMG(nx//2,ny//2,num_levels,f_c,nv,level+1)
#Step 3: Interpolate u_c to the fine grid
u=prolong(nx//2,ny//2,u_c)
#step 4: Execute 'nv' V-cycles
for _ in range(nv):
u,res=V_cycle(nx,ny,num_levels-level,u,f)
return u,res
Explanation: True error is the difference of the approximation with the analytical solution. It is largely the discretization error. This what would be present when we solve the discrete equation with a direct/exact method like gaussian elimination. We see that true error stops reducing at the 5th cycle. The approximation is not getting any better after this point. So we can stop after 5 cycles. But, in general we dont know the true error. In practice we use the norm of the (relative) residual as a stopping criterion. As the cycles progress the floating point round-off error limit is reached and the residual also stops decreasing.
This was the multigrid V cycle. We can use this as preconditioner to a Krylov solver. But before we get to that let's complete the multigrid introduction by looking at the Full Multi-Grid algorithm. You can skip this section safely.
1.6 Full Multi-Grid
We started with a zero initial guess for the V-cycle. Presumably, if we had a better initial guess we would get better results. So we solve a coarse problem exactly and interpolate it onto the fine grid and use that as the initial guess for the V-cycle. The result of doing this recursively is the Full Multi-Grid(FMG) Algorithm. Unlike the V-cycle which was an iterative procedure, FMG is a direct solver. There is no successive improvement of the approximation. It straight away gives us an approximation that is within the discretization error. The FMG algorithm is given below.
End of explanation
print('mgd2d.py FMG solver:')
print('NX:',NX,', NY:',NY,', levels: ',nlevels)
u,res=FMG(NX,NY,nlevels,f,nv=1)
rtol=np.max(np.max(np.abs(res)))
print(' FMG L_inf(res.)= ',rtol)
error=uann[1:NX+1,1:NY+1]-u[1:NX+1,1:NY+1]
print('L_inf (true error): ',np.max(np.max(np.abs(error))))
Explanation: Lets call the FMG solver for the same problem
End of explanation
from scipy.sparse.linalg import LinearOperator,bicgstab,cg
def MGVP(nx,ny,num_levels):
'''
Multigrid Preconditioner. Returns a (scipy.sparse.linalg.) LinearOperator that can
be passed to Krylov solvers as a preconditioner.
'''
def pc_fn(v):
u =np.zeros([nx+2,ny+2])
f =np.zeros([nx+2,ny+2])
f[1:nx+1,1:ny+1] =v.reshape([nx,ny]) #in practice this copying can be avoived
#perform one V cycle
u,res=V_cycle(nx,ny,num_levels,u,f)
return u[1:nx+1,1:ny+1].reshape(v.shape)
M=LinearOperator((nx*ny,nx*ny), matvec=pc_fn)
return M
Explanation: It works wonderfully. The residual is large but the true error is within the discretization level. FMG is said to be scalable because the amount of work needed is linearly proportional to the the size of the problem. In big-O notation, FMG is $\mathcal{O}(N)$. Where N is the number of unknowns. Exact methods (Gaussian Elimination, LU decomposition ) are typically $\mathcal{O}(N^3)$
2. Stationary iterative methods as preconditioners
A preconditioner reduces the condition number of the coefficient matrix, thereby making it easier to solve. We dont explicitly need a matrix because we dont access the elements by index, coefficient matrix or preconditioner. What we do need is the action of the matrix on a vector. That is, we need only the matrix-vector product. The coefficient matrix can be defined as a function that takes in a vector and returns the matrix vector product.
Any stationary method has an iteration matrix associated with it. This is easily seen for Jacobi or GS methods. This iteration matrix can be used as a preconditioner. But we dont explicitly need it. The stationary iterative method for solving an equation can be written as a Richardson iteration. When the initial guess is set to zero and one iteration is performed, what you get is the action of the preconditioner on the RHS vector. That is, we get a preconditioner-vector product, which is what we want.
This allows us to use any blackbox stationary iterative method as a preconditioner
To repeat, if there is a stationary iterative method that you want to use as a preconditioner, set the initial guess to zero, set the RHS to the vector you want to multiply the preconditioner with and perform one iteration of the stationary method.
We can use the multigrid V-cycle as a preconditioner this way. We cant use FMG because it is not an iterative method.
The matrix as a function can be defined using LinearOperator from scipy.sparse.linalg. It gives us an object which works like a matrix in-so-far as the product with a vector is concerned. It can be used as a regular 2D numpy array in multiplication with a vector. This can be passed to CG(), GMRES() or BiCGStab() as a preconditioner.
Having a symmetric preconditioner would be nice because it will retain the symmetry if the original problem is symmetric and we can still use CG. If the preconditioner is not symmetric CG will not converge, and we would have to use a more general solver.
Below is the code for defining a V-Cycle preconditioner. The default is one V-cycle. In the V-cycle, the defaults are one pre-sweep, one post-sweep.
End of explanation
def Laplace(nx,ny):
'''
Action of the Laplace matrix on a vector v
'''
def mv(v):
u =np.zeros([nx+2,ny+2])
u[1:nx+1,1:ny+1]=v.reshape([nx,ny])
dx=1.0/nx; dy=1.0/ny
Ax=1.0/dx**2; Ay=1.0/dy**2
#BCs. Needs to be generalized!
u[ 0,:] = -u[ 1,:]
u[-1,:] = -u[-2,:]
u[:, 0] = -u[:, 1]
u[:,-1] = -u[:,-2]
ut = (Ax*(u[2:nx+2,1:ny+1]+u[0:nx,1:ny+1])
+ Ay*(u[1:nx+1,2:ny+2]+u[1:nx+1,0:ny])
- 2.0*(Ax+Ay)*u[1:nx+1,1:ny+1])
return ut.reshape(v.shape)
A = LinearOperator((nx*ny,nx*ny), matvec=mv)
return A
Explanation: Let us define the Poisson matrix also as a LinearOperator
End of explanation
def solve_sparse(solver,A, b,tol=1e-10,maxiter=500,M=None):
num_iters = 0
def callback(xk):
nonlocal num_iters
num_iters+=1
x,status=solver(A, b,tol=tol,maxiter=maxiter,callback=callback,M=M)
return x,status,num_iters
Explanation: The nested function is required because "matvec" in LinearOperator takes only one argument-- the vector. But we require the grid details and boundary condition information to create the Poisson matrix. Now will use these to solve a problem. Unlike earlier where we used an analytical solution and RHS, we will start with a random vector which will be our exact solution, and multiply it with the Poisson matrix to get the Rhs vector for the problem. There is no analytical equation associated with the matrix equation.
The scipy sparse solve routines do not return the number of iterations performed. We can use this wrapper to get the number of iterations
End of explanation
A = Laplace(NX,NY)
#Exact solution and RHS
uex=np.random.rand(NX*NY,1)
b=A*uex
#Multigrid Preconditioner
M=MGVP(NX,NY,nlevels)
u,info,iters=solve_sparse(bicgstab,A,b,tol=1e-10,maxiter=500)
print('Without preconditioning. status:',info,', Iters: ',iters)
error=uex-u
print('error :',np.max(np.abs(error)))
u,info,iters=solve_sparse(bicgstab,A,b,tol=1e-10,maxiter=500,M=M)
print('With preconditioning. status:',info,', Iters: ',iters)
error=uex-u
print('error :',np.max(np.abs(error)))
Explanation: Lets look at what happens with and without the preconditioner.
End of explanation
u,info,iters=solve_sparse(cg,A,b,tol=1e-10,maxiter=500)
print('Without preconditioning. status:',info,', Iters: ',iters)
error=uex-u
print('error :',np.max(np.abs(error)))
u,info,iters=solve_sparse(cg,A,b,tol=1e-10,maxiter=500,M=M)
print('With preconditioning. status:',info,', Iters: ',iters)
error=uex-u
print('error :',np.max(np.abs(error)))
Explanation: Without the preconditioner ~150 iterations were needed, where as with the V-cycle preconditioner the solution was obtained in far fewer iterations. Let's try with CG:
End of explanation
<END_TASK> |
15,938 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Lab 1
Step1: The core tables in the data warehouse are derived from 5 separate core operational systems (each with many tables)
Step2: Question
Step3: Question
- How many columns of data does each table have (sorted by most to least?)
- Which table has the most columns of data?
Step4: Previewing sample rows of data values
In the BigQuery UI, find the Resources panel and search for catalog_sales. You may need to add the qwiklabs-resources project to your UI by clicking + Add Data -> Pin a project and entering qwiklabs-resources
Click on the catalog_sales table name for the tpcds_2t_baseline dataset under qwiklabs-resources
Question
- How many rows are in the table?
- How large is the table in TB?
Hint
Step5: A note on our data
Step6: Running the first benchmark test
Now let's run the first query against our dataset and note the execution time. Tip
Step7: It should execute in just a few seconds. Then try running it again and see if you get the same performance. BigQuery will automatically cache the results from the first time you ran the query and then serve those same results to you when you can the query again. We can confirm this by analyzing the query job statistics.
Viewing BigQuery job statistics
Let's list our five most recent query jobs run on BigQuery using the bq command line interface. Then we will get even more detail on our most recent job with the bq show command. Be sure to replace the job id with your own.
Step9: Looking at the job statistics we can see our most recent query hit cache
- cacheHit
Step11: 132 GB will be processed. At the time of writing, BigQuery pricing is \$5 per 1 TB (or 1000 GB) of data after the first free 1 TB each month. Assuming we've exhausted our 1 TB free this month, this would be \$0.66 to run.
Now let's run it an ensure we're not pulling from cache so we get an accurate time-to-completion benchmark.
Step12: If you're an experienced BigQuery user, you likely have seen these same metrics in the Web UI as well as highlighted in the red box below
Step13: Use the BigQuery Data Transfer Service to copy an existing dataset
Enable the BigQuery Data Transfer Service API
Navigate to the BigQuery console and the existing qwiklabs-resources dataset
Click Copy Dataset
In the pop-up, choose your project name and the newly created dataset name from the previous step
Click Copy
Wait for the transfer to complete
Verify you now have the baseline data in your project
Run the below query and confirm you see data. Note that if you omit the project-id ahead of the dataset name in the FROM clause, BigQuery will assume your default project.
Step14: Setup an automated test
Running each of the 99 queries manually via the Console UI would be a tedious effort. We'll show you how you can run all 99 programmatically and automatically log the output (time and GB processed) to a log file for analysis.
Below is a shell script that
Step15: Viewing the benchmark results
As part of the benchmark test, we stored the processing time of each query into a new perf BigQuery table. We can query that table and get some performance stats for our test.
First are each of the tests we ran
Step16: And finally, the overall statistics for the entire test
Step17: Benchmarking all 99 queries
As we mentioned before, we already ran all 99 queries and recorded the results and made them available for you to query in a public table
Step18: And the results of the complete test | <ASSISTANT_TASK:>
Python Code:
%%bigquery
SELECT
dataset_id,
table_id,
-- Convert bytes to GB.
ROUND(size_bytes/pow(10,9),2) as size_gb,
-- Convert UNIX EPOCH to a timestamp.
TIMESTAMP_MILLIS(creation_time) AS creation_time,
TIMESTAMP_MILLIS(last_modified_time) as last_modified_time,
row_count,
CASE
WHEN type = 1 THEN 'table'
WHEN type = 2 THEN 'view'
ELSE NULL
END AS type
FROM
`qwiklabs-resources.tpcds_2t_baseline.__TABLES__`
ORDER BY size_gb DESC
Explanation: Lab 1: Explore and Benchmark a BigQuery Dataset for Performance
Overview
In this lab you will take an existing 2TB+ TPC-DS benchmark dataset and learn the data warehouse optimization methods you can apply to the dataset in BigQuery to improve performance.
What you'll do
In this lab, you will learn how to:
Use BigQuery to access and query the TPC-DS benchmark dataset
Run pre-defined queries to establish baseline performance benchmarks
Prerequisites
This is an advanced level SQL lab. Before taking it, you should have experience with SQL. Familiarity with BigQuery is also highly recommended. If you need to get up to speed in these areas, you should take this Data Analyst series of labs first:
Quest: BigQuery for Data Analysts
Once you're ready, scroll down to learn about the services you will be using and how to properly set up your lab environment.
BigQuery
BigQuery is Google's fully managed, NoOps, low cost analytics database. With BigQuery you can query terabytes and terabytes of data without managing infrastructure or needing a database administrator. BigQuery uses SQL and takes advantage of the pay-as-you-go model. BigQuery allows you to focus on analyzing data to find meaningful insights.
TPC-DS Background
In order to benchmark the performance of a data warehouse we first must get tables and data to run queries against. There is a public organization, TPC, that provides large benchmarking datasets to companies explicitly for this purpose. The purpose of TPC benchmarks is to provide relevant, objective performance data to industry users.
The TPC-DS Dataset we will be using comprises of 25 tables and 99 queries that simulate common data analysis tasks. View the full documentation here.
Exploring TPC-DS in BigQuery
The TPC-DS tables have been loaded into BigQuery and you will explore ways to optimize the performance of common queries by using BigQuery data warehousing best practices. We have limited the size to 2TB for the timing of this lab but the dataset itself can be expanded as needed.
Note: The TPC Benchmark and TPC-DS are trademarks of the Transaction Processing Performance Council (http://www.tpc.org). The Cloud DW benchmark is derived from the TPC-DS Benchmark and as such is not comparable to published TPC-DS results.
Exploring the Schema with SQL
Question:
- How many tables are in the dataset?
- What is the name of the largest table (in GB)? How many rows does it have?
End of explanation
%%bigquery
SELECT * FROM
`qwiklabs-resources.tpcds_2t_baseline.INFORMATION_SCHEMA.COLUMNS`
Explanation: The core tables in the data warehouse are derived from 5 separate core operational systems (each with many tables):
These systems are driven by the core functions of our retail business. As you can see, our store accepts sales from online (web), mail-order (catalog), and in-store. The business must keep track of inventory and can offer promotional discounts on items sold.
Exploring all available columns of data
Question:
- How many columns of data are in the entire dataset (all tables)?
End of explanation
%%bigquery
SELECT * FROM
`qwiklabs-resources.tpcds_2t_baseline.INFORMATION_SCHEMA.COLUMNS`
WHERE
is_partitioning_column = 'YES' OR clustering_ordinal_position IS NOT NULL
Explanation: Question:
- Are any of the columns of data in this baseline dataset partitioned or clustered?
End of explanation
%%bigquery
SELECT
COUNT(column_name) AS column_count,
table_name
FROM
`qwiklabs-resources.tpcds_2t_baseline.INFORMATION_SCHEMA.COLUMNS`
GROUP BY table_name
ORDER BY column_count DESC, table_name
Explanation: Question
- How many columns of data does each table have (sorted by most to least?)
- Which table has the most columns of data?
End of explanation
%%bigquery --verbose
SELECT
cs_item_sk,
COUNT(cs_order_number) AS total_orders,
SUM(cs_quantity) AS total_quantity,
SUM(cs_ext_sales_price) AS total_revenue,
SUM(cs_net_profit) AS total_profit
FROM
`qwiklabs-resources.tpcds_2t_baseline.catalog_sales`
GROUP BY
cs_item_sk
ORDER BY
total_orders DESC
LIMIT
100
Explanation: Previewing sample rows of data values
In the BigQuery UI, find the Resources panel and search for catalog_sales. You may need to add the qwiklabs-resources project to your UI by clicking + Add Data -> Pin a project and entering qwiklabs-resources
Click on the catalog_sales table name for the tpcds_2t_baseline dataset under qwiklabs-resources
Question
- How many rows are in the table?
- How large is the table in TB?
Hint: Use the Details button in the web UI to quickly access table metadata
Question:
- Preview the data and find the Catalog Sales Extended Sales Price cs_ext_sales_price field (which is calculated based on product quantity * sales price)
- Are there any missing data values for Catalog Sales Quantity (cs_quantity)?
- Are there any missing values for cs_ext_ship_cost? For what type of product could this be expected? (Digital products)
Create an example sales report
Write a query that shows key sales stats for each item sold from the Catalog and execute it in the BigQuery UI:
- total orders
- total unit quantity
- total revenue
- total profit
- sorted by total orders highest to lowest, limit 100
End of explanation
!head --lines=50 'sql/example_baseline_queries.sql'
Explanation: A note on our data: The TPC-DS benchmark allows data warehouse practicioners to generate any volume of data programmatically. Since the rows of data are system generated, they may not make the most sense in a business context (like why are we selling our top product at such a huge profit loss!).
The good news is that to benchmark our performance we care most about the volume of rows and columns to run our benchmark against.
Analyzing query performance
Click on Execution details
Refer to the chart below (which should be similar to your results) and answer the following questions.
Question
- How long did it take the query to run? 5.1s
- How much data in GB was processed? 150GB
- How much slot time was consumed? 1hr 24min
- How many rows were input? 2,881,495,086
- How many rows were output as the end result (before the limit)? 23,300
- What does the output rows mean in the context of our query? (23,300 unique cs_item_sk)
Side note: Slot Time
We know the query took 5.1 seconds to run so what does the 1hr 24 min slot time metric mean?
Inside of the BigQuery service are lots of virtual machines that massively process your data and query logic in parallel. These workers, or "slots", work together to process a single query job really quickly. For accounts with on-demand pricing, you can have up to 2,000 slots.
So say we had 30 minutes of slot time or 1800 seconds. If the query took 20 seconds in total to run,
but it was 1800 seconds worth of work, how many workers at minimum worked on it?
1800/20 = 90
And that's assuming each worker instantly had all the data it needed (no shuffling of data between workers) and was at full capacity for all 20 seconds!
In reality, workers have a variety of tasks (waiting for data, reading it, performing computations, and writing data)
and also need to compare notes with eachother on what work was already done on the job. The good news for you is
that you don't need to worry about optimizing these workers or the underlying data to run perfectly in parallel. That's why BigQuery is a managed service -- there's an entire team dedicated to hardware and data storage optimization.
In case you were wondering, the worker limit for your project is 2,000 slots at once.
Running a performance benchmark
To performance benchmark our data warehouse in BigQuery we need to create more than just a single SQL report. The good news is the TPC-DS dataset ships with 99 standard benchmark queries that we can run and log the performance outcomes.
In this lab, we are doing no adjustments to the existing data warehouse tables (no partitioning, no clustering, no nesting) so we can establish a performance benchmark to beat in future labs.
Viewing the 99 pre-made SQL queries
We have a long SQL file with 99 standard queries against this dataset stored in our /sql/ directory.
Let's view the first 50 lines of those baseline queries to get familiar with how we will be performance benchmarking our dataset.
End of explanation
%%bigquery --verbose
# start query 1 in stream 0 using template query96.tpl
select count(*)
from `qwiklabs-resources.tpcds_2t_baseline.store_sales` as store_sales
,`qwiklabs-resources.tpcds_2t_baseline.household_demographics` as household_demographics
,`qwiklabs-resources.tpcds_2t_baseline.time_dim` as time_dim,
`qwiklabs-resources.tpcds_2t_baseline.store` as store
where ss_sold_time_sk = time_dim.t_time_sk
and ss_hdemo_sk = household_demographics.hd_demo_sk
and ss_store_sk = s_store_sk
and time_dim.t_hour = 8
and time_dim.t_minute >= 30
and household_demographics.hd_dep_count = 5
and store.s_store_name = 'ese'
order by count(*)
limit 100;
Explanation: Running the first benchmark test
Now let's run the first query against our dataset and note the execution time. Tip: You can use the --verbose flag in %%bigquery magics to return the job and completion time.
End of explanation
!bq ls -j -a -n 5
!bq show --format=prettyjson -j 612a4b28-cb5c-4e0b-ad5b-ebd51c3b2439
Explanation: It should execute in just a few seconds. Then try running it again and see if you get the same performance. BigQuery will automatically cache the results from the first time you ran the query and then serve those same results to you when you can the query again. We can confirm this by analyzing the query job statistics.
Viewing BigQuery job statistics
Let's list our five most recent query jobs run on BigQuery using the bq command line interface. Then we will get even more detail on our most recent job with the bq show command. Be sure to replace the job id with your own.
End of explanation
%%bash
bq query \
--dry_run \
--nouse_cache \
--use_legacy_sql=false \
\
select count(*)
from \`qwiklabs-resources.tpcds_2t_baseline.store_sales\` as store_sales
,\`qwiklabs-resources.tpcds_2t_baseline.household_demographics\` as household_demographics
,\`qwiklabs-resources.tpcds_2t_baseline.time_dim\` as time_dim, \`qwiklabs-resources.tpcds_2t_baseline.store\` as store
where ss_sold_time_sk = time_dim.t_time_sk
and ss_hdemo_sk = household_demographics.hd_demo_sk
and ss_store_sk = s_store_sk
and time_dim.t_hour = 8
and time_dim.t_minute >= 30
and household_demographics.hd_dep_count = 5
and store.s_store_name = 'ese'
order by count(*)
limit 100;
# Convert bytes to GB
132086388641 / 1e+9
Explanation: Looking at the job statistics we can see our most recent query hit cache
- cacheHit: true and therefore
- totalBytesProcessed: 0.
While this is great in normal uses for BigQuery (you aren't charged for queries that hit cache) it kind of ruins our performance test. While cache is super useful we want to disable it for testing purposes.
Disabling Cache and Dry Running Queries
As of the time this lab was created, you can't pass a flag to %%bigquery iPython notebook magics to disable cache or to quickly see the amount of data processed. So we will use the traditional bq command line interface in bash.
First we will do a dry run of the query without processing any data just to see how many bytes of data would be processed. Then we will remove that flag and ensure nouse_cache is set to avoid hitting cache as well.
End of explanation
%%bash
bq query \
--nouse_cache \
--use_legacy_sql=false \
\
select count(*)
from \`qwiklabs-resources.tpcds_2t_baseline.store_sales\` as store_sales
,\`qwiklabs-resources.tpcds_2t_baseline.household_demographics\` as household_demographics
,\`qwiklabs-resources.tpcds_2t_baseline.time_dim\` as time_dim, \`qwiklabs-resources.tpcds_2t_baseline.store\` as store
where ss_sold_time_sk = time_dim.t_time_sk
and ss_hdemo_sk = household_demographics.hd_demo_sk
and ss_store_sk = s_store_sk
and time_dim.t_hour = 8
and time_dim.t_minute >= 30
and household_demographics.hd_dep_count = 5
and store.s_store_name = 'ese'
order by count(*)
limit 100;
Explanation: 132 GB will be processed. At the time of writing, BigQuery pricing is \$5 per 1 TB (or 1000 GB) of data after the first free 1 TB each month. Assuming we've exhausted our 1 TB free this month, this would be \$0.66 to run.
Now let's run it an ensure we're not pulling from cache so we get an accurate time-to-completion benchmark.
End of explanation
%%bash
export PROJECT_ID=$(gcloud config list --format 'value(core.project)')
export BENCHMARK_DATASET_NAME=tpcds_2t_baseline # Name of the dataset you want to create
## Create a BigQuery dataset for tpcds_2t_flat_part_clust if it doesn't exist
datasetexists=$(bq ls -d | grep -w $BENCHMARK_DATASET_NAME)
if [ -n "$datasetexists" ]; then
echo -e "BigQuery dataset $BENCHMARK_DATASET_NAME already exists, let's not recreate it."
else
echo "Creating BigQuery dataset titled: $BENCHMARK_DATASET_NAME"
bq --location=US mk --dataset \
--description 'Benchmark Dataset' \
$PROJECT_ID:$BENCHMARK_DATASET_NAME
echo -e "\nHere are your current datasets:"
bq ls
fi
Explanation: If you're an experienced BigQuery user, you likely have seen these same metrics in the Web UI as well as highlighted in the red box below:
It's a matter of preference whether you do your work in the Web UI or the command line -- each has its advantages.
One major advantage of using the bq command line interface is the ability to create a script that will run the remaining 98 benchmark queries for us and log the results.
Copy the qwiklabs-resources dataset into your own GCP project
We will use the new BigQuery Transfer Service to quickly copy our large dataset from the qwiklabs-resources GCP project into your own so you can perform the benchmarking.
Create a new baseline dataset in your project
End of explanation
%%bigquery
SELECT COUNT(*) AS store_transaction_count
FROM tpcds_2t_baseline.store_sales
Explanation: Use the BigQuery Data Transfer Service to copy an existing dataset
Enable the BigQuery Data Transfer Service API
Navigate to the BigQuery console and the existing qwiklabs-resources dataset
Click Copy Dataset
In the pop-up, choose your project name and the newly created dataset name from the previous step
Click Copy
Wait for the transfer to complete
Verify you now have the baseline data in your project
Run the below query and confirm you see data. Note that if you omit the project-id ahead of the dataset name in the FROM clause, BigQuery will assume your default project.
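For example, both of these forms refer to the same table once the copy has finished — the second simply relies on your default project (the project id below is a hypothetical placeholder):
SELECT COUNT(*) FROM `your-project-id.tpcds_2t_baseline.store_sales`  -- fully qualified
SELECT COUNT(*) FROM tpcds_2t_baseline.store_sales                    -- default project assumed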
End of explanation
%%bash
# runs the SQL queries from the TPCDS benchmark
# Pull the current Google Cloud Platform project name
BQ_DATASET="tpcds_2t_baseline" # let's start by benchmarking our baseline dataset
QUERY_FILE_PATH="./sql/example_baseline_queries.sql" # the full test is on 99_baseline_queries but that will take 80+ mins to run
IFS=";"
# create perf table to keep track of run times for all 99 queries
printf "\033[32;1m Housekeeping tasks... \033[0m\n\n";
printf "Creating a reporting table perf to track how fast each query runs...";
perf_table_ddl="CREATE TABLE IF NOT EXISTS $BQ_DATASET.perf(performance_test_num int64, query_num int64, elapsed_time_sec int64, ran_on int64)"
bq rm -f $BQ_DATASET.perf
bq query --nouse_legacy_sql $perf_table_ddl
start=$(date +%s)
index=0
for select_stmt in $(<$QUERY_FILE_PATH)
do
# run the test until you hit a line with the string 'END OF BENCHMARK' in the file
if [[ "$select_stmt" == *'END OF BENCHMARK'* ]]; then
break
fi
printf "\n\033[32;1m Let's benchmark this query... \033[0m\n";
printf "$select_stmt";
SECONDS=0;
bq query --use_cache=false --nouse_legacy_sql $select_stmt # critical to turn cache off for this test
duration=$SECONDS
# get the current timestamp in seconds
ran_on=$(date +%s)
index=$((index+1))
printf "\n\033[32;1m Here's how long it took... \033[0m\n\n";
echo "Query $index ran in $(($duration / 60)) minutes and $(($duration % 60)) seconds."
printf "\n\033[32;1m Writing to our benchmark table... \033[0m\n\n";
insert_stmt="insert into $BQ_DATASET.perf(performance_test_num, query_num, elapsed_time_sec, ran_on) values($start, $index, $duration, $ran_on)"
printf "$insert_stmt"
bq query --nouse_legacy_sql $insert_stmt
done
end=$(date +%s)
printf "Benchmark test complete"
Explanation: Setup an automated test
Running each of the 99 queries manually via the Console UI would be a tedious effort. We'll show you how you can run all 99 programmatically and automatically log the output (time and GB processed) to a log file for analysis.
Below is a shell script that:
1. Accepts a BigQuery dataset to benchmark
2. Accepts a list of semi-colon separated queries to run
3. Loops through each query and calls the bq query command
4. Records the execution time into a separate BigQuery performance table perf
Execute the below statement and follow along with the results as you benchmark a few example queries (don't worry, we've already ran the full 99 recently so you won't have to).
After executing, wait 1-2 minutes for the benchmark test to complete
End of explanation
%%bigquery
SELECT * FROM tpcds_2t_baseline.perf
WHERE
# Let's only pull the results from our most recent test
performance_test_num = (SELECT MAX(performance_test_num) FROM tpcds_2t_baseline.perf)
ORDER BY ran_on
Explanation: Viewing the benchmark results
As part of the benchmark test, we stored the processing time of each query into a new perf BigQuery table. We can query that table and get some performance stats for our test.
First are each of the tests we ran:
End of explanation
%%bigquery
SELECT
TIMESTAMP_SECONDS(MAX(performance_test_num)) AS test_date,
MAX(performance_test_num) AS latest_performance_test_num,
COUNT(DISTINCT query_num) AS count_queries_benchmarked,
SUM(elapsed_time_sec) AS total_time_sec,
MIN(elapsed_time_sec) AS fastest_query_time_sec,
MAX(elapsed_time_sec) AS slowest_query_time_sec
FROM
tpcds_2t_baseline.perf
WHERE
performance_test_num = (SELECT MAX(performance_test_num) FROM tpcds_2t_baseline.perf)
Explanation: And finally, the overall statistics for the entire test:
End of explanation
%%bigquery
SELECT
TIMESTAMP_SECONDS(performance_test_num) AS test_date,
query_num,
TIMESTAMP_SECONDS(ran_on) AS query_ran_on,
TIMESTAMP_SECONDS(ran_on + elapsed_time_sec) AS query_completed_on,
elapsed_time_sec
FROM `qwiklabs-resources.tpcds_2t_baseline.perf` # public table
WHERE
# Let's only pull the results from our most recent test
performance_test_num = (SELECT MAX(performance_test_num) FROM `qwiklabs-resources.tpcds_2t_baseline.perf`)
ORDER BY ran_on
Explanation: Benchmarking all 99 queries
As we mentioned before, we already ran all 99 queries and recorded the results and made them available for you to query in a public table:
End of explanation
%%bigquery
SELECT
TIMESTAMP_SECONDS(MAX(performance_test_num)) AS test_date,
COUNT(DISTINCT query_num) AS count_queries_benchmarked,
SUM(elapsed_time_sec) AS total_time_sec,
ROUND(SUM(elapsed_time_sec)/60,2) AS total_time_min,
MIN(elapsed_time_sec) AS fastest_query_time_sec,
MAX(elapsed_time_sec) AS slowest_query_time_sec,
ROUND(AVG(elapsed_time_sec),2) AS avg_query_time_sec
FROM
`qwiklabs-resources.tpcds_2t_baseline.perf`
WHERE
performance_test_num = (SELECT MAX(performance_test_num) FROM `qwiklabs-resources.tpcds_2t_baseline.perf`)
Explanation: And the results of the complete test:
End of explanation
<END_TASK> |
15,939 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Task 1
Write a Python function that builds a list of all artists who perform tracks in the Rock genre. The list must be sorted in descending order.
Step1: Task 2
Write a Python function that creates a Students table Students(id, name, gpa). The key is id.
Step2: Let's check that the table was created
Step3: Task 3
For the function created above, add the ability to insert a list of students of the form [['Ivanov', 1.2], ['Petrov', 2.3]].
The IDs of new students must start from the maximum ID in the table + 1. (For example, if the maximum ID in the table is 10, then Petrov should get 11 and Ivanov 12.) The function should handle inserting a list of any finite length.
Get max(id) + 1
Step4: Task 4
Add a Faculties table Faculties(fac_id, name). Add a new fac_id field to the Students table with a foreign key to the faculties table.
Step5: Task 5
Write a function that updates the faculty of every student. The function must print each student's information, prompt for input to update the faculty, and perform the update. If an exception occurs during the update, the function must handle it and continue working.
First, let's add a couple of records to the faculties table
Step6: Task 6
Write a function that transfers a subset of students to a new faculty. Input | <ASSISTANT_TASK:>
Python Code:
def task1():
cursor = db.cursor()
cursor.execute('''
select distinct ar.Name
from tracks t
inner join albums al
on t.albumid = al.albumid
inner join artists ar
on al.artistid = ar.artistid
inner join genres g
on t.genreid = g.genreid
where g.name = 'Rock'
''')
ar = cursor.fetchall()
return [x[0] for x in ar]
task1()
Explanation: Task 1
Write a Python function that builds a list of all artists who perform tracks in the Rock genre. The list must be sorted in descending order.
End of explanation
def task2():
cursor=db.cursor()
cursor.execute('''
DROP TABLE IF EXISTS students''')
cursor.execute('''
CREATE TABLE Students(id INTEGER PRIMARY KEY, name TEXT, gpa NUMBER(10,2))''')
db.commit()
task2()
Explanation: Task 2
Write a Python function that creates a Students table Students(id, name, gpa). The key is id.
End of explanation
%%sql
select *
from students
Explanation: Let's check that the table was created
End of explanation
%%sql
select coalesce(max(id)+1, 1) as new_id from students
def task3(l_students):
cursor = db.cursor()
cursor.execute( '''
SELECT COALESCE(MAX(ID)+1, 1) AS new_id FROM students''')
new_id = cursor.fetchone()[0]
for i, student in enumerate(l_students):
cursor.execute('''
INSERT INTO Students(id, name, gpa) VALUES(?,?,?)''', (new_id + i, student[0], student[1]))
db.commit()
task3([['Ivanov', 3.2], ['Petrov', 4.2]])
%%sql
SELECT *
FROM Students
Explanation: Task 3
For the function created above, add the ability to insert a list of students of the form [['Ivanov', 1.2], ['Petrov', 2.3]].
The IDs of new students must start from the maximum ID in the table + 1. (For example, if the maximum ID in the table is 10, then Petrov should get 11 and Ivanov 12.) The function should handle inserting a list of any finite length.
Get max(id) + 1
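As an aside, the same insert can be done in a single call with executemany; a small sketch of an alternative to the loop above (same db connection and Students table as before):
def task3_bulk(l_students):
    cursor = db.cursor()
    cursor.execute('SELECT COALESCE(MAX(id) + 1, 1) FROM students')
    new_id = cursor.fetchone()[0]
    # build all rows first, then let sqlite3 iterate over them
    rows = [(new_id + i, name, gpa) for i, (name, gpa) in enumerate(l_students)]
    cursor.executemany('INSERT INTO Students(id, name, gpa) VALUES (?, ?, ?)', rows)
    db.commit()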
End of explanation
def task4():
cursor = db.cursor()
cursor.execute('''DROP TABLE IF EXISTS faculties''')
cursor.execute('''CREATE TABLE faculties(fac_id INTEGER PRIMARY KEY, name TEXT)''')
cursor.execute('''ALTER TABLE students ADD fac_id INTEGER REFERENCES faculties(fac_id)''')
db.commit()
task4()
%%sql
select *
from faculties
%%sql
select *
from Students
Explanation: Task 4
Add a Faculties table Faculties(fac_id, name). Add a new fac_id field to the Students table with a foreign key to the faculties table.
End of explanation
%%sql
INSERT INTO faculties(fac_id, name)
VALUES (1, 'IT'), (2, 'KIB'), (3, 'Math')
%%sql
select *
from faculties
def task5():
cursor = db.cursor()
cursor.execute('Select id, name, gpa from Students')
a = cursor.fetchall()
for x in a:
print("Введите факультет для студента {} с id = {} и gpa = {}".format(x[1], x[0], x[2]))
fac_name = input()
cursor.execute("SELECT fac_id from faculties where name = ?", (fac_name, ))
# Check whether a faculty with that name exists
try:
fac_id = cursor.fetchone()[0]
except TypeError:
continue
cursor.execute("Update students set fac_id = ? where id = ?", (fac_id, x[0],))
db.commit()
task5()
%%sql
SELECT *
FROM students
task5()
%%sql
SELECT *
FROM Students
Explanation: Task 5
Write a function that updates the faculty of every student. The function must print each student's information, prompt for input to update the faculty, and perform the update. If an exception occurs during the update, the function must handle it and continue working.
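A small alternative sketch for the lookup step: instead of catching the TypeError raised by indexing None, you can test the fetchone() result directly (assumes the same db connection and faculties table as above):
def find_fac_id(fac_name):
    cursor = db.cursor()
    cursor.execute("SELECT fac_id FROM faculties WHERE name = ?", (fac_name,))
    row = cursor.fetchone()
    return None if row is None else row[0]   # None means no faculty with that name, so skip the update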
First, let's add a couple of records to the faculties table
End of explanation
def task6(fac_name, l_id):
cursor = db.cursor()
cursor.execute( '''
SELECT COALESCE(MAX(fac_id)+1, 1) AS new_fac_id FROM faculties''')
new_id = cursor.fetchone()[0]
cursor.execute('''
INSERT INTO faculties(fac_id, name) VALUES(?,?)''', (new_id, fac_name,))
for x in l_id:
cursor.execute('''
Update students set fac_id = ? where id = ?''', (new_id, x, ))
db.commit()
task6('Hist', [1])
%%sql
select *
from students
Explanation: Task 6
Write a function that transfers a subset of students to a new faculty. Input: the faculty name and the list of students to transfer to the new faculty. Output: a new record added to the faculties table and updated records in the students table.
End of explanation
<END_TASK> |
15,940 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Bayesian Hierarchical Stacking
Step1: 1. Exploratory Data Analysis <a class="anchor" id="1"></a>
The data we have to work with looks at households in Bangladesh, some of which were affected by high levels of arsenic in their water. Would affected households want to switch to a neighbour's well?
We'll split the data into a train and test set, and then we'll train six different models to try to predict whether households would switch wells. Then, we'll see how we can stack them when predicting on the test set!
But first, let's load it in and visualise it! Each row represents a household, and the features we have available to us are
Step2: Next, we'll choose 200 observations to be part of our train set, and 1500 to be part of our test set.
Step4: 2. Prepare 6 different candidate models <a class="anchor" id="2"></a>
2.1 Feature Engineering <a class="anchor" id="2.1"></a>
First, let's add a few new columns
Step5: 2.2 Training <a class="anchor" id="2.2"></a>
Each model will be trained in the same way - with a Bernoulli likelihood and a logit link function.
Step6: 2.3 Estimate leave-one-out cross-validated score for each training point <a class="anchor" id="2.3"></a>
Rather than refitting each model 100 times, we will estimate the leave-one-out cross-validated score using LOO.
Step7: 3. Bayesian Hierarchical Stacking <a class="anchor" id="3"></a>
3.1 Prepare stacking datasets <a class="anchor" id="3.1"></a>
To determine how the stacking weights should vary across training and test sets, we will need to create "stacking datasets" which include all the features which we want the stacking weights to depend on. How should such features be included? For discrete features, this is easy, we just one-hot-encode them. But for continuous features, we need a trick. In Equation (16), the authors recommend the following
Step9: 3.2 Define stacking model <a class="anchor" id="3.2"></a>
What we seek to find is a matrix of weights $W$ with which to multiply the models' predictions. Let's define a matrix $Pred$ such that $Pred_{i,k}$ represents the prediction made for point $i$ by model $k$. Then the final prediction for point $i$ will then be
Step10: We can now extract the weights with which to weight the different models from the posterior, and then visualise how they vary across the training set.
Let's compare them with what the weights would've been if we'd just used fixed stacking weights (computed using ArviZ - see their docs for details).
Step11: 4. Evaluate on test set <a class="anchor" id="4"></a>
4.1 Stack predictions <a class="anchor" id="4.1"></a>
Now, for each model, let's evaluate the log predictive density for each point in the test set. Once we have predictions for each model, we need to think about how to combine them, such that for each test point, we get a single prediction.
We decided we'd do this in three ways
Step12: 4.2 Compare methods <a class="anchor" id="4.2"></a>
Let's compare the negative log predictive density scores on the test set (note - lower is better) | <ASSISTANT_TASK:>
Python Code:
!pip install -q numpyro@git+https://github.com/pyro-ppl/numpyro
import os
from IPython.display import set_matplotlib_formats
import arviz as az
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from scipy.interpolate import BSpline
import seaborn as sns
import jax
import jax.numpy as jnp
import numpyro
import numpyro.distributions as dist
plt.style.use("seaborn")
if "NUMPYRO_SPHINXBUILD" in os.environ:
set_matplotlib_formats("svg")
numpyro.set_host_device_count(4)
assert numpyro.__version__.startswith("0.9.2")
%matplotlib inline
Explanation: Bayesian Hierarchical Stacking: Well Switching Case Study
<figure>
<img src="https://i.imgur.com/CiUmZKx.jpeg" width="700px"/>
<figcaption>Photo by Belinda Fewings, https://unsplash.com/photos/6p-KtXCBGNw.</figcaption>
</figure>
Table of Contents
Intro
1. Exploratory Data Analysis
2. Prepare 6 Different Models
2.1 Feature Engineering
2.2 Training
3. Bayesian Hierarchical Stacking
3.1 Prepare stacking datasets
3.2 Define stacking model
4. Evaluate on test set
4.1 Stack predictions
4.2 Compare methods
Conclusion
References
Intro <a class="anchor" id="1"></a>
Suppose you have just fit 6 models to a dataset, and need to choose which one to use to make predictions on your test set. How do you choose which one to use? A couple of common tactics are:
- choose the best model based on cross-validation;
- average the models, using weights based on cross-validation scores.
In the paper Bayesian hierarchical stacking: Some models are (somewhere) useful, a new technique is introduced: average models based on weights which are allowed to vary according to the input data, based on a hierarchical structure.
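Schematically, if $f_k(x_i)$ denotes model $k$'s prediction for input $x_i$ and the weights sum to one, the two averaging schemes differ only in whether the weights may depend on the input:
$$ \hat{y}_i = \sum_{k=1}^{K} w_k \, f_k(x_i) \quad \text{(fixed-weight stacking)}, \qquad \hat{y}_i = \sum_{k=1}^{K} w_k(x_i) \, f_k(x_i) \quad \text{(hierarchical stacking)}. $$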
Here, we'll implement the first case study from that paper - readers are nonetheless encouraged to look at the original paper to find other case studies, as well as theoretical results. Code from the article (in R / Stan) can be found here.
End of explanation
wells = pd.read_csv(
"http://stat.columbia.edu/~gelman/arm/examples/arsenic/wells.dat", sep=" "
)
wells.head()
fig, ax = plt.subplots(2, 2, figsize=(12, 6))
fig.suptitle("Target variable plotted against various predictors")
sns.scatterplot(data=wells, x="arsenic", y="switch", ax=ax[0][0])
sns.scatterplot(data=wells, x="dist", y="switch", ax=ax[0][1])
sns.barplot(
data=wells.groupby("assoc")["switch"].mean().reset_index(),
x="assoc",
y="switch",
ax=ax[1][0],
)
ax[1][0].set_ylabel("Proportion switch")
sns.barplot(
data=wells.groupby("educ")["switch"].mean().reset_index(),
x="educ",
y="switch",
ax=ax[1][1],
)
ax[1][1].set_ylabel("Proportion switch");
Explanation: 1. Exploratory Data Analysis <a class="anchor" id="1"></a>
The data we have to work with looks at households in Bangladesh, some of which were affected by high levels of arsenic in their water. Would affected households want to switch to a neighbour's well?
We'll split the data into a train and test set, and then we'll train six different models to try to predict whether households would switch wells. Then, we'll see how we can stack them when predicting on the test set!
But first, let's load it in and visualise it! Each row represents a household, and the features we have available to us are:
switch: whether a household switched to another well;
arsenic: level of arsenic in drinking water;
educ: level of education of "head of household";
dist100: distance to nearest safe-drinking well;
assoc: whether the household participates in any community activities.
End of explanation
np.random.seed(1)
train_id = wells.sample(n=200).index
test_id = wells.loc[~wells.index.isin(train_id)].sample(n=1500).index
y_train = wells.loc[train_id, "switch"].to_numpy()
y_test = wells.loc[test_id, "switch"].to_numpy()
Explanation: Next, we'll choose 200 observations to be part of our train set, and 1500 to be part of our test set.
End of explanation
wells["edu0"] = wells["educ"].isin(np.arange(0, 1)).astype(int)
wells["edu1"] = wells["educ"].isin(np.arange(1, 6)).astype(int)
wells["edu2"] = wells["educ"].isin(np.arange(6, 12)).astype(int)
wells["edu3"] = wells["educ"].isin(np.arange(12, 18)).astype(int)
wells["logarsenic"] = np.log(wells["arsenic"])
wells["assoc_half"] = wells["assoc"] / 2.0
wells["as_square"] = wells["logarsenic"] ** 2
wells["as_third"] = wells["logarsenic"] ** 3
wells["dist100"] = wells["dist"] / 100.0
wells["intercept"] = 1
def bs(x, knots, degree):
Generate the B-spline basis matrix for a polynomial spline.
Parameters
----------
x
predictor variable.
knots
locations of internal breakpoints (not padded).
degree
degree of the piecewise polynomial.
Returns
-------
pd.DataFrame
Spline basis matrix.
Notes
-----
This mirrors ``bs`` from splines package in R.
padded_knots = np.hstack(
[[x.min()] * (degree + 1), knots, [x.max()] * (degree + 1)]
)
return pd.DataFrame(
BSpline(padded_knots, np.eye(len(padded_knots) - degree - 1), degree)(x)[:, 1:],
index=x.index,
)
knots = np.quantile(wells.loc[train_id, "logarsenic"], np.linspace(0.1, 0.9, num=10))
spline_arsenic = bs(wells["logarsenic"], knots=knots, degree=3)
knots = np.quantile(wells.loc[train_id, "dist100"], np.linspace(0.1, 0.9, num=10))
spline_dist = bs(wells["dist100"], knots=knots, degree=3)
features_0 = ["intercept", "dist100", "arsenic", "assoc", "edu1", "edu2", "edu3"]
features_1 = ["intercept", "dist100", "logarsenic", "assoc", "edu1", "edu2", "edu3"]
features_2 = [
"intercept",
"dist100",
"arsenic",
"as_third",
"as_square",
"assoc",
"edu1",
"edu2",
"edu3",
]
features_3 = ["intercept", "dist100", "assoc", "edu1", "edu2", "edu3"]
features_4 = ["intercept", "logarsenic", "assoc", "edu1", "edu2", "edu3"]
features_5 = ["intercept", "dist100", "logarsenic", "assoc", "educ"]
X0 = wells.loc[train_id, features_0].to_numpy()
X1 = wells.loc[train_id, features_1].to_numpy()
X2 = wells.loc[train_id, features_2].to_numpy()
X3 = (
pd.concat([wells.loc[:, features_3], spline_arsenic], axis=1)
.loc[train_id]
.to_numpy()
)
X4 = pd.concat([wells.loc[:, features_4], spline_dist], axis=1).loc[train_id].to_numpy()
X5 = wells.loc[train_id, features_5].to_numpy()
X0_test = wells.loc[test_id, features_0].to_numpy()
X1_test = wells.loc[test_id, features_1].to_numpy()
X2_test = wells.loc[test_id, features_2].to_numpy()
X3_test = (
pd.concat([wells.loc[:, features_3], spline_arsenic], axis=1)
.loc[test_id]
.to_numpy()
)
X4_test = (
pd.concat([wells.loc[:, features_4], spline_dist], axis=1).loc[test_id].to_numpy()
)
X5_test = wells.loc[test_id, features_5].to_numpy()
train_x_list = [X0, X1, X2, X3, X4, X5]
test_x_list = [X0_test, X1_test, X2_test, X3_test, X4_test, X5_test]
K = len(train_x_list)
Explanation: 2. Prepare 6 different candidate models <a class="anchor" id="2"></a>
2.1 Feature Engineering <a class="anchor" id="2.1"></a>
First, let's add a few new columns:
- edu0: whether educ is 0,
- edu1: whether educ is between 1 and 5,
- edu2: whether educ is between 6 and 11,
- edu3: whether educ is between 12 and 17,
- logarsenic: natural logarithm of arsenic,
- assoc_half: half of assoc,
- as_square: natural logarithm of arsenic, squared,
- as_third: natural logarithm of arsenic, cubed,
- dist100: dist divided by 100,
- intercept: just a columns of 1s.
We're going to start by fitting 6 different models to our train set:
logistic regression using intercept, arsenic, assoc, edu1, edu2, and edu3;
same as above, but with logarsenic instead of arsenic;
same as the first one, but with square and cubic features as well;
same as the first one, but with spline features derived from logarsenic as well;
same as the first one, but with spline features derived from dist100 as well;
same as the first one, but with educ instead of the binary edu variables.
End of explanation
def logistic(x, y=None):
beta = numpyro.sample("beta", dist.Normal(0, 3).expand([x.shape[1]]))
logits = numpyro.deterministic("logits", jnp.matmul(x, beta))
numpyro.sample(
"obs",
dist.Bernoulli(logits=logits),
obs=y,
)
fit_list = []
for k in range(K):
sampler = numpyro.infer.NUTS(logistic)
mcmc = numpyro.infer.MCMC(
sampler, num_chains=4, num_samples=1000, num_warmup=1000, progress_bar=False
)
rng_key = jax.random.fold_in(jax.random.PRNGKey(13), k)
mcmc.run(rng_key, x=train_x_list[k], y=y_train)
fit_list.append(mcmc)
Explanation: 2.2 Training <a class="anchor" id="2.2"></a>
Each model will be trained in the same way - with a Bernoulli likelihood and a logit link function.
End of explanation
def find_point_wise_loo_score(fit):
return az.loo(az.from_numpyro(fit), pointwise=True, scale="log").loo_i.values
lpd_point = np.vstack([find_point_wise_loo_score(fit) for fit in fit_list]).T
exp_lpd_point = np.exp(lpd_point)
Explanation: 2.3 Estimate leave-one-out cross-validated score for each training point <a class="anchor" id="2.3"></a>
Rather than refitting each model 100 times, we will estimate the leave-one-out cross-validated score using LOO.
End of explanation
dist100_median = wells.loc[wells.index[train_id], "dist100"].median()
logarsenic_median = wells.loc[wells.index[train_id], "logarsenic"].median()
wells["dist100_l"] = (wells["dist100"] - dist100_median).clip(upper=0)
wells["dist100_r"] = (wells["dist100"] - dist100_median).clip(lower=0)
wells["logarsenic_l"] = (wells["logarsenic"] - logarsenic_median).clip(upper=0)
wells["logarsenic_r"] = (wells["logarsenic"] - logarsenic_median).clip(lower=0)
stacking_features = [
"edu0",
"edu1",
"edu2",
"edu3",
"assoc_half",
"dist100_l",
"dist100_r",
"logarsenic_l",
"logarsenic_r",
]
X_stacking_train = wells.loc[train_id, stacking_features].to_numpy()
X_stacking_test = wells.loc[test_id, stacking_features].to_numpy()
Explanation: 3. Bayesian Hierarchical Stacking <a class="anchor" id="3"></a>
3.1 Prepare stacking datasets <a class="anchor" id="3.1"></a>
To determine how the stacking weights should vary across training and test sets, we will need to create "stacking datasets" which include all the features which we want the stacking weights to depend on. How should such features be included? For discrete features, this is easy, we just one-hot-encode them. But for continuous features, we need a trick. In Equation (16), the authors recommend the following: if you have a continuous feature f, then replace it with the following two features:
f_l: f minus the median of f, clipped above at 0;
f_r: f minus the median of f, clipped below at 0;
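As a small numeric illustration of this trick (values chosen arbitrarily):
import numpy as np
f = np.array([0.5, 1.0, 2.0, 3.5, 6.0])
median = np.median(f)                              # 2.0
f_l = np.clip(f - median, a_min=None, a_max=0)     # array([-1.5, -1. ,  0. ,  0. ,  0. ])
f_r = np.clip(f - median, a_min=0, a_max=None)     # array([0. , 0. , 0. , 1.5, 4. ])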
End of explanation
def stacking(
X,
d_discrete,
X_test,
exp_lpd_point,
tau_mu,
tau_sigma,
*,
test,
):
Get weights with which to stack candidate models' predictions.
Parameters
----------
X
Training stacking matrix: features on which stacking weights should depend, for the
training set.
d_discrete
Number of discrete features in `X` and `X_test`. The first `d_discrete` features
from these matrices should be the discrete ones, with the continuous ones coming
after them.
X_test
Test stacking matrix: features on which stacking weights should depend, for the
testing set.
exp_lpd_point
LOO score evaluated at each point in the training set, for each candidate model.
tau_mu
Hyperprior for mean of `beta`, for discrete features.
tau_sigma
Hyperprior for standard deviation of `beta`, for continuous features.
test
Whether to calculate stacking weights for test set.
Notes
-----
Naming of variables mirrors what's used in the original paper.
N = X.shape[0]
d = X.shape[1]
N_test = X_test.shape[0]
K = lpd_point.shape[1] # number of candidate models
with numpyro.plate("Candidate models", K - 1, dim=-2):
# mean effect of discrete features on stacking weights
mu = numpyro.sample("mu", dist.Normal(0, tau_mu))
# standard deviation effect of discrete features on stacking weights
sigma = numpyro.sample("sigma", dist.HalfNormal(scale=tau_sigma))
with numpyro.plate("Discrete features", d_discrete, dim=-1):
# effect of discrete features on stacking weights
tau = numpyro.sample("tau", dist.Normal(0, 1))
with numpyro.plate("Continuous features", d - d_discrete, dim=-1):
# effect of continuous features on stacking weights
beta_con = numpyro.sample("beta_con", dist.Normal(0, 1))
# effects of features on stacking weights
beta = numpyro.deterministic(
"beta", jnp.hstack([(sigma.squeeze() * tau.T + mu.squeeze()).T, beta_con])
)
assert beta.shape == (K - 1, d)
# stacking weights (in unconstrained space)
f = jnp.hstack([X @ beta.T, jnp.zeros((N, 1))])
assert f.shape == (N, K)
# log probability of LOO training scores weighted by stacking weights.
log_w = jax.nn.log_softmax(f, axis=1)
# stacking weights (constrained to sum to 1)
numpyro.deterministic("w", jnp.exp(log_w))
logp = jax.nn.logsumexp(lpd_point + log_w, axis=1)
numpyro.factor("logp", jnp.sum(logp))
if test:
# test set stacking weights (in unconstrained space)
f_test = jnp.hstack([X_test @ beta.T, jnp.zeros((N_test, 1))])
# test set stacking weights (constrained to sum to 1)
w_test = numpyro.deterministic("w_test", jax.nn.softmax(f_test, axis=1))
sampler = numpyro.infer.NUTS(stacking)
mcmc = numpyro.infer.MCMC(
sampler, num_chains=4, num_samples=1000, num_warmup=1000, progress_bar=False
)
mcmc.run(
jax.random.PRNGKey(17),
X=X_stacking_train,
d_discrete=4,
X_test=X_stacking_test,
exp_lpd_point=exp_lpd_point,
tau_mu=1.0,
tau_sigma=0.5,
test=True,
)
trace = mcmc.get_samples()
Explanation: 3.2 Define stacking model <a class="anchor" id="3.2"></a>
What we seek to find is a matrix of weights $W$ with which to multiply the models' predictions. Let's define a matrix $Pred$ such that $Pred_{i,k}$ represents the prediction made for point $i$ by model $k$. Then the final prediction for point $i$ will then be:
$$ \sum_k W_{i, k}Pred_{i,k} $$
Such a matrix $W$ would be required to have each column sum to $1$. Hence, we calculate each row $W_i$ of $W$ as:
$$ W_i = \text{softmax}(X_\text{stacking}_i \cdot \beta), $$
where $\beta$ is a matrix whose values we seek to determine. For the discrete features, $\beta$ is given a hierarchical structure over the possible inputs. Continuous features, on the other hand, get no hierarchical structure in this case study and just vary according to the input values.
Notice how, for the discrete features, a non-centered parametrisation is used. Also note that we only need to estimate K-1 columns of $\beta$, because the weights W_{i, k} will have to sum to 1 for each i.
End of explanation
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(16, 6), sharey=True)
training_stacking_weights = trace["w"].mean(axis=0)
sns.scatterplot(data=pd.DataFrame(training_stacking_weights), ax=ax[0])
fixed_weights = (
az.compare({idx: fit for idx, fit in enumerate(fit_list)}, method="stacking")
.sort_index()["weight"]
.to_numpy()
)
fixed_weights_df = pd.DataFrame(
np.repeat(
fixed_weights[jnp.newaxis, :],
len(X_stacking_train),
axis=0,
)
)
sns.scatterplot(data=fixed_weights_df, ax=ax[1])
ax[0].set_title("Training weights from Bayesian Hierarchical stacking")
ax[1].set_title("Fixed weights stacking")
ax[0].set_xlabel("Index")
ax[1].set_xlabel("Index")
fig.suptitle(
"Bayesian Hierarchical Stacking weights can vary according to the input",
fontsize=18,
)
fig.tight_layout();
Explanation: We can now extract the weights with which to weight the different models from the posterior, and then visualise how they vary across the training set.
Let's compare them with what the weights would've been if we'd just used fixed stacking weights (computed using ArviZ - see their docs for details).
End of explanation
# for each candidate model, extract the posterior predictive logits
train_preds = []
for k in range(K):
predictive = numpyro.infer.Predictive(logistic, fit_list[k].get_samples())
rng_key = jax.random.fold_in(jax.random.PRNGKey(19), k)
train_pred = predictive(rng_key, x=train_x_list[k])["logits"]
train_preds.append(train_pred.mean(axis=0))
# reshape, so we have (N, K)
train_preds = np.vstack(train_preds).T
# same as previous cell, but for test set
test_preds = []
for k in range(K):
predictive = numpyro.infer.Predictive(logistic, fit_list[k].get_samples())
rng_key = jax.random.fold_in(jax.random.PRNGKey(20), k)
test_pred = predictive(rng_key, x=test_x_list[k])["logits"]
test_preds.append(test_pred.mean(axis=0))
test_preds = np.vstack(test_preds).T
# get the stacking weights for the test set
test_stacking_weights = trace["w_test"].mean(axis=0)
# get predictions using the stacking weights
bhs_predictions = (test_stacking_weights * test_preds).sum(axis=1)
# get predictions using only the model with the best LOO score
model_selection_preds = test_preds[:, lpd_point.sum(axis=0).argmax()]
# get predictions using fixed stacking weights, dependent on the LOO score
fixed_weights_preds = (fixed_weights * test_preds).sum(axis=1)
Explanation: 4. Evaluate on test set <a class="anchor" id="4"></a>
4.1 Stack predictions <a class="anchor" id="4.1"></a>
Now, for each model, let's evaluate the log predictive density for each point in the test set. Once we have predictions for each model, we need to think about how to combine them, such that for each test point, we get a single prediction.
We decided we'd do this in three ways:
- Bayesian Hierarchical Stacking (bhs_pred);
- choosing the model with the best training set LOO score (model_selection_preds);
- fixed-weights stacking (fixed_weights_preds).
End of explanation
fig, ax = plt.subplots(figsize=(12, 6))
neg_log_pred_densities = np.vstack(
[
-dist.Bernoulli(logits=bhs_predictions).log_prob(y_test),
-dist.Bernoulli(logits=model_selection_preds).log_prob(y_test),
-dist.Bernoulli(logits=fixed_weights_preds).log_prob(y_test),
]
).T
neg_log_pred_density = pd.DataFrame(
neg_log_pred_densities,
columns=[
"Bayesian Hierarchical Stacking",
"Model selection",
"Fixed stacking weights",
],
)
sns.barplot(
data=neg_log_pred_density.reindex(
columns=neg_log_pred_density.mean(axis=0).sort_values(ascending=False).index
),
orient="h",
ax=ax,
)
ax.set_title(
"Bayesian Hierarchical Stacking performs best here", fontdict={"fontsize": 18}
)
ax.set_xlabel("Negative mean log predictive density (lower is better)");
Explanation: 4.2 Compare methods <a class="anchor" id="4.2"></a>
Let's compare the negative log predictive density scores on the test set (note - lower is better):
End of explanation
<END_TASK> |
15,941 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Paris Saclay Center for Data Science
Titanic RAMP
Step1: Exploratory data analysis
Loading the data
Step2: The original training data frame has 891 rows. In the starting kit, we give you a subset of 445 rows. Some passengers have missing information
Step3: About two thirds of the passengers perished in the event. A dummy classifier that systematically returns "0" would have an accuracy of 62%, higher than that of a random model.
Some plots
Features densities and co-evolution
A scatterplot matrix allows us to visualize
Step4: Non-linearly transformed data
The Fare variable has a very heavy tail. We can log-transform it.
Step5: Plot the bivariate distributions and marginals of two variables
Another way of visualizing relationships between variables is to plot their bivariate distributions.
Step6: Making predictions
A basic prediction workflow, using scikit-learn, will be presented below.
First, we will perform some simple preprocessing of our data
Step7: The preprocessor object created with make_column_transformer can be used in a scikit-learn pipeline. A pipeline assembles several steps together and can be used to cross validate an entire workflow. Generally, transformation steps are combined with a final estimator.
We will create a pipeline consisting of the preprocessor created above and a final estimator, LogisticRegression.
Step8: We can cross-validate our pipeline using cross_val_score. Below we will have specified cv=8 meaning KFold cross-valdiation splitting will be used, with 8 folds. The Area Under the Receiver Operating Characteristic Curve (ROC AUC) score is calculated for each split. The output score will be an array of 8 scores from each KFold. The score mean and standard of the 8 scores is printed at the end.
Step9: Testing
Once you have created a model with cross-validation scores you are happy with, you can test how well your model performs on the independent test data.
First we will read in our test data
Step10: Next we need to fit our pipeline on our training data
Step11: Now we can predict on our test data
Step12: Finally, we can calculate how well our model performed on the test data
Step13: RAMP submissions
For submitting to the RAMP site, you will need to write a submission.py file that defines a get_estimator function that returns a scikit-learn pipeline.
For example, to submit our basic example above, we would define our pipeline within the function and return the pipeline at the end. Remember to include all the necessary imports at the beginning of the file.
Step14: If you take a look at the sample submission in the directory submissions/starting_kit, you will find a file named submission.py, which has the above code in it.
You can test that the sample submission works by running ramp_test_submission in your terminal (ensure that ramp-workflow has been installed and you are in the titanic ramp kit directory). Alternatively, within this notebook you can run | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import os
import glob
import numpy as np
from scipy import io
import matplotlib.pyplot as plt
import pandas as pd
Explanation: Paris Saclay Center for Data Science
Titanic RAMP: survival prediction of Titanic passengers
Benoit Playe (Institut Curie/Mines ParisTech), Chloé-Agathe Azencott (Institut Curie/Mines ParisTech), Alex Gramfort (LTCI/Télécom ParisTech), Balázs Kégl (LAL/CNRS)
Introduction
This is an initiation project to introduce RAMP and get you to know how it works.
The goal is to develop prediction models able to identify people who survived from the sinking of the Titanic, based on gender, age, and ticketing information.
The data we will manipulate is from the Titanic kaggle challenge.
Requirements
numpy>=1.10.0
matplotlib>=1.5.0
pandas>=0.19.0
scikit-learn>=0.17 (different syntaxes for v0.17 and v0.18)
seaborn>=0.7.1
End of explanation
train_filename = 'data/train.csv'
data = pd.read_csv(train_filename)
y_train = data['Survived'].values
X_train = data.drop(['Survived', 'PassengerId'], axis=1)
X_train.head(5)
data.describe()
data.count()
Explanation: Exploratory data analysis
Loading the data
End of explanation
data.groupby('Survived').count()
Explanation: The original training data frame has 891 rows. In the starting kit, we give you a subset of 445 rows. Some passengers have missing information: in particular Age and Cabin info can be missing. The meaning of the columns is explained on the challenge website:
Predicting survival
The goal is to predict whether a passenger has survived from other known attributes. Let us group the data according to the Survived column:
End of explanation
from pandas.plotting import scatter_matrix
scatter_matrix(data.get(['Fare', 'Pclass', 'Age']), alpha=0.2,
figsize=(8, 8), diagonal='kde');
Explanation: About two thirds of the passengers perished in the event. A dummy classifier that systematically returns "0" would have an accuracy of 62%, higher than that of a random model.
Some plots
Features densities and co-evolution
A scatterplot matrix allows us to visualize:
* on the diagonal, the density estimation for each feature
* on each of the off-diagonal plots, a scatterplot between two features. Each dot represents an instance.
End of explanation
data_plot = data.get(['Age', 'Survived'])
data_plot = data.assign(LogFare=lambda x : np.log(x.Fare + 10.))
scatter_matrix(data_plot.get(['Age', 'LogFare']), alpha=0.2, figsize=(8, 8), diagonal='kde');
data_plot.plot(kind='scatter', x='Age', y='LogFare', c='Survived', s=50, cmap=plt.cm.Paired);
Explanation: Non-linearly transformed data
The Fare variable has a very heavy tail. We can log-transform it.
End of explanation
import seaborn as sns
sns.set()
sns.set_style("whitegrid")
sns.jointplot(data_plot.Age[data_plot.Survived == 1],
data_plot.LogFare[data_plot.Survived == 1],
kind="kde", size=7, space=0, color="b");
sns.jointplot(data_plot.Age[data_plot.Survived == 0],
data_plot.LogFare[data_plot.Survived == 0],
kind="kde", size=7, space=0, color="y");
Explanation: Plot the bivariate distributions and marginals of two variables
Another way of visualizing relationships between variables is to plot their bivariate distributions.
End of explanation
from sklearn.compose import make_column_transformer
from sklearn.preprocessing import OneHotEncoder
from sklearn.impute import SimpleImputer
categorical_cols = ['Sex', 'Pclass', 'Embarked']
numerical_cols = ['Age', 'SibSp', 'Parch', 'Fare']
preprocessor = make_column_transformer(
(OneHotEncoder(handle_unknown='ignore'), categorical_cols),
(SimpleImputer(strategy='constant', fill_value=-1), numerical_cols),
)
Explanation: Making predictions
A basic prediction workflow, using scikit-learn, will be presented below.
First, we will perform some simple preprocessing of our data:
one-hot encode the categorical features: Sex, Pclass, Embarked
for the numerical columns Age, SibSp, Parch, Fare, fill in missing values with a default value (-1)
all remaining columns will be dropped
This can be done succinctly with make_column_transformer, which applies specific transformations to specific columns.
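As an optional sanity check, you can fit the transformer on its own and inspect the shape of its output — the categorical columns expand into one-hot indicator columns, while the numerical ones remain as four imputed columns:
X_train_transformed = preprocessor.fit_transform(X_train)
X_train_transformed.shape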
End of explanation
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression
pipeline = Pipeline([
('transformer', preprocessor),
('classifier', LogisticRegression()),
])
Explanation: The preprocessor object created with make_column_transformer can be used in a scikit-learn pipeline. A pipeline assembles several steps together and can be used to cross validate an entire workflow. Generally, transformation steps are combined with a final estimator.
We will create a pipeline consisting of the preprocessor created above and a final estimator, LogisticRegression.
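Because the whole workflow lives in a single Pipeline object, hyperparameters of any step can be tuned by referring to the step name; a minimal sketch (the grid values here are arbitrary):
from sklearn.model_selection import GridSearchCV
param_grid = {'classifier__C': [0.01, 0.1, 1.0, 10.0]}   # 'classifier' is the step name defined above
search = GridSearchCV(pipeline, param_grid, cv=8, scoring='roc_auc')
# search.fit(X_train, y_train); search.best_params_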
End of explanation
from sklearn.model_selection import cross_val_score
scores = cross_val_score(pipeline, X_train, y_train, cv=8, scoring='roc_auc')
print("mean: %e (+/- %e)" % (scores.mean(), scores.std()))
Explanation: We can cross-validate our pipeline using cross_val_score. Below we specify cv=8, meaning KFold cross-validation splitting will be used, with 8 folds. The Area Under the Receiver Operating Characteristic Curve (ROC AUC) score is calculated for each split. The output will be an array of 8 scores, one from each fold. The mean and standard deviation of the 8 scores are printed at the end.
End of explanation
# test_filename = 'data/test.csv'
# data = pd.read_csv(test_filename)
# y_test = data['Survived'].values
# X_test = data.drop(['Survived', 'PassengerId'], axis=1)
# X_test.head(5)
Explanation: Testing
Once you have created a model with cross-validation scores you are happy with, you can test how well your model performs on the independent test data.
First we will read in our test data:
End of explanation
# clf = pipeline.fit(X_train, y_train)
Explanation: Next we need to fit our pipeline on our training data:
End of explanation
# y_pred = pipeline.predict(X_test)
Explanation: Now we can predict on our test data:
End of explanation
# from sklearn.metrics import roc_auc_score
# score = roc_auc_score(y_test, y_pred)
# score
Explanation: Finally, we can calculate how well our model performed on the test data:
End of explanation
from sklearn.compose import make_column_transformer
from sklearn.preprocessing import OneHotEncoder
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression
def get_estimator():
categorical_cols = ['Sex', 'Pclass', 'Embarked']
numerical_cols = ['Age', 'SibSp', 'Parch', 'Fare']
preprocessor = make_column_transformer(
(OneHotEncoder(handle_unknown='ignore'), categorical_cols),
(SimpleImputer(strategy='constant', fill_value=-1), numerical_cols),
)
pipeline = Pipeline([
('transformer', preprocessor),
('classifier', LogisticRegression()),
])
return pipeline
Explanation: RAMP submissions
For submitting to the RAMP site, you will need to write a submission.py file that defines a get_estimator function that returns a scikit-learn pipeline.
For example, to submit our basic example above, we would define our pipeline within the function and return the pipeline at the end. Remember to include all the necessary imports at the beginning of the file.
End of explanation
# !ramp_test_submission
Explanation: If you take a look at the sample submission in the directory submissions/starting_kit, you will find a file named submission.py, which has the above code in it.
You can test that the sample submission works by running ramp_test_submission in your terminal (ensure that ramp-workflow has been installed and you are in the titanic ramp kit directory). Alternatively, within this notebook you can run:
End of explanation
<END_TASK> |
15,942 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
QuTiP Lecture
Step1: Introduction
The quantum two-level system (TLS) is the simplest possible model for quantum light-matter interaction. In the version we simulate here, the system is driven by a continuous-mode coherent state, whose dipolar interaction with the system is represented by the following Hamiltonain
$$ H =\hbar \omega_0 \sigma^\dagger \sigma + \frac{\hbar\Omega(t)}{2}\left( \sigma\textrm{e}^{-i\omega_dt} + \sigma^\dagger \textrm{e}^{i\omega_dt}\right),$$
where $\omega_0$ is the system's transition frequency, $\sigma$ is the system's atomic lowering operator, $\omega_d$ is the coherent state's center frequency, and $\Omega(t)$ is the coherent state's driving strength.
The time-dependence can be removed to simplify the simulation by a rotating frame transformation, and is particularly simple when the driving field is resonant with the transition frequency ($\omega_d=\omega_0$). Then,
$$ H_r =\frac{\hbar\Omega(t)}{2}\left( \sigma+ \sigma^\dagger \right).$$
Problem parameters
We will explore emission from the two-level system under two different regimes
Step2: Setup the operators, Hamiltonian, and initial state
Step3: Calculate the emission flux
We evolve the system with the Lindblad master equation solver, and we request that the expectation values of the number operator $\hat{n}=\sigma^{\dagger} \sigma$ are returned by the solver. If the probability of two photodetections were negligible over the course of the pulse, then $\langle \hat{n}(t) \rangle$ would be the probability density of a detection occuring on an ideal detector at time $t$.
Step4: Visualize the emission flux
We plot the emission flux from two different three-level systems. The system labelled 'exponential wavepacket' was excited with a short pulse, while the system labelled 'Gaussian wavepacket' was excited with a long pulse.
Step5: Calculate the correlators involved in two-photon interference
When two indistinguishable photons impinge on two ports of a beamsplitter at the same time, they exit together. Such a configuration, with intensity detectors at each output, is known as a Hong-Ou-Mandel interferometer. This interferometer measures two-photon interference in the pulse-wise cross-correlation of the detection records of its photon counters.
If each of the two interfering wavepackets has unity average photon number, then the formula for this normalized cross-correlation is
$$ g^{(2)}\textrm{HOM}[0] = \frac{1}{4}\left(g^{(2)}{aa}[0] + g^{(2)}_{bb}[0])\right) + \frac{1}{2}\left(1-\textrm{Re}\iint \mathop{\textrm{d} t} \mathop{\textrm{d} t'} \left[G^{(1)}_a(t,t')\right]^*G^{(1)}_b(t,t')\right) ,$$
where $a$ and $b$ label the wavepackets, $g^{(2)}{aa(bb)}[0]$ are the sources' measured degrees of second-order coherence, and $G^{(1)}{a(b)}(t,t')$ are the sources' first-order optical coherences. Specifically
$$ G^{(1)}{a(b)}(t,t') = \gamma \langle \sigma{a(b)}^\dagger(t) \sigma_{a(b)}(t') \rangle,$$
which we will calculate in this notebook with the master equation solver and quantum regression theorem. Importantly, if $g^{(2)}\textrm{HOM}[0]=0$ then the photons always exit an output port of the beamsplitter together. Note
Step6: Interpolate these functions, in preparation for time delays
Step7: Calculate measured degrees of HOM cross-correlation
Step8: Visualize the two-photon interference visibilities
Here, the exponential wavepackets show good HOM interference visibility at zero delay, dipping down to almost zero (red). Meanwhile, in the case of the Gaussian wavepackets, the significant probability of re-excitation (due to $g^{(2)}_{aa}[0]=0.44$) partially destroys the HOM interference (blue); the HOM dip at zero delay still is below the nonclassical threshold (dashed line). However, if the Gaussian and exponential wavepackets, each generated by two-level systems, are interfered with one-another then the HOM interference is not below the nonclassical threshold (purple). This is a result of the re-excitation action that scrambles the phase of the first-order coherence.
Step9: Versions | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
import numpy as np
from scipy.interpolate import interp2d
from qutip import *
Explanation: QuTiP Lecture: Pulse-wise two-photon interference of emission from a two-level system
K.A. Fischer, Stanford University
This Jupyter notebook demonstrates how to simulate the two-photon interference between two potentially identical two-level systems, using QuTiP: The Quantum Toolbox in Python. The purpose is to help characterize the quality of a two-level system as a single-photon source and the capability of two such sources to interfere; ideal pulsed single-photon sources have both zero second-order coherence and complete first-order coherence, and the sources must match in spatio-temporal overlap to be indistinguishable. In this notebook, we will assume the spatial modes are comparable so we can focus on temporal effects. This notebook closely follows an example from my simulation paper, <a href="http://dx.doi.org/10.1088/1367-2630/18/11/113053">Dynamical modeling of pulsed two-photon interference</a>, published as New J. Phys. 18 113053 (2016).
For more information about QuTiP see the project web page: http://qutip.org/
End of explanation
# shared parameters
gamma = 1 # decay rate
tlist = np.linspace(0, 13, 300)
taulist = tlist
# parameters for TLS with exponential shape wavepacket (short pulse)
tp_e = 0.060 # Gaussian pulse parameter
Om_e = 19.40 # driving strength
t_offset_e = 0.405
pulse_shape_e = Om_e/2 * np.exp(-(tlist - t_offset_e) ** 2 /
(2 * tp_e ** 2))
# parameters for TLS with Gaussian shape wavepacket (long pulse)
tp_G = 2.000 # Gaussian pulse parameter
Om_G = 0.702 # driving strength
t_offset_G = 5
pulse_shape_G = Om_G/2 * np.exp(-(tlist - t_offset_G) ** 2 /
(2 * tp_G ** 2))
Explanation: Introduction
The quantum two-level system (TLS) is the simplest possible model for quantum light-matter interaction. In the version we simulate here, the system is driven by a continuous-mode coherent state, whose dipolar interaction with the system is represented by the following Hamiltonian
$$ H =\hbar \omega_0 \sigma^\dagger \sigma + \frac{\hbar\Omega(t)}{2}\left( \sigma\textrm{e}^{-i\omega_dt} + \sigma^\dagger \textrm{e}^{i\omega_dt}\right),$$
where $\omega_0$ is the system's transition frequency, $\sigma$ is the system's atomic lowering operator, $\omega_d$ is the coherent state's center frequency, and $\Omega(t)$ is the coherent state's driving strength.
The time-dependence can be removed to simplify the simulation by a rotating frame transformation, and is particularly simple when the driving field is resonant with the transition frequency ($\omega_d=\omega_0$). Then,
$$ H_r =\frac{\hbar\Omega(t)}{2}\left( \sigma+ \sigma^\dagger \right).$$
Problem parameters
We will explore emission from the two-level system under two different regimes: under excitation by a short pulse which gives rise to an exponential wavepacket and under excitation by a long pulse which gives rise to a Gaussian wavepacket. (Short and long are relative to the spontaneous emission time of the atomic transition.) In both cases, the driving strengths are chosen such that the expected number of photodetections is unity, i.e.
$$ \gamma\int \langle \sigma^\dagger (t) \sigma(t)\rangle \mathop{\textrm{d}t}=1 .$$
As a result, we can compare the statistics of the emission directly and the normalizations become trivial.
Note, we use units where $\hbar=1$.
End of explanation
# initial state
psi0 = fock(2, 0) # ground state
# operators
sm = destroy(2) # atomic lowering operator
n = [sm.dag()*sm] # number operator
# Hamiltonian
H_I = sm + sm.dag()
H_e = [[H_I, pulse_shape_e]]
H_G = [[H_I, pulse_shape_G]]
# collapse operator that describes dissipation
c_ops = [np.sqrt(gamma) * sm] # represents spontaneous emission
Explanation: Setup the operators, Hamiltonian, and initial state
End of explanation
n_e = mesolve(H_e, psi0, tlist, c_ops, n).expect[0]
n_G = mesolve(H_G, psi0, tlist, c_ops, n).expect[0]
Explanation: Calculate the emission flux
We evolve the system with the Lindblad master equation solver, and we request that the expectation values of the number operator $\hat{n}=\sigma^{\dagger} \sigma$ are returned by the solver. If the probability of two photodetections were negligible over the course of the pulse, then $\langle \hat{n}(t) \rangle$ would be the probability density of a detection occurring on an ideal detector at time $t$.
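As a quick numerical check of that normalization (with $\gamma=1$ the expectation value is the flux itself), the time-integrated flux of each wavepacket should be close to one photon:
print(np.trapz(n_e, tlist), np.trapz(n_G, tlist))   # both values should be approximately 1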
End of explanation
fig, ax = plt.subplots(figsize=(8,5))
ax.plot(tlist, n_e, 'r', label="exponential wavepacket")
ax.plot(tlist, n_G, 'b', label="Gaussian wavepacket")
ax.legend()
ax.set_xlim(0, 13)
ax.set_ylim(0, 1)
ax.set_xlabel('Time, $t$ [$1/\gamma$]')
ax.set_ylabel('Emission flux [$\gamma$]')
ax.set_title('TLS emission shapes');
Explanation: Visualize the emission flux
We plot the emission flux from two different three-level systems. The system labelled 'exponential wavepacket' was excited with a short pulse, while the system labelled 'Gaussian wavepacket' was excited with a long pulse.
End of explanation
# specify relevant operators to calculate the correlation
# <A(t+tau)B(t)>
a_op = sm.dag()
b_op = sm
# calculate two-time correlations
G1_t_tau_e = correlation_2op_2t(H_e, psi0, tlist, taulist, c_ops,
a_op, b_op, reverse=True)
G1_t_tau_e_r = correlation_2op_2t(H_e, psi0, tlist, taulist, c_ops,
a_op, b_op)
G1_t_tau_G = correlation_2op_2t(H_G, psi0, tlist, taulist, c_ops,
a_op, b_op, reverse=True)
G1_t_tau_G_r = correlation_2op_2t(H_G, psi0, tlist, taulist, c_ops,
a_op, b_op)
# g^(2)[0] values calculated for the sources in question in the
# notebook 'Pulse-wise second-order optical coherences of emission
# from a two-level system'
g20_e = 0.03
g20_G = 0.44
Explanation: Calculate the correlators involved in two-photon interference
When two indistinguishable photons impinge on two ports of a beamsplitter at the same time, they exit together. Such a configuration, with intensity detectors at each output, is known as a Hong-Ou-Mandel interferometer. This interferometer measures two-photon interference in the pulse-wise cross-correlation of the detection records of its photon counters.
If each of the two interfering wavepackets has unity average photon number, then the formula for this normalized cross-correlation is
$$ g^{(2)}_\textrm{HOM}[0] = \frac{1}{4}\left(g^{(2)}_{aa}[0] + g^{(2)}_{bb}[0]\right) + \frac{1}{2}\left(1-\textrm{Re}\iint \mathop{\textrm{d} t} \mathop{\textrm{d} t'} \left[G^{(1)}_a(t,t')\right]^*G^{(1)}_b(t,t')\right) ,$$
where $a$ and $b$ label the wavepackets, $g^{(2)}_{aa(bb)}[0]$ are the sources' measured degrees of second-order coherence, and $G^{(1)}_{a(b)}(t,t')$ are the sources' first-order optical coherences. Specifically
$$ G^{(1)}_{a(b)}(t,t') = \gamma \langle \sigma_{a(b)}^\dagger(t) \sigma_{a(b)}(t') \rangle,$$
which we will calculate in this notebook with the master equation solver and quantum regression theorem. Importantly, if $g^{(2)}_\textrm{HOM}[0]=0$ then the photons always exit an output port of the beamsplitter together. Note: for the sources in question $g^{(2)}_{aa(bb)}[0]$ was calculated in the QuTiP example, <i>Pulse-wise second-order optical coherences of emission from a two-level system</i>.
End of explanation
t_delay_list = np.linspace(-5, 0, 50)
TAULIST, TLIST = np.meshgrid(taulist, tlist)
c1_e = interp2d(taulist, tlist, np.real(G1_t_tau_e))
c1_e_f = lambda tau, t: c1_e(tau, t)
c2_e = interp2d(taulist, tlist, np.real(G1_t_tau_e_r))
c2_e_f = lambda tau, t: c2_e(tau, t)
c1_G = interp2d(taulist, tlist, np.real(G1_t_tau_G))
c1_G_f = lambda tau, t: c1_G(tau, t)
c2_G = interp2d(taulist, tlist, np.real(G1_t_tau_G_r))
c2_G_f = lambda tau, t: c2_G(tau, t)
Explanation: Interpolate these functions, in preparation for time delays
End of explanation
# two delayed exponential wavepackets interfere
def g2HOM_exponential(t_delay):
corr_e = np.array(
[[c1_e_f(tau, t)[0] * c2_e_f(tau, t - t_delay)[0]
for tau in taulist]
for t in tlist]
)
return g20_e/2 + 1/2*abs(1 -
2 * np.trapz(np.trapz(corr_e, TLIST, axis=0), taulist)
)
g2HOM_e = parallel_map(g2HOM_exponential, t_delay_list)
# two delayed Gaussian wavepackets interfere
def g2HOM_Gaussian(t_delay):
corr_G = np.array(
[[c1_G_f(tau, t)[0] * c2_G_f(tau, t - t_delay)[0]
for tau in taulist]
for t in tlist]
)
return g20_G/2 + 1/2*abs(1 -
2 * np.trapz(np.trapz(corr_G, TLIST, axis=0), taulist)
)
g2HOM_G = parallel_map(g2HOM_Gaussian, t_delay_list)
# a delayed Gaussian wavepacket interferes with an exponential
# wavepacket
def g2HOM_Gaussian_exp(t_delay):
corr_Ge = np.array(
[[c1_e_f(tau, t)[0] * c2_G_f(tau, t - t_delay)[0]
for tau in taulist]
for t in tlist]
)
return (g20_e + g20_G)/4 + 1/2*abs(1 -
2 * np.trapz(np.trapz(corr_Ge, TLIST, axis=0), taulist)
)
g2HOM_Ge = parallel_map(g2HOM_Gaussian_exp, t_delay_list + 5.45)
Explanation: Calculate measured degrees of HOM cross-correlation
End of explanation
fig, ax = plt.subplots(figsize=(8,5))
ax.plot(np.concatenate([t_delay_list, -t_delay_list[-2::-1]]),
np.concatenate([g2HOM_e, g2HOM_e[-2::-1]]), 'r',
label="Two exponential")
ax.plot(np.concatenate([t_delay_list, -t_delay_list[-2::-1]]),
np.concatenate([g2HOM_G, g2HOM_G[-2::-1]]), 'b',
label="Two Gaussian")
ax.plot(np.concatenate([t_delay_list, -t_delay_list[-2::-1]]),
np.concatenate([g2HOM_Ge, g2HOM_Ge[-2::-1]]), 'purple',
label="One exponential &\n one Gaussian")
ax.legend(loc=3)
ax.set_xlim(-5, 5)
ax.axhline(y=0.5, color='k', linestyle='dashed')
ax.set_xlabel(r'Wavepacket delay [$1/\gamma$]')
ax.set_ylabel('$g^{(2)}_{HOM}[0]$')
ax.set_title('Two-photon interference visibilities for '+
'delayed wavepackets');
Explanation: Visualize the two-photon interference visibilities
Here, the exponential wavepackets show good HOM interference visibility at zero delay, dipping down to almost zero (red). Meanwhile, in the case of the Gaussian wavepackets, the significant probability of re-excitation (due to $g^{(2)}_{aa}[0]=0.44$) partially destroys the HOM interference (blue); the HOM dip at zero delay is still below the nonclassical threshold (dashed line). However, if the Gaussian and exponential wavepackets, each generated by two-level systems, are interfered with one another then the HOM interference is not below the nonclassical threshold (purple). This is a result of the re-excitation action that scrambles the phase of the first-order coherence.
End of explanation
from qutip.ipynbtools import version_table
version_table()
Explanation: Versions
End of explanation
<END_TASK> |
15,943 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Comparing surrogate models
Tim Head, July 2016.
Step1: Bayesian optimization or sequential model-based optimization uses a surrogate model
to model the expensive to evaluate function func. There are several choices
for what kind of surrogate model to use. This example compares the performance of
Step2: This shows the value of the two-dimensional branin function and the three minima.
Objective
The objective of this example is to find one of these minima in as few iterations
as possible. One iteration is defined as one call to the branin function.
We will evaluate each model several times using a different seed for the
random number generator. Then we compare the average performance of these
models. This makes the comparison more robust against models that get
"lucky".
Step3: Note that this can take a few minutes. | <ASSISTANT_TASK:>
Python Code:
import numpy as np
np.random.seed(123)
%matplotlib inline
import matplotlib.pyplot as plt
plt.rcParams["figure.figsize"] = (10, 6)
plt.set_cmap("viridis")
Explanation: Comparing surrogate models
Tim Head, July 2016.
End of explanation
from skopt.benchmarks import branin as _branin
def branin(x, noise_level=0.):
return _branin(x) + noise_level * np.random.randn()
from matplotlib.colors import LogNorm
def plot_branin():
fig, ax = plt.subplots()
x1_values = np.linspace(-5, 10, 100)
x2_values = np.linspace(0, 15, 100)
x_ax, y_ax = np.meshgrid(x1_values, x2_values)
vals = np.c_[x_ax.ravel(), y_ax.ravel()]
fx = np.reshape([branin(val) for val in vals], (100, 100))
cm = ax.pcolormesh(x_ax, y_ax, fx,
norm=LogNorm(vmin=fx.min(),
vmax=fx.max()))
minima = np.array([[-np.pi, 12.275], [+np.pi, 2.275], [9.42478, 2.475]])
ax.plot(minima[:, 0], minima[:, 1], "r.", markersize=14, lw=0, label="Minima")
cb = fig.colorbar(cm)
cb.set_label("f(x)")
ax.legend(loc="best", numpoints=1)
ax.set_xlabel("X1")
ax.set_xlim([-5, 10])
ax.set_ylabel("X2")
ax.set_ylim([0, 15])
plot_branin()
Explanation: Bayesian optimization or sequential model-based optimization uses a surrogate model
to model the expensive-to-evaluate function func. There are several choices
for what kind of surrogate model to use. This example compares the performance of:
gaussian processes,
extra trees, and
random forests
as surrogate models. A purely random optimization strategy is used as a baseline.
Toy model
We will use the branin function as toy model for the expensive function. In
a real world application this function would be unknown and expensive to evaluate.
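For orientation, a single optimizer call in isolation looks roughly like the sketch below; the argument values are illustrative only and are not the settings used in the comparison harness that follows.
# illustrative stand-alone call, separate from the comparison below
from skopt import gp_minimize
res = gp_minimize(branin,                       # function to minimize
                  [(-5.0, 10.0), (0.0, 15.0)],  # bounds on x1 and x2
                  n_calls=15,                   # budget of branin evaluations
                  random_state=0)
print(res.x, res.fun)                           # best point and value found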
End of explanation
from functools import partial
from skopt import gp_minimize, forest_minimize, dummy_minimize
func = partial(branin, noise_level=2.0)
bounds = [(-5.0, 10.0), (0.0, 15.0)]
x0 = [2.5, 7.5]
n_calls = 80
def run(minimizer, n_iter=20):
return [minimizer(func, bounds, x0=x0, n_calls=n_calls, random_state=n)
for n in range(n_iter)]
# Random search
dummy_res = run(dummy_minimize)
# Gaussian processes
gp_res = run(gp_minimize)
# Random forest
rf_res = run(partial(forest_minimize, base_estimator="rf"))
# Extra trees
et_res = run(partial(forest_minimize, base_estimator="et"))
Explanation: This shows the value of the two-dimensional branin function and the three minima.
Objective
The objective of this example is to find one of these minima in as few iterations
as possible. One iteration is defined as one call to the branin function.
We will evaluate each model several times using a different seed for the
random number generator. Then we compare the average performance of these
models. This makes the comparison more robust against models that get
"lucky".
End of explanation
from skopt.plots import plot_convergence
plot_convergence(("dummy_minimize", dummy_res),
("gp_minimize", gp_res),
("forest_minimize('rf')", rf_res),
("forest_minimize('et)", et_res),
true_minimum=0.397887, yscale="log")
Explanation: Note that this can take a few minutes.
End of explanation
<END_TASK> |
15,944 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
ES-DOC CMIP6 Model Properties - Land
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Description
Is Required
Step7: 1.4. Land Atmosphere Flux Exchanges
Is Required
Step8: 1.5. Atmospheric Coupling Treatment
Is Required
Step9: 1.6. Land Cover
Is Required
Step10: 1.7. Land Cover Change
Is Required
Step11: 1.8. Tiling
Is Required
Step12: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required
Step13: 2.2. Water
Is Required
Step14: 2.3. Carbon
Is Required
Step15: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required
Step16: 3.2. Time Step
Is Required
Step17: 3.3. Timestepping Method
Is Required
Step18: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required
Step19: 4.2. Code Version
Is Required
Step20: 4.3. Code Languages
Is Required
Step21: 5. Grid
Land surface grid
5.1. Overview
Is Required
Step22: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required
Step23: 6.2. Matches Atmosphere Grid
Is Required
Step24: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required
Step25: 7.2. Total Depth
Is Required
Step26: 8. Soil
Land surface soil
8.1. Overview
Is Required
Step27: 8.2. Heat Water Coupling
Is Required
Step28: 8.3. Number Of Soil layers
Is Required
Step29: 8.4. Prognostic Variables
Is Required
Step30: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required
Step31: 9.2. Structure
Is Required
Step32: 9.3. Texture
Is Required
Step33: 9.4. Organic Matter
Is Required
Step34: 9.5. Albedo
Is Required
Step35: 9.6. Water Table
Is Required
Step36: 9.7. Continuously Varying Soil Depth
Is Required
Step37: 9.8. Soil Depth
Is Required
Step38: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required
Step39: 10.2. Functions
Is Required
Step40: 10.3. Direct Diffuse
Is Required
Step41: 10.4. Number Of Wavelength Bands
Is Required
Step42: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required
Step43: 11.2. Time Step
Is Required
Step44: 11.3. Tiling
Is Required
Step45: 11.4. Vertical Discretisation
Is Required
Step46: 11.5. Number Of Ground Water Layers
Is Required
Step47: 11.6. Lateral Connectivity
Is Required
Step48: 11.7. Method
Is Required
Step49: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required
Step50: 12.2. Ice Storage Method
Is Required
Step51: 12.3. Permafrost
Is Required
Step52: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required
Step53: 13.2. Types
Is Required
Step54: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required
Step55: 14.2. Time Step
Is Required
Step56: 14.3. Tiling
Is Required
Step57: 14.4. Vertical Discretisation
Is Required
Step58: 14.5. Heat Storage
Is Required
Step59: 14.6. Processes
Is Required
Step60: 15. Snow
Land surface snow
15.1. Overview
Is Required
Step61: 15.2. Tiling
Is Required
Step62: 15.3. Number Of Snow Layers
Is Required
Step63: 15.4. Density
Is Required
Step64: 15.5. Water Equivalent
Is Required
Step65: 15.6. Heat Content
Is Required
Step66: 15.7. Temperature
Is Required
Step67: 15.8. Liquid Water Content
Is Required
Step68: 15.9. Snow Cover Fractions
Is Required
Step69: 15.10. Processes
Is Required
Step70: 15.11. Prognostic Variables
Is Required
Step71: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required
Step72: 16.2. Functions
Is Required
Step73: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required
Step74: 17.2. Time Step
Is Required
Step75: 17.3. Dynamic Vegetation
Is Required
Step76: 17.4. Tiling
Is Required
Step77: 17.5. Vegetation Representation
Is Required
Step78: 17.6. Vegetation Types
Is Required
Step79: 17.7. Biome Types
Is Required
Step80: 17.8. Vegetation Time Variation
Is Required
Step81: 17.9. Vegetation Map
Is Required
Step82: 17.10. Interception
Is Required
Step83: 17.11. Phenology
Is Required
Step84: 17.12. Phenology Description
Is Required
Step85: 17.13. Leaf Area Index
Is Required
Step86: 17.14. Leaf Area Index Description
Is Required
Step87: 17.15. Biomass
Is Required
Step88: 17.16. Biomass Description
Is Required
Step89: 17.17. Biogeography
Is Required
Step90: 17.18. Biogeography Description
Is Required
Step91: 17.19. Stomatal Resistance
Is Required
Step92: 17.20. Stomatal Resistance Description
Is Required
Step93: 17.21. Prognostic Variables
Is Required
Step94: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required
Step95: 18.2. Tiling
Is Required
Step96: 18.3. Number Of Surface Temperatures
Is Required
Step97: 18.4. Evaporation
Is Required
Step98: 18.5. Processes
Is Required
Step99: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required
Step100: 19.2. Tiling
Is Required
Step101: 19.3. Time Step
Is Required
Step102: 19.4. Anthropogenic Carbon
Is Required
Step103: 19.5. Prognostic Variables
Is Required
Step104: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required
Step105: 20.2. Carbon Pools
Is Required
Step106: 20.3. Forest Stand Dynamics
Is Required
Step107: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required
Step108: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required
Step109: 22.2. Growth Respiration
Is Required
Step110: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required
Step111: 23.2. Allocation Bins
Is Required
Step112: 23.3. Allocation Fractions
Is Required
Step113: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required
Step114: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required
Step115: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required
Step116: 26.2. Carbon Pools
Is Required
Step117: 26.3. Decomposition
Is Required
Step118: 26.4. Method
Is Required
Step119: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required
Step120: 27.2. Carbon Pools
Is Required
Step121: 27.3. Decomposition
Is Required
Step122: 27.4. Method
Is Required
Step123: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required
Step124: 28.2. Emitted Greenhouse Gases
Is Required
Step125: 28.3. Decomposition
Is Required
Step126: 28.4. Impact On Soil Properties
Is Required
Step127: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required
Step128: 29.2. Tiling
Is Required
Step129: 29.3. Time Step
Is Required
Step130: 29.4. Prognostic Variables
Is Required
Step131: 30. River Routing
Land surface river routing
30.1. Overview
Is Required
Step132: 30.2. Tiling
Is Required
Step133: 30.3. Time Step
Is Required
Step134: 30.4. Grid Inherited From Land Surface
Is Required
Step135: 30.5. Grid Description
Is Required
Step136: 30.6. Number Of Reservoirs
Is Required
Step137: 30.7. Water Re Evaporation
Is Required
Step138: 30.8. Coupled To Atmosphere
Is Required
Step139: 30.9. Coupled To Land
Is Required
Step140: 30.10. Quantities Exchanged With Atmosphere
Is Required
Step141: 30.11. Basin Flow Direction Map
Is Required
Step142: 30.12. Flooding
Is Required
Step143: 30.13. Prognostic Variables
Is Required
Step144: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required
Step145: 31.2. Quantities Transported
Is Required
Step146: 32. Lakes
Land surface lakes
32.1. Overview
Is Required
Step147: 32.2. Coupling With Rivers
Is Required
Step148: 32.3. Time Step
Is Required
Step149: 32.4. Quantities Exchanged With Rivers
Is Required
Step150: 32.5. Vertical Grid
Is Required
Step151: 32.6. Prognostic Variables
Is Required
Step152: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required
Step153: 33.2. Albedo
Is Required
Step154: 33.3. Dynamics
Is Required
Step155: 33.4. Dynamic Lake Extent
Is Required
Step156: 33.5. Endorheic Basins
Is Required
Step157: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required | <ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'test-institute-3', 'sandbox-1', 'land')
Explanation: ES-DOC CMIP6 Model Properties - Land
MIP Era: CMIP6
Institute: TEST-INSTITUTE-3
Source ID: SANDBOX-1
Topic: Land
Sub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes.
Properties: 154 (96 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:46
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
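# For illustration only -- hypothetical placeholder values, not real authors:
# DOC.set_author("Jane Doe", "jane.doe@example.org")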
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code (e.g. MOSES2.2)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.3. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the processes modelled (e.g. dynamic vegetation, prognostic albedo, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
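# For illustration only -- a hypothetical selection from the valid choices above:
# DOC.set_value("water")
# DOC.set_value("energy")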
Explanation: 1.4. Land Atmosphere Flux Exchanges
Is Required: FALSE Type: ENUM Cardinality: 0.N
Fluxes exchanged with the atmosphere.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Atmospheric Coupling Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.6. Land Cover
Is Required: TRUE Type: ENUM Cardinality: 1.N
Types of land cover defined in the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.7. Land Cover Change
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how land cover change is managed (e.g. the use of net or gross transitions)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.8. Tiling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how energy is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Water
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how water is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Carbon
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a time step dependent on the frequency of atmosphere coupling?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Overall timestep of land surface model (i.e. time between calls)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Timestepping Method
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of time stepping method and associated time step(s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Grid
Land surface grid
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the horizontal grid (not including any tiling)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the horizontal grid match the atmosphere?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the vertical grid in the soil (not including any tiling)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.2. Total Depth
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The total depth of the soil (in metres)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Soil
Land surface soil
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of soil in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Heat Water Coupling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the coupling between heat and water in the soil
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 8.3. Number Of Soil layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the soil scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of soil map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil structure map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Texture
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil texture map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Organic Matter
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil organic matter map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Albedo
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil albedo map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.6. Water Table
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil water table map, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 9.7. Continuously Varying Soil Depth
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Do the soil properties vary continuously with depth?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.8. Soil Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil depth map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow free albedo prognostic?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, describe the dependencies on snow free albedo calculations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.3. Direct Diffuse
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe the distinction between direct and diffuse albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 10.4. Number Of Wavelength Bands
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If prognostic, enter the number of wavelength bands used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the soil hydrological model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil hydrology in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil hydrology tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.5. Number Of Ground Water Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers that may contain water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.6. Lateral Connectivity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe the lateral connectivity between tiles
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.7. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
The hydrological dynamics scheme in the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
How many soil layers may contain ground ice
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Ice Storage Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of ice storage
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Permafrost
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of permafrost, if any, within the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how drainage is included in the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
Different types of runoff represented by the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how heat treatment properties are defined
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil heat scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil heat treatment tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.5. Heat Storage
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the method of heat storage
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.6. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe processes included in the treatment of soil heat
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Snow
Land surface snow
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of snow in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Number Of Snow Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of snow levels used in the land surface scheme/model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.4. Density
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow density
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.5. Water Equivalent
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the snow water equivalent
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.6. Heat Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the heat content of snow
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.7. Temperature
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow temperature
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.8. Liquid Water Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow liquid water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.9. Snow Cover Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify cover fractions used in the surface snow scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.10. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Snow related processes in the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.11. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the snow scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of snow-covered land albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
*If prognostic, *
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vegetation in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 17.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of vegetation scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 17.3. Dynamic Vegetation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there dynamic evolution of vegetation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.4. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vegetation tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.5. Vegetation Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Vegetation classification used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.6. Vegetation Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of vegetation types in the classification, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.7. Biome Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of biome types in the classification, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.8. Vegetation Time Variation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How the vegetation fractions in each tile are varying with time
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.9. Vegetation Map
Is Required: FALSE Type: STRING Cardinality: 0.1
If vegetation fractions are not dynamically updated, describe the vegetation map used (common name and reference, if possible)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 17.10. Interception
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is vegetation interception of rainwater represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.11. Phenology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation phenology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.12. Phenology Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation phenology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.13. Leaf Area Index
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation leaf area index
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.14. Leaf Area Index Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of leaf area index
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.15. Biomass
Is Required: TRUE Type: ENUM Cardinality: 1.1
*Treatment of vegetation biomass *
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.16. Biomass Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biomass
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.17. Biogeography
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biogeography
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.18. Biogeography Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biogeography
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.19. Stomatal Resistance
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify what the vegetation stomatal resistance depends on
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.20. Stomatal Resistance Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation stomatal resistance
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.21. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the vegetation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of energy balance in land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the energy balance tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 18.3. Number Of Surface Temperatures
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.4. Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify the formulation method for land surface evaporation, from soil and vegetation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe which processes are included in the energy balance scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of carbon cycle in land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the carbon cycle tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of carbon cycle in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19.4. Anthropogenic Carbon
Is Required: FALSE Type: ENUM Cardinality: 0.N
Describe the treatment of the anthropogenic carbon pool
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.5. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the carbon scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.3. Forest Stand Dynamics
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of forest stand dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen dependence, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for maintenance respiration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Growth Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for growth respiration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the allocation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. Allocation Bins
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify distinct carbon bins used in allocation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.3. Allocation Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how the fractions of allocation are calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the phenology scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the mortality scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is permafrost included?
End of explanation
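# Illustrative aside (hypothetical, not an actual model configuration): once a
# value has been chosen for a BOOLEAN property such as 28.1 above, the completed
# cell would presumably look like this:
# DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# DOC.set_value(True)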
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.2. Emitted Greenhouse Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
List the GHGs emitted
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.4. Impact On Soil Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the impact of permafrost on soil properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the nitrogen cycle in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the nitrogen cycle tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 29.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of nitrogen cycle in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the nitrogen scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30. River Routing
Land surface river routing
30.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of river routing in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the river routing, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river routing scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.4. Grid Inherited From Land Surface
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the grid inherited from land surface?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.5. Grid Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of grid, if not inherited from land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.6. Number Of Reservoirs
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of reservoirs
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.7. Water Re Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
TODO
End of explanation
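# Illustrative aside (hypothetical values): for an ENUM property with
# cardinality 1.N such as 30.7 above, each selected option is presumably
# recorded with its own DOC.set_value() call, e.g.
# DOC.set_value("flood plains")
# DOC.set_value("irrigation")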
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.8. Coupled To Atmosphere
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Is river routing coupled to the atmosphere model component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.9. Coupled To Land
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the coupling between land and rivers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.10. Quantities Exchanged With Atmosphere
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled to the atmosphere, which quantities are exchanged between river routing and the atmosphere model components?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.11. Basin Flow Direction Map
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of basin flow direction map is being used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.12. Flooding
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the representation of flooding, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.13. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the river routing
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify how rivers are discharged to the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.2. Quantities Transported
Is Required: TRUE Type: ENUM Cardinality: 1.N
Quantities that are exchanged from river-routing to the ocean model component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32. Lakes
Land surface lakes
32.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lakes in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.2. Coupling With Rivers
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are lakes coupled to the river routing model component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 32.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of lake scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.4. Quantities Exchanged With Rivers
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled with rivers, which quantities are exchanged between the lakes and rivers?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.5. Vertical Grid
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vertical grid of lakes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the lake scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is lake ice included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33.2. Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of lake albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33.3. Dynamics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which dynamics of lakes are treated? horizontal, vertical, etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33.4. Dynamic Lake Extent
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a dynamic lake extent scheme included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33.5. Endorheic Basins
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Basins not flowing to ocean included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of wetlands, if any
End of explanation
<END_TASK> |
15,945 | <SYSTEM_TASK:>
Given the following text problem statement, write Python code to implement the functionality described below in problem statement
<END_TASK>
<USER_TASK:>
Problem:
I'm using tensorflow 2.10.0. | <ASSISTANT_TASK:>
Python Code:
import tensorflow as tf
import numpy as np
np.random.seed(10)
a = tf.constant(np.random.rand(50, 100, 512))
def g(a):
return tf.expand_dims(a, 2)
result = g(a.__copy__())
<END_TASK> |
15,946 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Collation outputs
Introduction
In practice
Table
Step1: Table
Step2: Before moving to the other outputs, try to produce the simple HTML output by changing the code above. The value required in the output keyword should be html.
Table
Step3: Table
Step4: And, finally, you can test the XML/TEI output that produces XML following the TEI parallel segmentation encoding guidelines.
Step5: Graph | <ASSISTANT_TASK:>
Python Code:
#import the collatex library
from collatex import *
#create an instance of the collateX engine
collation = Collation()
#add witnesses to the collateX instance
collation.add_plain_witness( "A", "The quick brown fox jumped over the lazy dog.")
collation.add_plain_witness( "B", "The brown fox jumped over the dog." )
collation.add_plain_witness( "C", "The bad fox jumped over the lazy dog." )
#collate the witnesses and store the result in a variable called 'table'
#as we have not specified an output, this will be stored in plain text
table = collate(collation)
#print the collation result
print(table)
Explanation: Collation outputs
Introduction
In practice
Table: HTML
Table: JSON
Table: XML and XML/TEI
Graph: SVG
Exercise
What's next
Introduction
In this tutorial we will be trying different outputs for our collation, meaning different graphical representations, formats and visualizations of the result.
The visualization of the collation result is an open discussion: several possibilities have been suggested and used and new ones are always being proposed. When the output of the collation is a printed format, such as a book, it is rare to see anything different from the traditional critical apparatus. Now that output formats are more frequently digital (or at least have a digital component), collation tools tend to offer more than one visualization option. This is the case for both Juxta and CollateX. The different visualizations are not incompatible; on the contrary, they can be complementary, highlighting different aspects of the result and suitable for different users or different stages of the workflow.
In the previous tutorials we used the alignment table and the graph. The alignment table, in use since the 1960's, is the equivalent of the matrix of bioinformatic for sequence alignment (for example, strings of DNA). In contrast, the graph is meant to represent the fluidity of the text and its variation. The idea of a graph-oriented model for expressing textual variance has been originally developed by Desmond Schmidt (2008). You can refer to this video, for a presentation on Apparatus vs. Graph – an Interface as Scholarly Argument by Tara Andrews and Joris van Zundert.
Other outputs, such as the histogram and the side-by-side visualization offered by Juxta, allow users to visualize the result of the comparison between two witnesses only. This reflects the way the algorithm is built and shows that the graphical representation is connected with the approach to collation that informs the software.
CollateX has two main ways to conceive of the collation result: as a table (with many different formatting options) and as a graph:
- table formats
- plain text table (no need to specify the output)
- HTML table (output='html')
- HTML vertical table with colors (output='html2')
- JSON (output='json')
- XML (output='xml')
- XML/TEI (output='tei')
- graph format
- SVG (output='svg')
In practice
Even though we have already encountered some of these outputs, it is worth going through them one more time focussing on part of the code that needs to change to produce the different formats.
Table: plain text
In this tutorial we will use some simple texts already used in the previous tutorial: the fox and dog example.
Let's start with the most simple output, for which we don't need to specify any output format (note that you can name the variable containing the output anything you like, but in this tutorial we call it alignment_table, table or graph)
In the code cell below the lines starting with a hash (#) are comments and are not executed. They are there in this instance to help you remember what the different parts of the code do. You do not need to use them in your notebook (although sometimes it is helpful to add comments to your code so you remember what things do).
End of explanation
table = collate(collation, output='html2')
Explanation: Table: HTML
Now let's try a different output. This time we still want a table format but instead of it being in plain text we would like it exported in HTML (the markup language used for web pages), and we would like it to be displayed vertically with nice colors to highlight the comparison. To achieve this all you need to do is add the keyword output to the collate command and give it that value html2.
End of explanation
table = collate(collation, output='json')
print(table)
Explanation: Before moving to the other outputs, try to produce the simple HTML output by changing the code above. The value required in the output keyword should be html.
Table: JSON
The same alignment table can be exported in a variety of formats, as we have seen, including JSON (Javascript Object Notation), a format widely used for storing and interchanging data nowadays. In order to produce JSON as output, we need to specify json as the output format.
End of explanation
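# A quick aside covering the exercise mentioned above: the simple HTML table
# (rather than the colour-coded 'html2' view) is produced by passing
# output='html' to collate().
table = collate(collation, output='html')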
table = collate(collation, output='xml')
print(table)
Explanation: Table: XML and XML/TEI
We can use the same procedure in order to export the table in XML or XML/TEI (the latter produces a condensed version of the table only listing witnesses at points of divergence - also called a negative apparatus). To do this you just specify a different output format. Let's start with the XML output (that you can later post-process using XSLT or other tools).
End of explanation
table = collate(collation, output='tei')
print(table)
Explanation: And, finally, you can test the XML/TEI output that produces XML following the TEI parallel segmentation encoding guidelines.
End of explanation
graph = collate(collation, output='svg')
Explanation: Graph: SVG
And now for something different: try with the graph, exported in the SVG format
End of explanation
<END_TASK> |
15,947 | <SYSTEM_TASK:>
Given the following text problem statement, write Python code to implement the functionality described below in problem statement
<END_TASK>
<USER_TASK:>
Problem:
How do I find all rows in a pandas DataFrame which have the max value for count column, after grouping by ['Sp','Value'] columns? | <ASSISTANT_TASK:>
Python Code:
import pandas as pd
df = pd.DataFrame({'Sp':['MM1','MM1','MM1','MM2','MM2','MM2','MM4','MM4','MM4'],
'Value':['S1','S1','S3','S3','S4','S4','S2','S2','S2'],
'Mt':['a','n','cb','mk','bg','dgd','rd','cb','uyi'],
'count':[3,2,5,8,10,1,2,2,7]})
def g(df):
    return df[df.groupby(['Sp', 'Value'])['count'].transform('max') == df['count']]
result = g(df.copy())
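# For the sample frame above, the rows kept are the per-(Sp, Value) group maxima
# (ties at the max would all be kept; here each group has a single maximum):
#     Sp Value   Mt  count
# 0  MM1    S1    a      3
# 2  MM1    S3   cb      5
# 3  MM2    S3   mk      8
# 4  MM2    S4   bg     10
# 8  MM4    S2  uyi      7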
<END_TASK> |
15,948 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Study of Correlation Between Building Demolition and Associated Features
Capstone Project for Data Science at Scale on Coursera
Repo is located here
Chen Yang yangcnju@gmail.com
Step1: Objective
Build a model to make predictions on blighted buildings based on real data from data.detroitmi.gov as given by coursera.
Building demolition is very important for the city to turn around and revive its economy. However, it's no easy task. Accurate predictions can provide guidance on potential blighted buildings and help avoid complications at early stages.
Building List
The buildings were defined as described below
Step2: Features
Three kinds (311-calls, blight-violations, and crimes) of incident counts and coordinates (normalized) was used in the end. I also tried to generate more features by differentiating each kind of crimes or each kind of violations in this notebook. However, these differentiated features lead to smaller AUC scores.
Data
The buildings were down-sampled to contain same number of blighted buildings and non-blighted ones.
The ratio between train and test was set at a ratio of 80
Step3: This model resulted in an AUC score of 0.858 on test data. Feature importances are shown below
Step4: Locations were most important features in this model. Although I tried using more features generated by differentiating different kind of crimes or violations, the AUC scores did not improve.
Feature importance can also be viewed using tree representation
Step5: To reduce variance of the model, since overfitting was observed during training. I also tried to reduce variance by including in more nonblighted buildings by sampling again multiple times with replacement (bagging).
A final AUC score of 0.8625 was achieved. The resulted ROC Curve on test data is shown below | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from IPython.display import Image
%matplotlib inline
Explanation: Study of Correlation Between Building Demolition and Associated Features
Capstone Project for Data Science at Scale on Coursera
Repo is located here
Chen Yang yangcnju@gmail.com
End of explanation
# The resulting buildings:
Image("./data/buildings_distribution.png")
Explanation: Objective
Build a model to make predictions on blighted buildings based on real data from data.detroitmi.gov as given by coursera.
Building demolition is very important for the city to turn around and revive its economy. However, it's no easy task. Accurate predictions can provide guidance on potential blighted buildings and help avoid complications at early stages.
Building List
The buildings were defined as described below:
Building sizes were estimated using parcel info downloaded here at data.detroitmi.gov. Details can be found in this notebook.
An event table was constructed from the 4 files (detroit-311.csv, detroit-blight-violations.csv, detroit-crime.csv, and detroit-demolition-permits.tsv) using their coordinates, as shown here.
Buildings were defined using these coordinates with an estimated building size (the median of all parcels). Each building was represented as a same-sized rectangle.
End of explanation
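# A rough, hypothetical sketch of the building/event matching idea described
# above (the real implementation lives in the linked repo; the names and the
# cell size here are illustrative assumptions, not the author's actual code):
# cell = 0.0005  # roughly a median-building-sized step in degrees (assumption)
# events['bldg'] = ((events.lat // cell).astype(int).astype(str) + '_' +
#                   (events.lon // cell).astype(int).astype(str))
# counts = events.groupby(['bldg', 'event_type']).size().unstack(fill_value=0)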
Image('./data/train_process.png')
Explanation: Features
Three kinds (311-calls, blight-violations, and crimes) of incident counts, plus normalized coordinates, were used in the end. I also tried to generate more features by differentiating each kind of crime or each kind of violation in this notebook. However, these differentiated features led to smaller AUC scores.
Data
The buildings were down-sampled to contain the same number of blighted and non-blighted buildings.
The ratio between train and test was set at a ratio of 80:20.
During training using xgboost, the train data was further separated into train and evaluation with a ratio of 80:20 for monitoring.
Model
A Gradient Boosted Tree model using XGBoost achieved an AUC score of 0.85 on the evaluation data set:
End of explanation
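# A minimal, illustrative sketch of the training setup described above
# (hypothetical variable names; X = feature matrix, y = blight labels; this is
# not the author's original code):
# import xgboost as xgb
# from sklearn.model_selection import train_test_split
# X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
# X_fit, X_eval, y_fit, y_eval = train_test_split(X_train, y_train, test_size=0.2, random_state=0)
# params = {'objective': 'binary:logistic', 'eval_metric': 'auc', 'max_depth': 4, 'eta': 0.1}
# bst = xgb.train(params, xgb.DMatrix(X_fit, label=y_fit), num_boost_round=500,
#                 evals=[(xgb.DMatrix(X_eval, label=y_eval), 'eval')],
#                 early_stopping_rounds=20)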
Image('./data/feature_f_scores.png')
Explanation: This model resulted in an AUC score of 0.858 on test data. Feature importances are shown below:
End of explanation
Image('./data/bst_tree.png')
Explanation: Locations were the most important features in this model. Although I tried using more features generated by differentiating different kinds of crimes or violations, the AUC scores did not improve.
Feature importance can also be viewed using a tree representation:
End of explanation
Image('./data/ROC_Curve_combined.png')
Explanation: To reduce the variance of the model (overfitting was observed during training), I also tried including more non-blighted buildings by sampling again multiple times with replacement (bagging).
A final AUC score of 0.8625 was achieved. The resulting ROC curve on test data is shown below:
End of explanation
<END_TASK> |
15,949 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Histograma e Estatística de uma imagem
Histograma de uma imagem
A imagem normalmente é composta de um grande número de pixels; hoje em dia,
celulares tiram fotografias com a resolução espacial que produz alguns milhões de pixels.
Uma das caracterizações ou "assinaturas" mais eficientes da imagem é seu histograma.
Estude o pequeno tutorial disponível em
Step1: Uso do matplotlib
Veja aqui neste breve tutorial como utilizar o matplotlib
Step2: Procure utilizar a função iah2stats operando no histograma para calcular estes mesmos valores.
utilize ia.iah2stats para calcular média e mediana da imagem da retina.tif
Step3: Histograma de imagens aleatórias
Calcule e plote os histogramas das imagens f1, f2 e f3 a seguir, dando uma justificativa para cada caso em relação ao formato do histograma.
Aproveite para estudar as funções de geração de dados aleatórios "randint", "shuffle" e "random.normal" | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import sys,os
ia898path = os.path.abspath('../../')
if ia898path not in sys.path:
sys.path.append(ia898path)
import ia898.src as ia
!ls -l ../../ia898/data
f = mpimg.imread('../data/retina.tif')
plt.imshow(f,cmap='gray');
h = ia.histogram(f)
plt.plot(h);
Explanation: Histogram and Statistics of an Image
Histogram of an image
An image is normally composed of a large number of pixels; nowadays,
mobile phones take photographs whose spatial resolution yields several million pixels.
One of the most efficient characterizations or "signatures" of an image is its histogram.
Study the short tutorial available at:
Tutorial - Histograma e Estatísticas
and after
that, plot the histogram of the image below with adshow(iaplot()):
End of explanation
%%time
print('f.mean()=', f.mean())
print('np.median(f)=', np.median(f))
%%time
h = ia.histogram(f)
v = ia.h2stats(h)
print('mean:', v[0])
print('median:', v[8])
Explanation: Using matplotlib
See this short tutorial on how to use matplotlib:
tutorial_matplotlib
Statistics of an image
Study the function ia636:iah2stats, which computes several statistics of an image from
its histogram. Note that these statistics can be obtained either directly from the image or from the histogram.
The big advantage of computing statistics from the histogram is that the histogram is typically a vector of
256 elements, whereas the image may contain millions of pixels. So, when several statistical
values of an image are needed, its histogram is computed first and the statistics are then calculated
from the histogram.
As an illustration, we will measure the time to compute the mean (np.mean) and the median (np.median) directly on the image, and compare it with the time to compute these statistics from the histogram:
End of explanation
plt.imshow(f,cmap='gray')
h = ia.histogram(f)
v = ia.h2stats(h)
print('mean =',v[0])
print('variance =',v[1])
print('skewness =',v[2])
print('kurtosis = ',v[3])
print('entropy = ',v[4])
print('mode = ',v[5])
print('percentil 1% = ',v[6])
print('percentil 10% = ',v[7])
print('percentil 50% = ',v[8])
print('percentil 90% = ',v[9])
print('percentil 99% = ',v[10])
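# Illustrative check (not in the original): since h[i] counts how many pixels
# have gray level i, the mean can also be computed directly from the histogram.
print('mean from histogram:', (np.arange(h.size) * h).sum() / h.sum())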
Explanation: Try using the iah2stats function on the histogram to compute these same values.
Use ia.iah2stats to compute the mean and median of the retina.tif image.
End of explanation
f1 = np.random.randint(0, 256, (128, 256)).astype(np.uint8)
plt.imshow(f1, cmap='gray'); plt.title('f1'); plt.show()
h1 = ia.histogram(f1)
plt.bar(np.arange(h1.size), h1); plt.title('histogram of f1'); plt.show()
f_2 = np.resize(np.arange(256, dtype=np.uint8), (128, 256))  # ramp image
f2 = np.ravel(f_2)
np.random.shuffle(f2)
plt.imshow(f2.reshape(f_2.shape), cmap='gray'); plt.title('f2'); plt.show()
h2 = ia.histogram(f2)
plt.bar(np.arange(h2.size), h2); plt.title('histogram of f2'); plt.show()
f_3 = np.random.randn(128, 256)
f3 = ((f_3 - f_3.min()) / f_3.ptp() * 255).astype(np.uint8)  # normalize to 0-255
plt.imshow(f3, cmap='gray'); plt.title('f3'); plt.show()
h3 = ia.histogram(f3)
plt.bar(np.arange(h3.size), h3); plt.title('histogram of f3'); plt.show()
Explanation: Histograma de imagens aleatórias
Calcule e plote os histogramas das imagens f1, f2 e f3 a seguir, dando uma justificativa para cada caso em relação ao formato do histograma.
Aproveite para estudar as funções de geração de dados aleatórios "randint", "shuffle" e "random.normal":
End of explanation
<END_TASK> |
15,950 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
The Basic Tools of the Deep Life Sciences
Welcome to DeepChem's introductory tutorial for the deep life sciences. This series of notebooks is a step-by-step guide for you to get to know the new tools and techniques needed to do deep learning for the life sciences. We'll start from the basics, assuming that you're new to machine learning and the life sciences, and build up a repertoire of tools and techniques that you can use to do meaningful work in the life sciences.
Scope
Step1: You can of course run this tutorial locally if you prefer. In this case, don't run the above cell since it will download and install Anaconda on your local machine. In either case, we can now import the deepchem package to play with.
Step2: Training a Model with DeepChem
Step3: I won't say too much about this code right now. We will see many similar examples in later tutorials. There are two details I do want to draw your attention to. First, notice the featurizer argument passed to the load_delaney() function. Molecules can be represented in many ways. We therefore tell it which representation we want to use, or in more technical language, how to "featurize" the data. Second, notice that we actually get three different data sets
Step4: Here again I will not say much about the code. Later tutorials will give lots more information about GraphConvModel, as well as other types of models provided by DeepChem.
We now need to train the model on the data set. We simply give it the data set and tell it how many epochs of training to perform (that is, how many complete passes through the data to make).
Step5: If everything has gone well, we should now have a fully trained model! But do we? To find out, we must evaluate the model on the test set. We do that by selecting an evaluation metric and calling evaluate() on the model. For this example, let's use the Pearson correlation, also known as r<sup>2</sup>, as our metric. We can evaluate it on both the training set and test set.
Step6: Notice that it has a higher score on the training set than the test set. Models usually perform better on the particular data they were trained on than they do on similar but independent data. This is called "overfitting", and it is the reason it is essential to evaluate your model on an independent test set.
Our model still has quite respectable performance on the test set. For comparison, a model that produced totally random outputs would have a correlation of 0, while one that made perfect predictions would have a correlation of 1. Our model does quite well, so now we can use it to make predictions about other molecules we care about.
Since this is just a tutorial and we don't have any other molecules we specifically want to predict, let's just use the first ten molecules from the test set. For each one we print out the chemical structure (represented as a SMILES string) and the predicted log(solubility). To put these predictions in
context, we print out the log(solubility) values from the test set as well. | <ASSISTANT_TASK:>
Python Code:
!pip install --pre deepchem
Explanation: The Basic Tools of the Deep Life Sciences
Welcome to DeepChem's introductory tutorial for the deep life sciences. This series of notebooks is a step-by-step guide for you to get to know the new tools and techniques needed to do deep learning for the life sciences. We'll start from the basics, assuming that you're new to machine learning and the life sciences, and build up a repertoire of tools and techniques that you can use to do meaningful work in the life sciences.
Scope: This tutorial will encompass both the machine learning and data handling needed to build systems for the deep life sciences.
Colab
This tutorial and the rest in the sequences are designed to be done in Google colab. If you'd like to open this notebook in colab, you can use the following link.
Why do the DeepChem Tutorial?
1) Career Advancement: Applying AI in the life sciences is a booming
industry at present. There are a host of newly funded startups and initiatives
at large pharmaceutical and biotech companies centered around AI. Learning and
mastering DeepChem will bring you to the forefront of this field and will
prepare you to enter a career in this field.
2) Humanitarian Considerations: Disease is the oldest cause of human
suffering. From the dawn of human civilization, humans have suffered from pathogens,
cancers, and neurological conditions. One of the greatest achievements of
the last few centuries has been the development of effective treatments for
many diseases. By mastering the skills in this tutorial, you will be able to
stand on the shoulders of the giants of the past to help develop new
medicine.
3) Lowering the Cost of Medicine: The art of developing new medicine is
currently an elite skill that can only be practiced by a small core of expert
practitioners. By enabling the growth of open source tools for drug discovery,
you can help democratize these skills and open up drug discovery to more
competition. Increased competition can help drive down the cost of medicine.
Getting Extra Credit
If you're excited about DeepChem and want to get more involved, there are some things that you can do right now:
Star DeepChem on GitHub! - https://github.com/deepchem/deepchem
Join the DeepChem forums and introduce yourself! - https://forum.deepchem.io
Say hi on the DeepChem gitter - https://gitter.im/deepchem/Lobby
Make a YouTube video teaching the contents of this notebook.
Prerequisites
This tutorial sequence will assume some basic familiarity with the Python data science ecosystem. We will assume that you have familiarity with libraries such as Numpy, Pandas, and TensorFlow. We'll provide some brief refreshers on basics through the tutorial so don't worry if you're not an expert.
Setup
The first step is to get DeepChem up and running. We recommend using Google Colab to work through this tutorial series. You'll also need to run the following commands to get DeepChem installed on your colab notebook.
End of explanation
import deepchem as dc
dc.__version__
Explanation: You can of course run this tutorial locally if you prefer. In this case, don't run the above cell since it will download and install Anaconda on your local machine. In either case, we can now import the deepchem package to play with.
End of explanation
tasks, datasets, transformers = dc.molnet.load_delaney(featurizer='GraphConv')
train_dataset, valid_dataset, test_dataset = datasets
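# An illustrative aside (not in the original tutorial): the three splits can be
# inspected directly, e.g. via the shapes of their label arrays.
print(train_dataset.y.shape, valid_dataset.y.shape, test_dataset.y.shape)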
Explanation: Training a Model with DeepChem: A First Example
Deep learning can be used to solve many sorts of problems, but the basic workflow is usually the same. Here are the typical steps you follow.
Select the data set you will train your model on (or create a new data set if there isn't an existing suitable one).
Create the model.
Train the model on the data.
Evaluate the model on an independent test set to see how well it works.
Use the model to make predictions about new data.
With DeepChem, each of these steps can be as little as one or two lines of Python code. In this tutorial we will walk through a basic example showing the complete workflow to solve a real world scientific problem.
The problem we will solve is predicting the solubility of small molecules given their chemical formulas. This is a very important property in drug development: if a proposed drug isn't soluble enough, you probably won't be able to get enough into the patient's bloodstream to have a therapeutic effect. The first thing we need is a data set of measured solubilities for real molecules. One of the core components of DeepChem is MoleculeNet, a diverse collection of chemical and molecular data sets. For this tutorial, we can use the Delaney solubility data set. The property of solubility in this data set is reported in log(solubility) where solubility is measured in moles/liter.
End of explanation
model = dc.models.GraphConvModel(n_tasks=1, mode='regression', dropout=0.2)
Explanation: I won't say too much about this code right now. We will see many similar examples in later tutorials. There are two details I do want to draw your attention to. First, notice the featurizer argument passed to the load_delaney() function. Molecules can be represented in many ways. We therefore tell it which representation we want to use, or in more technical language, how to "featurize" the data. Second, notice that we actually get three different data sets: a training set, a validation set, and a test set. Each of these serves a different function in the standard deep learning workflow.
Now that we have our data, the next step is to create a model. We will use a particular kind of model called a "graph convolutional network", or "graphconv" for short.
End of explanation
model.fit(train_dataset, nb_epoch=100)
Explanation: Here again I will not say much about the code. Later tutorials will give lots more information about GraphConvModel, as well as other types of models provided by DeepChem.
We now need to train the model on the data set. We simply give it the data set and tell it how many epochs of training to perform (that is, how many complete passes through the data to make).
End of explanation
metric = dc.metrics.Metric(dc.metrics.pearson_r2_score)
print("Training set score:", model.evaluate(train_dataset, [metric], transformers))
print("Test set score:", model.evaluate(test_dataset, [metric], transformers))
Explanation: If everything has gone well, we should now have a fully trained model! But do we? To find out, we must evaluate the model on the test set. We do that by selecting an evaluation metric and calling evaluate() on the model. For this example, let's use the Pearson correlation, also known as r<sup>2</sup>, as our metric. We can evaluate it on both the training set and test set.
End of explanation
solubilities = model.predict_on_batch(test_dataset.X[:10])
for molecule, solubility, test_solubility in zip(test_dataset.ids, solubilities, test_dataset.y):
print(solubility, test_solubility, molecule)
Explanation: Notice that it has a higher score on the training set than the test set. Models usually perform better on the particular data they were trained on than they do on similar but independent data. This is called "overfitting", and it is the reason it is essential to evaluate your model on an independent test set.
Our model still has quite respectable performance on the test set. For comparison, a model that produced totally random outputs would have a correlation of 0, while one that made perfect predictions would have a correlation of 1. Our model does quite well, so now we can use it to make predictions about other molecules we care about.
Since this is just a tutorial and we don't have any other molecules we specifically want to predict, let's just use the first ten molecules from the test set. For each one we print out the chemical structure (represented as a SMILES string) and the predicted log(solubility). To put these predictions in
context, we print out the log(solubility) values from the test set as well.
End of explanation
<END_TASK> |
15,951 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
DV360 Automation
Step1: 0.2 Setup your GCP project
To utilise the DV360 API, you need a Google Cloud project. For the purpose of this workshop, we've done this for you, but normally you'd have to complete the following steps, before you can make requests using the DV360 API
Step2: 0.4 Set DV360 account settings
Next, we need to set our DV360 parameters, and generate a sandbox (test) campaign.
Note, if you'd prefer to use an existing campaign, update CAMPAIGN_ID below.
Step4: Create a new 'sandbox' campaign to use with the rest of the exercises
Executing the following code block will overwrite any CAMPAIGN_ID used above.
Step5: 1A) SDF using DBM API (sunset)
Important
Step6: Define a boilerplate targeting template that all Line Items should adhere too
Step7: Modify latest SDF LineItems file and update the columns according to the targeting template
Step8: Upload the output .csv file in the DV360 UI
Once the changes have been applied successfully, check the 'Targeting' controls within 'Line Item details'
1.3 SDF + Entity Read Files
What are Entity Read Files (ERFs)? ERFs are flat files (.JSON) in Google Cloud Storage that contain lookup values for DV360 entities like geographies, creatives, etc. Each DV360 entity (Advertiser, Campaign, LineItem, etc) has a corresponding .JSON file in Cloud Storage retained free-of-charge for 60 days from their processing date.
ERFs consist of 1 file per entity type, written x1 per day to two seperate Cloud buckets
Step9: Retrieve a list of country codes / IDs from GeoLocation.json for each of our store locations
Step10: Download the latest SDF LineItems (because we've made changes since our last download)
Step11: Modify the contents of the latest SDF output, then save a new CSV with updated Geo Targeting IDs
Step12: Upload the output .csv file in the DV360 UI
Once the changes have been applied successfully, check the 'Targeting' controls within 'Line Item details'
1.4 SDF + Cloud Vision API
Next, let's look at how we you can utilise external APIs.
Download the 'product_feed' tab from Google Store as CSV (File >> Download >> CSV)
Execute the following code block and upload 'product_feed.csv'
This will create a new Python dictionary (key
Step14: Define a function to send images to the Cloud Vision API
Step15: Run our images through the function, and return a lookup table
Step16: Now we have our new labels from the Vision API, we need to write these into the keywords targeting field
Step18: Upload the output .csv file in the DV360 UI
Once the changes have been applied successfully, check the 'Targeting' controls within 'Line Item details'
1.5 Optimisation using Reports
Next, we'll look at how you could combine reporting data, with operations such as optimising bid multipliers or deactivating activity. Note
Step19: Download an updated SDF LineItems file, and if the LineItem ID is in the poor performers list, add a Geo bid multiplier to half the bids (0.5)
Step20: Note the only rows included in the output, are those that we want to modify.
Upload the output .csv file in the DV360 UI
Once the changes have been applied successfully, check the 'Targeting' controls within 'Line Item details'
1.6 Challenge
Challenge
Step21: Solution
Step23: Upload the output .csv file in the DV360 UI
1B) SDF using DV360 API
Reference
Step24: Define a boilerplate targeting template that all Line Items should adhere too
Step25: Modify latest SDF LineItems file and update the columns according to the targeting template
Step26: Upload the output .csv file in the DV360 UI
Once the changes have been applied successfully, check the 'Targeting' controls within 'Line Item details'
1.3 SDF + Entity Read Files
What are Entity Read Files (ERFs)? ERFs are flat files (.JSON) in Google Cloud Storage that contain lookup values for DV360 entities like geographies, creatives, etc. Each DV360 entity (Advertiser, Campaign, LineItem, etc) has a corresponding .JSON file in Cloud Storage retained free-of-charge for 60 days from their processing date.
ERFs consist of 1 file per entity type, written once per day to two separate Cloud buckets
Step27: Retrieve a list of country codes / IDs from GeoLocation.json for each of our store locations
Step28: Download the latest SDF LineItems (because we've made changes since our last download)
Step29: Modify the contents of the latest SDF output, then save a new CSV with updated Geo Targeting IDs
Step30: Upload the output .csv file in the DV360 UI
Once the changes have been applied successfully, check the 'Targeting' controls within 'Line Item details'
1.4 SDF + Cloud Vision API
Next, let's look at how we you can utilise external APIs.
Download the 'product_feed' tab from Google Store as CSV (File >> Download >> CSV)
Execute the following code block and upload 'product_feed.csv'
This will create a new Python dictionary (key
Step32: Define a function to send images to the Cloud Vision API
Step33: Run our images through the function, and return a lookup table (reference)
Step34: View the results of our Vision analysis
Step35: Download the latest SDF LineItems (because we've made changes since our last download)
Step36: Now we have our new labels from the Vision API, we need to write these into the keywords targeting field
Step37: Upload the output .csv file in the DV360 UI
Once the changes have been applied successfully, check the 'Targeting' controls within 'Line Item details'
1.5 Optimisation using Reports
Next, we'll look at how you could combine reporting data, with operations such as optimising bid multipliers or deactivating activity. Note
Step38: Download an updated SDF LineItems file, and if the LineItem ID is in the poor performers list, add a Geo bid multiplier to half the bids (0.5)
Step39: Upload the output .csv file in the DV360 UI
Once the changes have been applied successfully, check the 'Targeting' controls within 'Line Item details'
1.6 Challenge
Challenge
Step40: Solution
Step42: Upload the output .csv file in the DV360 UI
2) Display & Video 360 API
What is the Display & Video 360 API? The Display & Video 360 API (formly known as the DV360 Write API) is the programmatic interface for the Display & Video 360 platform. It allows developers to easily and efficiently automate complex Display & Video 360 workflows, such as creating insertion orders and setting targeting options for individual line items.
We'll use it now to build upon the campaign we created earlier using SDF.
Reference
Step43: Upload the extended feed for Google Store's new territories
Step45: Create Insertion Order template
Here we're defining a new function called 'create_insertion_order'. Note
Step47: Create LineItem template
Here we define a new function called 'create_lineitem', based on a template we specified.
Note
Step48: Build our new campaign
First, we'll loop through the list of countries generated at the beginning, and for each country, create a new Insertion Order by calling our function 'create_insertion_order'. Within that loop, we find every product that is sold in the corresponding country-code, and create a new Line Item for every matching product using our function 'create_lineitem'.
Sit tight, this one can take a while (~10 mins)...
Link to DV360 UI
Step49: If successful, the result should look similar to the below in DV360
Step50: Apply individual targeting criteria to single entity
Step51: Applying individual targeting criteria to multiple entities
Step53: 2.3 Bulk targeting
Bulk updates using templated targeting controls
Step54: Retrieve list of active LineItems, and Apply bulk targeting
Step56: 2.4 Optimisation (external trigger)
The following optimisations will be completed on your campaign, created earlier.
Create functions to 'deactivate' or 'optimise' Lineitems
Step57: Creat list of out of stock products
Step58: Process optimisation
Step59: 2.5 Optimisation (reporting data)
As your new campaign has no performance data, the following optimisations will be completed on an existing campaign with historical data.
Create new performance report and fetch results
Step60: Load report to Pandas DataFrame
Step61: Create two lists of poorly performing LineItems
1. LineItems that should be paused
2. Lineitems to reduce bids
Step62: Process optimisation
Step64: 2.6 Creative upload
Uploading Display creatives from remote storage (http)
The following demonstrates how to upload image assets from remote storage, but it's also possible to upload from local storage.
Reference
Step65: Upload image creatives
Note, all of the following assets are the same dimension (300x250) and type 'CREATIVE_TYPE_STANDARD'.
When uploading assets of multiple sizes, the creatives.create body must reflect this.
Step66: 2.7 Challenge
Challenge
Step67: Solution
Step73: Link to DV360 UI
Resources
Getting started with SDF in DV360 guide
Structured Data Files (SDF) developer guide
Getting started with the Display & Video 360 API developer guide
Getting started with the DoubleClick Bid Manager API developer guide
How to access Entity Read Files
Quickstart | <ASSISTANT_TASK:>
Python Code:
!pip install google-api-python-client
!pip install google-cloud-vision
import csv
import datetime
import io
import json
import pprint
from google.api_core import retry
from google.cloud import vision
from google.colab import files
from google_auth_oauthlib.flow import InstalledAppFlow
from googleapiclient import discovery
from googleapiclient import http
import pandas as pd
import requests
print('Successfully imported Python libraries!')
Explanation: DV360 Automation: codelab
Author: Matt Lynam
Objective
Enable Display & Video 360 (DV360) advertisers to increase workflow efficiency by utilising the right automation solution according to their needs, resources and technical capability.
Goals
* Provide an overview of the current automation suite available in DV360
* Demonstrate the capabilities and limitations of DV360's UI and APIs
* Explore common advertiser use cases and pitfalls
* Acquire hands-on experience by applying key concepts using a fictional case study
0) Setup and authentication
Google Colab primer
Google Colaboratory, or "Colab" for short, allows you to write and execute Python in your browser, with:
- Zero configuration required
- Free access to GPUs
- Easy sharing & colaboration
A notebook is a list of cells, containing either explanatory text or executable code and its output. This is a text cell.
Useful Colab tips
* Double-click within the cell to edit
* Code cells can be executed by clicking the Play icon in the left gutter of the cell; or with Cmd/Ctrl + Enter to run the cell in place;
* Use Cmd/Ctrl + / to comment out a line of code
0.1 Install Python client libraries
Run the following block to install the latest Google Python Client Library and import additional libraries used for this workshop.
End of explanation
API_SCOPES = ['https://www.googleapis.com/auth/doubleclickbidmanager',
'https://www.googleapis.com/auth/display-video',
'https://www.googleapis.com/auth/devstorage.read_only',
'https://www.googleapis.com/auth/cloud-vision']
# Authenticate using user credentials stored in client_secrets.json
client_secrets_file = files.upload()
client_secrets_json = json.loads(next(iter(client_secrets_file.values())))
flow = InstalledAppFlow.from_client_config(client_secrets_json, API_SCOPES)
credentials = flow.run_console()
print('Success!')
# Build DBM Read API service object
dbm_service = discovery.build(
'doubleclickbidmanager', 'v1.1', credentials=credentials)
print('DBM API service object created')
# Build Google Cloud Storage Read API service object
gcs_service = discovery.build('storage', 'v1', credentials=credentials)
print('GCS service object created')
# Create Display Video API service object
display_video_service = discovery.build(
'displayvideo', 'v1', credentials=credentials)
print('Display Video API service object created')
Explanation: 0.2 Setup your GCP project
To utilise the DV360 API, you need a Google Cloud project. For the purpose of this workshop, we've done this for you, but normally you'd have to complete the following steps, before you can make requests using the DV360 API:
Select or create a Google Cloud Platform project.
Enable billing on your project.
Enable the 'Display & Video 360' and 'DoubleClick Bid Manager' API from the API library
Create GCP credentials
We've also generated credentials for you, but if you needed to generate new credentials, this would be the process:
Go to the API credentials page in the Cloud Platform Console.
Fill out the required fields on the OAuth consent screen.
On the credentials page, click Create credentials >> OAuth client ID.
Select Other as the application type, and then click Create.
Download the credentials by clicking the Download JSON button
Reference: https://developers.google.com/display-video/api/guides/how-tos/authorizing
0.3 Authentication
Next, we'll permission the application to submit authorised API requests on our behalf using OAuth authentication.
The following scopes are specified in an array:
* DBM API
* Display Video API
* GCP Storage Read
* Cloud Vision API
Reference:
* Example OAuth2 Python Library
* Google scopes
End of explanation
PARTNER_ID = '234340' #@param {type:"string"}
ADVERTISER_ID = '2436036' #@param {type:"string"}
CAMPAIGN_ID = '4258803' #@param {type:"string"}
# For use with legacy DBM API
SDF_VERSION = '5.3' #@param {type:"string"}
# For use with DV360 API
SDF_VERSION_DV360 = 'SDF_VERSION_5_3' #@param {type:"string"}
print('DV360 settings saved!')
Explanation: 0.4 Set DV360 account settings
Next, we need to set our DV360 parameters, and generate a sandbox (test) campaign.
Note, if you'd prefer to use an existing campaign, update CAMPAIGN_ID below.
End of explanation
YOUR_NAME = 'Matt' #@param {type:"string"}
# Set dates for new campaign
month = datetime.datetime.today().strftime('%m')
day = datetime.datetime.today().strftime('%d')
year = datetime.datetime.today().strftime('%Y')
month_plus30 = (datetime.datetime.today() +
datetime.timedelta(days=30)).strftime('%m')
day_plus30 = (datetime.datetime.today() +
datetime.timedelta(days=30)).strftime('%d')
year_plus30 = (datetime.datetime.today() +
datetime.timedelta(days=30)).strftime('%Y')
def create_campaign(YOUR_NAME):
Creates a new DV360 Campaign object.
campaign_name = f'{year}-{month}-{day} | {YOUR_NAME}'
campaign_obj = {
'displayName': campaign_name,
'entityStatus': 'ENTITY_STATUS_ACTIVE',
'campaignGoal': {
'campaignGoalType': 'CAMPAIGN_GOAL_TYPE_ONLINE_ACTION',
'performanceGoal': {
'performanceGoalType': 'PERFORMANCE_GOAL_TYPE_CPC',
'performanceGoalAmountMicros': 1000000
}
},
'campaignFlight': {
'plannedSpendAmountMicros': 1000000,
'plannedDates': {
'startDate': {
'year': year,
'month': month,
'day': day
},
'endDate': {
'year': year_plus30,
'month': month_plus30,
'day': day_plus30
}
}
},
'frequencyCap': {
'maxImpressions': 10,
'timeUnit': 'TIME_UNIT_DAYS',
'timeUnitCount': 1
}
}
# Create the campaign.
campaign = display_video_service.advertisers().campaigns().create(
advertiserId=ADVERTISER_ID,
body=campaign_obj
).execute()
return campaign
new_campaign = create_campaign(YOUR_NAME)
# Display the new campaign.
CAMPAIGN_ID = new_campaign['campaignId']
print(f"\nCampaign '{new_campaign['name']}' was created."
f"\nCampaign id: '{new_campaign['campaignId']}'"
f"\nCampaign name: '{new_campaign['displayName']}'"
f"\nCampaign status: '{new_campaign['entityStatus']}'")
Explanation: Create a new 'sandbox' campaign to use with the rest of the exercises
Executing the following code block will overwrite any CAMPAIGN_ID used above.
End of explanation
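If you'd like to double-check the build before moving on, the snippet below is a minimal, optional sketch that fetches the campaign back with campaigns().get() and prints its status (it assumes CAMPAIGN_ID was set by the cell above).
# Optional sanity check: fetch the campaign we just created
campaign_check = display_video_service.advertisers().campaigns().get(
    advertiserId=ADVERTISER_ID,
    campaignId=CAMPAIGN_ID).execute()
print(f"{campaign_check['displayName']} is {campaign_check['entityStatus']}")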
# Configure the sdf.download request
request_body = {
'fileTypes': ['LINE_ITEM'],
'filterType': 'CAMPAIGN_ID',
'filterIds': [CAMPAIGN_ID],
'version': SDF_VERSION
}
# Make the request to download all SDF LineItems for your new campaign
request = dbm_service.sdf().download(body=request_body)
response = request.execute()
# Load SDF response to Pandas DataFrame
sdf_df = pd.read_csv(io.StringIO(response['lineItems']))
# Show sample (5 rows) of DataFrame
sdf_df.head()
Explanation: 1A) SDF using DBM API (sunset)
Important: the SDF resource (sdf.download) of the DBM API has migrated to a new endpoint (displayvideo.googleapis.com). SDF methods on the old endpoint (doubleclickbidmanager.googleapis.com) have been sunset and will not be updated moving forward.
Please follow track 1B for code samples using the DV360 API.
Reference: https://developers.google.com/bid-manager/v1.1/sdf
Structured Data Files (SDF) are a way of using spreadsheets to make bulk changes to DV360 entities, including Campaigns, Insertion Orders, Line Items, TrueView Ad Groups, TrueView Ads and deals. SDF are the first step on the path to full automation in DV360, but only allow you to automate so far, as we'll explore now...
1.1 Manually create SDF
Create a copy of the Google Store product feed
Update the highlighted cells (B2:B3) on the tab called "sdf_insertionorders"
Save the updated "sdf_insertionorders" tab and "sdf_lineitems" tab to .CSV (File >> Download >> CSV)
Upload the two .CSV files together in the DV360 UI
This will create a very basic campaign, with 2 insertion orders, and 10 lineitems per insertion order.
1.2 Editing SDF programmatically
Our new LineItems are missing some important targeting and inventory controls:
* Channels (e.g. groups of publisher URLs)
* Inventory source
* Brand safety
* Geo targeting
Let’s use software to make these changes for us...
End of explanation
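Before overwriting anything, it can help to confirm the exact column headers your SDF version exposes. A quick sketch against the sdf_df DataFrame downloaded above:
# List the targeting-related columns present in this SDF version
targeting_columns = [col for col in sdf_df.columns if 'Targeting' in col]
pprint.pprint(targeting_columns)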
targeting_template = {
'Channel Targeting - Include':
'2580510;',
'Channel Targeting - Exclude':
'2580509;',
'Inventory Source Targeting - Include':
'1;',
'Inventory Source Targeting - Exclude':
'6; 8; 9; 10; 2; 11; 12; 13; 16; 20; 23; 27; 29; 30; 31; 34; 35; 36; '
'38; 43; 46; 50; 51; 56; 60; 63; 67; 74;',
'Digital Content Labels - Exclude':
'G; PG; T;',
'Brand Safety Sensitivity Setting':
'Use custom',
'Brand Safety Custom Settings':
'Adult; Alcohol; Derogatory; Downloads & Sharing; Drugs; Gambling; '
'Politics; Profanity; Religion; Sensitive social issues; Suggestive; '
'Tobacco; Tragedy; Transportation Accidents; Violence; Weapons;'
}
Explanation: Define a boilerplate targeting template that all Line Items should adhere too
End of explanation
# Overwrite targeting columns using 'targeting_template'
sdf_df['Channel Targeting - Include'] = targeting_template[
'Channel Targeting - Include']
sdf_df['Channel Targeting - Exclude'] = targeting_template[
'Channel Targeting - Exclude']
sdf_df['Inventory Source Targeting - Include'] = targeting_template[
'Inventory Source Targeting - Include']
sdf_df['Inventory Source Targeting - Exclude'] = targeting_template[
'Inventory Source Targeting - Exclude']
sdf_df['Digital Content Labels - Exclude'] = targeting_template[
'Digital Content Labels - Exclude']
sdf_df['Brand Safety Sensitivity Setting'] = targeting_template[
'Brand Safety Sensitivity Setting']
sdf_df['Brand Safety Custom Settings'] = targeting_template[
'Brand Safety Custom Settings']
# Save modified dataframe to remote storage in Colab
sdf_df.to_csv('sdf_update1_controls.csv', index=False)
# Show sample (5 rows) of DataFrame
sdf_df.head()
# Download modified csv to local storage
files.download('sdf_update1_controls.csv')
print(
"Success, check your downloads for a file called 'sdf_update1_controls.csv'"
)
Explanation: Modify latest SDF LineItems file and update the columns according to the targeting template
End of explanation
# Use today-7 to allow time for ERF files to be collected and written
yesterday = datetime.date.today() - datetime.timedelta(7)
# Download public ERF for geolocation info
request = gcs_service.objects().get_media(
bucket='gdbm-public',
object='entity/' + yesterday.strftime('%Y%m%d') + '.0.GeoLocation.json')
response = request.execute()
geolocations = json.loads(response)
print('GeoLocation.json successfully downloaded \n')
print("Here's a random sample of 5 entries:\n")
pprint.pprint(geolocations[0:5])
Explanation: Upload the output .csv file in the DV360 UI
Once the changes have been applied successfully, check the 'Targeting' controls within 'Line Item details'
1.3 SDF + Entity Read Files
What are Entity Read Files (ERFs)? ERFs are flat files (.JSON) in Google Cloud Storage that contain lookup values for DV360 entities like geographies, creatives, etc. Each DV360 entity (Advertiser, Campaign, LineItem, etc) has a corresponding .JSON file in Cloud Storage retained free-of-charge for 60 days from their processing date.
ERFs consist of 1 file per entity type, written x1 per day to two seperate Cloud buckets:
Public (10 .JSON files) - contain common public data such as GeoLocation and Language which are stored in the gdbm-public bucket (the same bucket for every DV360 user).
Private (13 .JSON files) - contain information about the DV360 Partner's campaigns, creatives, budgets and other private data and are stored in Partner-specific buckets (restricted to specific users)
Reference: https://developers.google.com/bid-manager/guides/entity-read/overview
ERFs can be used to speed up, and automate, the creation of SDF files.
Let's explore this now...
Download yesterday's GeoLocation.json from public ERF bucket using Google Cloud Storage API
End of explanation
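GeoLocation.json is only one of the public ERF files. As a rough sketch (reusing the same date variable), you can list everything written for that day in the gdbm-public bucket with objects().list():
# List all public ERF files written for the chosen date
erf_listing = gcs_service.objects().list(
    bucket='gdbm-public',
    prefix='entity/' + yesterday.strftime('%Y%m%d')).execute()
for erf_file in erf_listing.get('items', []):
  print(erf_file['name'])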
# Provide a list of store locations
store_locations = ['United Kingdom', 'France', 'Spain', 'Germany', 'Portugal']
# Create a new dictionary to save the country code and ID later on
geo_targeting_ids = {}
# Note: GeoLocation.json is over 800,000 lines
for location in geolocations:
if location['canonical_name'] in store_locations:
geo_targeting_ids[location['country_code']] = location['id']
print(location)
print(geo_targeting_ids)
Explanation: Retrieve a list of country codes / IDs from GeoLocation.json for each of our store locations
End of explanation
# Configure the sdf.download request
request_body = {
'fileTypes': ['LINE_ITEM'],
'filterType': 'CAMPAIGN_ID',
'filterIds': [CAMPAIGN_ID],
'version': SDF_VERSION
}
# Make the request to download all SDF LineItems for your new campaign
request = dbm_service.sdf().download(body=request_body)
response = request.execute()
# Load SDF response to Pandas DataFrame
sdf_df = pd.read_csv(io.StringIO(response['lineItems']))
# Show sample (5 rows) of DataFrame
sdf_df.head()
Explanation: Download the latest SDF LineItems (because we've made changes since our last download)
End of explanation
for country in geo_targeting_ids:
target_country = geo_targeting_ids[country]
sdf_df.loc[sdf_df.Name.str.contains(country),
'Geography Targeting - Include'] = f'{target_country};'
# Save modified dataframe to remote storage in Colab
sdf_df.to_csv('sdf_update2_geo.csv', index=False)
# Display updated DataFrame
sdf_df.head()
# Download modified csv to local storage
files.download('sdf_update2_geo.csv')
print("Success, look for a file called 'sdf_update2_geo.csv' in your downloads folder")
Explanation: Modify the contents of the latest SDF output, then save a new CSV with updated Geo Targeting IDs
End of explanation
# Upload product feed using Colab's upload utility
product_feed_csv = files.upload()
contents = next(iter(product_feed_csv.values())).decode('utf-8')
products = csv.DictReader(io.StringIO(contents))
image_url_list = {}
# Iterate through each row and update dict() with sku:link
for row in products:
image_url_list[row['sku']] = row['image_link']
pprint.pprint(image_url_list)
Explanation: Upload the output .csv file in the DV360 UI
Once the changes have been applied successfully, check the 'Targeting' controls within 'Line Item details'
1.4 SDF + Cloud Vision API
Next, let's look at how you can utilise external APIs.
Download the 'product_feed' tab from Google Store as CSV (File >> Download >> CSV)
Execute the following code block and upload 'product_feed.csv'
This will create a new Python dictionary (key:value pairing), mapping SKUs with their image link
Warning: Cloud Vision API is paid product, utilising the following example in your own Cloud project will incur costs.
Try out the Cloud Vision API for free at cloud.google.com/vision
End of explanation
def vision_analysis(image_url):
Process images using the Cloud Vision API.
# Assign image URL
image = vision.Image()
image.source.image_uri = image_url
# Instantiates a Vision client
client = vision.ImageAnnotatorClient(credentials=credentials)
# Performs label detection on the image file
vision_response = client.label_detection(image=image)
dv360_targeting_keywords = []
labels = []
for label in vision_response.label_annotations:
dv360_targeting_keywords.append(label.description)
label = f'{label.description} ({label.score:.2%})'
labels.append(label)
return dv360_targeting_keywords, labels
Explanation: Define a function to send images to the Cloud Vision API
End of explanation
imageslookup = {}
for sku, url in image_url_list.items():
imageslookup[sku], vision_labels = vision_analysis(url)
print(f'Analysis completed for: {url}')
print('Labels (confidence score):')
pprint.pprint(vision_labels, indent=4)
print('=' * 30)
print('\n\nLookup table:')
pprint.pprint(imageslookup, indent=4)
Explanation: Run our images through the function, and return a lookup table
End of explanation
# Configure the sdf.download request
request_body = {
'fileTypes': ['LINE_ITEM'],
'filterType': 'CAMPAIGN_ID',
'filterIds': [CAMPAIGN_ID],
'version': SDF_VERSION
}
request = dbm_service.sdf().download(body=request_body)
response = request.execute()
# Load SDF response to Pandas DataFrame
sdf_df = pd.read_csv(io.StringIO(response['lineItems']))
for product in imageslookup:
sdf_df.loc[sdf_df.Name.str.contains(product),
'Keyword Targeting - Include'] = ';'.join(
imageslookup[product]).lower()
# Save modified dataframe to remote storage in Colab
sdf_df.to_csv('sdf_update3_keywords.csv', index=False)
# Show sample (5 rows) of DataFrame
sdf_df.head()
# Download modified csv to local storage
files.download('sdf_update3_keywords.csv')
print("Success, look for the file called 'sdf_update3_keywords.csv' in your downloads folder")
Explanation: Now we have our new labels from the Vision API, we need to write these into the keywords targeting field
End of explanation
# Define DV360 report definition (i.e. metrics and filters)
report_definition = {
'params': {
'type': 'TYPE_GENERAL',
'metrics': [
'METRIC_IMPRESSIONS', 'METRIC_CLICKS', 'METRIC_CTR',
'METRIC_REVENUE_ADVERTISER'
],
'groupBys': [
'FILTER_ADVERTISER', 'FILTER_INSERTION_ORDER', 'FILTER_LINE_ITEM',
'FILTER_ADVERTISER_CURRENCY'
],
'filters': [{
'type': 'FILTER_ADVERTISER',
'value': ADVERTISER_ID
}],
},
'metadata': {
'title': 'DV360 Automation API-generated report',
'dataRange': 'LAST_90_DAYS',
'format': 'csv'
},
'schedule': {
'frequency': 'ONE_TIME'
}
}
# Create new query using report definition
operation = dbm_service.queries().createquery(body=report_definition).execute()
pprint.pprint(operation)
# Runs the given Queries.getquery request, retrying with an exponential
# backoff. Returns completed operation. Will raise an exception if the
# operation takes more than five hours to complete.
@retry.Retry(
predicate=retry.if_exception_type(Exception),
initial=5,
maximum=60,
deadline=18000)
def check_get_query_completion(getquery_request):
Queries metadata to check for completion.
completion_response = getquery_request.execute()
pprint.pprint(completion_response)
if completion_response['metadata']['running']:
raise Exception('The operation has not completed.')
return completion_response
getquery_request = dbm_service.queries().getquery(queryId=operation['queryId'])
getquery_response = check_get_query_completion(getquery_request)
report_url = getquery_response['metadata'][
'googleCloudStoragePathForLatestReport']
# Use skipfooter to remove report footer from data
report_df = pd.read_csv(report_url, skipfooter=16, engine='python')
report_df.head(10)
# Define our 'KPIs'
ctr_target = 0.15
imp_threshold = 10000
# Convert IDs to remove decimal point, then string
report_df['Line Item ID'] = report_df['Line Item ID'].apply(int)
poor_performers = report_df.query(
'Impressions > @imp_threshold & (Clicks / Impressions)*100 < @ctr_target')
# Convert results to Python list
poor_performers = list(poor_performers['Line Item ID'])
print(f'There are {len(poor_performers)} LineItems with a CTR'
f' < {ctr_target}% and over {imp_threshold} impressions:'
f'\n{poor_performers}')
Explanation: Upload the output .csv file in the DV360 UI
Once the changes have been applied successfully, check the 'Targeting' controls within 'Line Item details'
1.5 Optimisation using Reports
Next, we'll look at how you could combine reporting data with operations such as optimising bid multipliers or deactivating activity. Note: your new campaign has no performance history, so we'll use an existing campaign for this exercise.
End of explanation
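One optional bit of housekeeping: the query above is a one-time report, so once the results have been pulled you may want to remove it from your report list. A minimal sketch using the same queryId:
# Optional clean-up: delete the one-time query after downloading the report
dbm_service.queries().deletequery(queryId=operation['queryId']).execute()
print(f"Deleted query {operation['queryId']}")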
# Configure the sdf.download request
request_body = {
'fileTypes': ['LINE_ITEM'],
'filterType': 'CAMPAIGN_ID',
'filterIds': ['1914007'],
'version': SDF_VERSION
}
request = dbm_service.sdf().download(body=request_body)
response = request.execute()
# Load SDF response to Pandas DataFrame
sdf_df = pd.read_csv(io.StringIO(response['lineItems']))
for li in poor_performers:
geo = sdf_df.loc[sdf_df['Line Item Id'] == li,
'Geography Targeting - Include'].iloc[0]
sdf_df.loc[sdf_df['Line Item Id'] == li,
'Bid Multipliers'] = f'(geo; {geo} 0.5;);'
# Save modified dataframe to remote storage in Colab
sdf_df.to_csv('sdf_update4_bidmultipliers.csv', index=False)
# Display updated DataFrame
sdf_df.head()
files.download('sdf_update4_bidmultipliers.csv')
print('Success, your new SDF file has been downloaded')
Explanation: Download an updated SDF LineItems file, and if the LineItem ID is in the poor performers list, add a Geo bid multiplier to half the bids (0.5)
End of explanation
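For reference, here is a tiny illustration of what the 'Bid Multipliers' string built in that step looks like for a single geography; the geo id used is a hypothetical placeholder standing in for whatever the 'Geography Targeting - Include' column holds.
# Illustration only: the bid-multiplier string for one (hypothetical) geo id
geo = '2276;'  # hypothetical value read from 'Geography Targeting - Include'
print(f'(geo; {geo} 0.5;);')  # prints: (geo; 2276; 0.5;);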
#TODO
Explanation: Note that the only rows included in the output are those we want to modify.
Upload the output .csv file in the DV360 UI
Once the changes have been applied successfully, check the 'Targeting' controls within 'Line Item details'
1.6 Challenge
Challenge: update your campaign with both language and audience targeting.
All Lineitems should target the following Google audiences
Affinity Categories » Technology » Mobile Enthusiasts
Affinity Categories » Technology » Technophiles » High-End Computer Aficionado
In-Market Categories » Consumer Electronics
LineItems for France, should be targeted at French speakers
LineItems for Great Britain, should be targeted at English speakers
Tips
Google Audience IDs can be found in the DV360 UI or by downloading an SDF with an existing audience applied
Language IDs can be found in the Language.json ERF file or by downloading an SDF with the language already applied
End of explanation
# Format today-2 in required date format
yesterday = (datetime.date.today() - datetime.timedelta(2)).strftime('%Y%m%d')
# Download ERF for Language.json from public GCS bucket
request = gcs_service.objects().get_media(
bucket='gdbm-public', object='entity/' + yesterday + '.0.Language.json')
response = request.execute()
languages = json.loads(response)
language_targets = ['en', 'fr']
lang_targeting_ids = {}
# Search language.json for language targets 'en' and 'fr'
for lang in languages:
if lang['code'] in language_targets:
lang_targeting_ids[lang['code']] = lang['id']
print(lang)
print(lang_targeting_ids)
# Define targeting template
targeting_template = {
'Affinity & In Market Targeting - Include': '4569529;4586809;4497529;',
}
# Configure the sdf.download request
request_body = {
'fileTypes': ['LINE_ITEM'],
'filterType': 'CAMPAIGN_ID',
'filterIds': [CAMPAIGN_ID],
'version': SDF_VERSION
}
request = dbm_service.sdf().download(body=request_body)
response = request.execute()
# Load SDF response to Pandas DataFrame
sdf_df = pd.read_csv(io.StringIO(response['lineItems']))
# Update DataFrame with Language and Audience targeting
sdf_df.loc[sdf_df.Name.str.contains('GB'),
'Language Targeting - Include'] = f"{lang_targeting_ids['en']};"
sdf_df.loc[sdf_df.Name.str.contains('FR'),
'Language Targeting - Include'] = f"{lang_targeting_ids['fr']};"
sdf_df['Affinity & In Market Targeting - Include'] = targeting_template[
'Affinity & In Market Targeting - Include']
# Save modified dataframe to remote storage in Colab
sdf_df.to_csv('sdf_update5_challenge.csv', index=False)
# Display updated DataFrame
sdf_df.head()
# Download file to disk using Colab syntax
files.download('sdf_update5_challenge.csv')
print("Success, check your downloads for a file called 'sdf_update5_challenge.csv'")
Explanation: Solution
End of explanation
def download_sdf(request_body):
Download sdf .zip, extract .csv files, load 'SDF-LineItems.csv' to Pandas DataFrame.
# Create the sdfdownloadtask
sdf_operation = display_video_service.sdfdownloadtasks().create(
body=sdf_body).execute()
print(f'Operation {sdf_operation["name"]} was created.')
# Configure the operations.get request
get_request = display_video_service.sdfdownloadtasks().operations().get(
name=sdf_operation['name'])
# Runs the given operations.get request, retrying with an exponential
# backoff. Returns completed operation. Will raise an exception if the
# operation takes more than five hours to complete.
@retry.Retry(predicate=retry.if_exception_type(Exception),
initial=5, maximum=60, deadline=18000)
def check_sdf_downloadtask_completion(get_request):
operation = get_request.execute()
if 'done' not in operation:
raise Exception('The operation has not completed.')
return operation
# Get current status of operation with exponential backoff retry logic
operation = check_sdf_downloadtask_completion(get_request)
# Check if the operation finished with an error and return
if 'error' in operation:
raise Exception(f'The operation finished in error with code {operation["error"]["code"]} {operation["error"]["message"]}')
print('The operation completed successfully.')
  print(f'Resource {operation["response"]["resourceName"]} was created.')
# Extract download file resource name to use in download request
resource_name = operation['response']['resourceName']
# Configure the Media.download request
dowload_request = display_video_service.media().download_media(
resourceName=resource_name)
output_file = f"{resource_name.replace('/','-')}.zip"
# Create output stream for downloaded file
outstream = io.FileIO(output_file, mode='wb')
# Make downloader object
downloader = http.MediaIoBaseDownload(outstream, dowload_request)
# Download media file in chunks until finished
download_finished = False
while download_finished is False:
_, download_finished = downloader.next_chunk()
print(f'File downloaded to {output_file}')
# Load output into a Pandas dataframe
df = pd.read_csv(output_file, compression='zip')
return df
print('Download SDF function created')
Explanation: Upload the output .csv file in the DV360 UI
1B) SDF using DV360 API
Reference: https://developers.google.com/display-video/api/reference/rest/v1/sdfdownloadtasks/create
Structured Data Files (SDF) are a way of using spreadsheets to make bulk changes to DV360 entities, including Campaigns, Insertion Orders, Line Items, TrueView Ad Groups, TrueView Ads and deals. SDF are the first step on the path to full automation in DV360, but only allow you to automate so far, as we'll explore now...
1.1 Manually create SDF
Create a copy of the Google Store product feed
Update the highlighted cells (B2:B3) on the tab called "sdf_insertionorders"
Save the updated "sdf_insertionorders" tab and "sdf_lineitems" tab to .CSV (File >> Download >> CSV)
Upload the two .CSV files together in the DV360 UI
This will create a very basic campaign, with 2 insertion orders, and 10 lineitems per insertion order.
1.2 Editing SDF programmatically
Our new LineItems are missing some important targeting and inventory controls:
* Channels (e.g. groups of publisher URLs)
* Inventory source
* Brand safety
* Geo targeting
Let’s use software to make these changes for us...
Create a function to download SDFs
As we'll be downloading multiple SDF files in the next exercises, we've created a function to handle the download process for us.
End of explanation
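One caveat worth noting: pandas can only read a zip archive directly when it contains a single file, which holds here because the request asks for FILE_TYPE_LINE_ITEM only. If you ever request several file types at once, a sketch along these lines (the archive and member names below are hypothetical) would be needed instead:
import zipfile

# Rough sketch: pulling a named file out of a multi-file SDF archive
with zipfile.ZipFile('sdf-archive.zip') as archive:  # hypothetical archive name
  print(archive.namelist())  # e.g. ['SDF-LineItems.csv', 'SDF-InsertionOrders.csv']
  with archive.open('SDF-LineItems.csv') as f:  # assumed member name
    lineitems_df = pd.read_csv(f)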
targeting_template = {
'Channel Targeting - Include':
'2580510;',
'Channel Targeting - Exclude':
'2580509;',
'Inventory Source Targeting - Include':
'1;',
'Inventory Source Targeting - Exclude':
'6; 8; 9; 10; 2; 11; 12; 13; 16; 20; 23; 27; 29; 30; 31; 34; 35; 36; '
'38; 43; 46; 50; 51; 56; 60; 63; 67; 74;',
'Digital Content Labels - Exclude':
'G; PG; T;',
'Brand Safety Sensitivity Setting':
'Use custom',
'Brand Safety Custom Settings':
'Adult; Alcohol; Derogatory; Downloads & Sharing; Drugs; Gambling; '
'Politics; Profanity; Religion; Sensitive social issues; Suggestive; '
'Tobacco; Tragedy; Transportation Accidents; Violence; Weapons;'
}
Explanation: Define a boilerplate targeting template that all Line Items should adhere too
End of explanation
# Configure the sdfdownloadtasks.create request
sdf_body = {
'version': SDF_VERSION_DV360,
'advertiserId': ADVERTISER_ID,
'parentEntityFilter': {
'fileType': ['FILE_TYPE_LINE_ITEM'],
'filterType': 'FILTER_TYPE_CAMPAIGN_ID',
'filterIds': [CAMPAIGN_ID]
}
}
# Fetch updated SDF lineitem
sdf_df = download_sdf(sdf_body)
# Overwrite targeting columns using 'targeting_template'
sdf_df['Channel Targeting - Include'] = targeting_template[
'Channel Targeting - Include']
sdf_df['Channel Targeting - Exclude'] = targeting_template[
'Channel Targeting - Exclude']
sdf_df['Inventory Source Targeting - Include'] = targeting_template[
'Inventory Source Targeting - Include']
sdf_df['Inventory Source Targeting - Exclude'] = targeting_template[
'Inventory Source Targeting - Exclude']
sdf_df['Digital Content Labels - Exclude'] = targeting_template[
'Digital Content Labels - Exclude']
sdf_df['Brand Safety Sensitivity Setting'] = targeting_template[
'Brand Safety Sensitivity Setting']
sdf_df['Brand Safety Custom Settings'] = targeting_template[
'Brand Safety Custom Settings']
# Save modified dataframe to remote storage in Colab
sdf_df.to_csv('sdf_update1_controls.csv', index=False)
# Show sample (5 rows) of DataFrame
sdf_df.head()
# Download modified csv to local storage
files.download('sdf_update1_controls.csv')
print(
"Success, check your downloads for a file called 'sdf_update1_controls.csv'"
)
Explanation: Modify latest SDF LineItems file and update the columns according to the targeting template
End of explanation
# Use today-7 to allow time for ERF files to be collected and written
yesterday = datetime.date.today() - datetime.timedelta(7)
# Download public ERF for geolocation info
request = gcs_service.objects().get_media(
bucket='gdbm-public',
object='entity/' + yesterday.strftime('%Y%m%d') + '.0.GeoLocation.json')
response = request.execute()
geolocations = json.loads(response)
print('GeoLocation.json successfully downloaded \n')
print("Here's a random sample of 5 entries:\n")
pprint.pprint(geolocations[0:5])
Explanation: Upload the output .csv file in the DV360 UI
Once the changes have been applied successfully, check the 'Targeting' controls within 'Line Item details'
1.3 SDF + Entity Read Files
What are Entity Read Files (ERFs)? ERFs are flat files (.JSON) in Google Cloud Storage that contain lookup values for DV360 entities like geographies, creatives, etc. Each DV360 entity (Advertiser, Campaign, LineItem, etc) has a corresponding .JSON file in Cloud Storage retained free-of-charge for 60 days from their processing date.
ERFs consist of 1 file per entity type, written x1 per day to two seperate Cloud buckets:
Public (10 .JSON files) - contain common public data such as GeoLocation and Language which are stored in the gdbm-public bucket (the same bucket for every DV360 user).
Private (13 .JSON files) - contain information about the DV360 Partner's campaigns, creatives, budgets and other private data and are stored in Partner-specific buckets (restricted to specific users)
Reference: https://developers.google.com/bid-manager/guides/entity-read/overview
ERFs can be used to speed up, and automate, the creation of SDF files.
Let's explore this now...
Download yesterday's GeoLocation.json from public ERF bucket using Google Cloud Storage API
End of explanation
# Provide a list of store locations
store_locations = ['United Kingdom', 'France', 'Spain', 'Germany', 'Portugal']
# Create a new dictionary to save the country code and ID later on
geo_targeting_ids = {}
# Note: GeoLocation.json is over 800,000 lines
for location in geolocations:
if location['canonical_name'] in store_locations:
geo_targeting_ids[location['country_code']] = location['id']
print(location)
print(geo_targeting_ids)
Explanation: Retrieve a list of country codes / IDs from GeoLocation.json for each of our store locations
End of explanation
# Configure the sdfdownloadtasks.create request
sdf_body = {
'version': SDF_VERSION_DV360,
'advertiserId': ADVERTISER_ID,
'parentEntityFilter': {
'fileType': ['FILE_TYPE_LINE_ITEM'],
'filterType': 'FILTER_TYPE_CAMPAIGN_ID',
'filterIds': [CAMPAIGN_ID]
}
}
sdf_df = download_sdf(sdf_body)
sdf_df.head()
Explanation: Download the latest SDF LineItems (because we've made changes since our last download)
End of explanation
for country in geo_targeting_ids:
target_country = geo_targeting_ids[country]
sdf_df.loc[sdf_df.Name.str.contains(country),
'Geography Targeting - Include'] = f'{target_country};'
# Save modified dataframe to remote storage in Colab
sdf_df.to_csv('sdf_update2_geo.csv', index=False)
# Display updated DataFrame
sdf_df.head()
# Download modified csv to local storage
files.download('sdf_update2_geo.csv')
print("Success, see file 'sdf_update2_geo.csv' in your downloads folder")
Explanation: Modify the contents of the latest SDF output, then save a new CSV with updated Geo Targeting IDs
End of explanation
# Upload product feed using Colab's upload utility
product_feed_csv = files.upload()
contents = next(iter(product_feed_csv.values())).decode('utf-8')
products = csv.DictReader(io.StringIO(contents))
image_url_list = {}
# Iterate through each row and update dict() with sku:link
for row in products:
image_url_list[row['sku']] = row['image_link']
pprint.pprint(image_url_list)
Explanation: Upload the output .csv file in the DV360 UI
Once the changes have been applied successfully, check the 'Targeting' controls within 'Line Item details'
1.4 SDF + Cloud Vision API
Next, let's look at how you can utilise external APIs.
Download the 'product_feed' tab from Google Store as CSV (File >> Download >> CSV)
Execute the following code block and upload 'product_feed.csv'
This will create a new Python dictionary (key:value pairing), mapping SKUs with their image link
Warning: Cloud Vision API is paid product, utilising the following example in your own Cloud project will incur costs.
Try out the Cloud Vision API for free at cloud.google.com/vision
End of explanation
def vision_analysis(image_url):
Process images using the Cloud Vision API.
# Assign image URL
image = vision.Image()
image.source.image_uri = image_url
# Instantiates a Vision client
client = vision.ImageAnnotatorClient(credentials=credentials)
# Performs label detection on the image file
response = client.label_detection(image=image)
dv360_targeting_keywords = []
vision_labels = []
for label in response.label_annotations:
dv360_targeting_keywords.append(label.description)
label = f'{label.description} ({label.score:.2%})'
vision_labels.append(label)
return dv360_targeting_keywords, vision_labels
print("Vision function created")
Explanation: Define a function to send images to the Cloud Vision API
End of explanation
imageslookup = {}
for sku, url in image_url_list.items():
imageslookup[sku], vision_labels = vision_analysis(url)
print(f'Analysis completed for: {url}')
print('Labels (confidence score):')
pprint.pprint(vision_labels, indent=4)
print('=' * 30)
Explanation: Run our images through the function, and return a lookup table
End of explanation
print('\n\nLookup table:')
pprint.pprint(imageslookup, indent=4)
Explanation: View the results of our Vision analysis:
End of explanation
# Configure the sdfdownloadtasks.create request
sdf_body = {
'version': SDF_VERSION_DV360,
'advertiserId': ADVERTISER_ID,
'parentEntityFilter': {
'fileType': ['FILE_TYPE_LINE_ITEM'],
'filterType': 'FILTER_TYPE_CAMPAIGN_ID',
'filterIds': [CAMPAIGN_ID]
}
}
sdf_df = download_sdf(sdf_body)
sdf_df.head()
Explanation: Download the latest SDF LineItems (because we've made changes since our last download)
End of explanation
for product in imageslookup:
sdf_df.loc[sdf_df.Name.str.contains(product),
'Keyword Targeting - Include'] = ';'.join(
imageslookup[product]).lower()
# Save modified dataframe to remote storage in Colab
sdf_df.to_csv('sdf_update3_keywords.csv', index=False)
sdf_df.head()
# Download modified csv to local storage
files.download('sdf_update3_keywords.csv')
print("Success, see 'sdf_update3_keywords.csv' in your downloads folder")
Explanation: Now we have our new labels from the Vision API, we need to write these into the keywords targeting field
End of explanation
# Define DV360 report definition (i.e. metrics and filters)
report_definition = {
'params': {
'type': 'TYPE_GENERAL',
'metrics': [
'METRIC_IMPRESSIONS', 'METRIC_CLICKS', 'METRIC_CTR',
'METRIC_REVENUE_ADVERTISER'
],
'groupBys': [
'FILTER_ADVERTISER', 'FILTER_INSERTION_ORDER', 'FILTER_LINE_ITEM',
'FILTER_ADVERTISER_CURRENCY'
],
'filters': [{
'type': 'FILTER_ADVERTISER',
'value': ADVERTISER_ID
}],
},
'metadata': {
'title': 'DV360 Automation API-generated report',
'dataRange': 'LAST_90_DAYS',
'format': 'csv'
},
'schedule': {
'frequency': 'ONE_TIME'
}
}
# Create new query using report definition
operation = dbm_service.queries().createquery(body=report_definition).execute()
pprint.pprint(operation)
# Runs the given Queries.getquery request, retrying with an exponential
# backoff. Returns completed operation. Will raise an exception if the
# operation takes more than five hours to complete.
@retry.Retry(predicate=retry.if_exception_type(Exception),
initial=5, maximum=60, deadline=18000)
def check_get_query_completion(getquery_request):
response = getquery_request.execute()
pprint.pprint(response)
if response['metadata']['running']:
raise Exception('The operation has not completed.')
return response
getquery_request = dbm_service.queries().getquery(queryId=operation['queryId'])
response = check_get_query_completion(getquery_request)
report_url = response['metadata']['googleCloudStoragePathForLatestReport']
# Use skipfooter to remove report footer from data
report_df = pd.read_csv(report_url, skipfooter=16, engine='python')
report_df.head(10)
# Define our 'KPIs'
ctr_target = 0.15
imp_threshold = 1000
# Convert IDs to remove decimal point, then string
report_df['Line Item ID'] = report_df['Line Item ID'].apply(int)
poor_performers = report_df.query(
'Impressions > @imp_threshold & (Clicks / Impressions)*100 < @ctr_target')
# Convert results to Python list
poor_performers = list(poor_performers['Line Item ID'])
print(f'There are {len(poor_performers)} LineItems with a CTR'
f' < {ctr_target}% and over {imp_threshold} impressions:'
f'\n{poor_performers}')
Explanation: Upload the output .csv file in the DV360 UI
Once the changes have been applied successfully, check the 'Targeting' controls within 'Line Item details'
1.5 Optimisation using Reports
Next, we'll look at how you could combine reporting data with operations such as optimising bid multipliers or deactivating activity. Note: your new campaign has no performance history, so we'll use an existing campaign for this exercise.
End of explanation
# Configure the sdfdownloadtasks.create request
sdf_body = {
'version': SDF_VERSION_DV360,
'advertiserId': ADVERTISER_ID,
'parentEntityFilter': {
'fileType': ['FILE_TYPE_LINE_ITEM'],
'filterType': 'FILTER_TYPE_CAMPAIGN_ID',
'filterIds': ['1914007']
}
}
sdf_df = download_sdf(sdf_body)
sdf_df.head()
for li in poor_performers:
geo = sdf_df.loc[sdf_df['Line Item Id'] == li,
'Geography Targeting - Include'].iloc[0]
sdf_df.loc[sdf_df['Line Item Id'] == li,
'Bid Multipliers'] = f'(geo; {geo} 0.5;);'
# Save modified dataframe to remote storage in Colab
sdf_df.to_csv('sdf_update4_bidmultipliers.csv', index=False)
# Display updated DataFrame
sdf_df.head()
files.download('sdf_update4_bidmultipliers.csv')
print('Success, your new SDF file has been downloaded')
Explanation: Download an updated SDF LineItems file, and if the LineItem ID is in the poor performers list, add a Geo bid multiplier to half the bids (0.5)
End of explanation
#TODO
Explanation: Upload the output .csv file in the DV360 UI
Once the changes have been applied successfully, check the 'Targeting' controls within 'Line Item details'
1.6 Challenge
Challenge: update your campaign with both language and audience targeting.
All Lineitems should target the following Google audiences
Affinity Categories » Technology » Mobile Enthusiasts
Affinity Categories » Technology » Technophiles » High-End Computer Aficionado
In-Market Categories » Consumer Electronics
LineItems for France, should be targeted at French speakers
LineItems for Great Britain, should be targeted at English speakers
Tips
Google Audience IDs can be found in the DV360 UI or by downloading an SDF with an existing audience applied
Language IDs can be found in the Language.json ERF file or by downloading an SDF with the language already applied
End of explanation
# Format today-7 in required date format
yesterday = (datetime.date.today() - datetime.timedelta(7)).strftime('%Y%m%d')
# Download ERF for Language.json from public GCS bucket
request = gcs_service.objects().get_media(
bucket='gdbm-public', object='entity/' + yesterday + '.0.Language.json')
response = request.execute()
languages = json.loads(response)
language_targets = ['en', 'fr']
lang_targeting_ids = {}
# Search language.json for language targets 'en' and 'fr'
for lang in languages:
if lang['code'] in language_targets:
lang_targeting_ids[lang['code']] = lang['id']
print(lang)
print(lang_targeting_ids)
# Define targeting template
targeting_template = {
'Affinity & In Market Targeting - Include': '4569529;4586809;4497529;',
}
# Configure the sdfdownloadtasks.create request
sdf_body = {
'version': SDF_VERSION_DV360,
'advertiserId': ADVERTISER_ID,
'parentEntityFilter': {
'fileType': ['FILE_TYPE_LINE_ITEM'],
'filterType': 'FILTER_TYPE_CAMPAIGN_ID',
'filterIds': [CAMPAIGN_ID]
}
}
sdf_df = download_sdf(sdf_body)
# Update DataFrame with Language and Audience targeting
sdf_df.loc[sdf_df.Name.str.contains('GB'),
'Language Targeting - Include'] = f"{lang_targeting_ids['en']};"
sdf_df.loc[sdf_df.Name.str.contains('FR'),
'Language Targeting - Include'] = f"{lang_targeting_ids['fr']};"
sdf_df['Affinity & In Market Targeting - Include'] = targeting_template[
'Affinity & In Market Targeting - Include']
# Save modified dataframe to remote storage in Colab
sdf_df.to_csv('sdf_update5_challenge.csv', index=False)
# Display updated DataFrame
sdf_df.head()
# Download file to disk using Colab syntax
files.download('sdf_update5_challenge.csv')
print("Success, see downloads folder for file 'sdf_update5_challenge.csv'")
Explanation: Solution
End of explanation
request = display_video_service.advertisers().lineItems().list(
advertiserId=ADVERTISER_ID,
filter='entityStatus="ENTITY_STATUS_ACTIVE"',
pageSize=1
)
response = request.execute()
# Check if response is empty.
if not response:
print('Advertiser has no active Line Items')
else:
pprint.pprint(response['lineItems'])
def get_active_lineitems(ADVERTISER_ID, CAMPAIGN_ID):
Returns list of Lineitems with active status.
list_lineitems = display_video_service.advertisers().lineItems().list(
advertiserId=ADVERTISER_ID,
filter=f'entityStatus="ENTITY_STATUS_ACTIVE" AND campaignId="{CAMPAIGN_ID}"',
fields='lineItems(lineItemId,displayName)' # Return only two fields
).execute()
active_lineitems = [li['lineItemId'] for li in list_lineitems['lineItems']]
return active_lineitems
Explanation: Upload the output .csv file in the DV360 UI
2) Display & Video 360 API
What is the Display & Video 360 API? The Display & Video 360 API (formerly known as the DV360 Write API) is the programmatic interface for the Display & Video 360 platform. It allows developers to easily and efficiently automate complex Display & Video 360 workflows, such as creating insertion orders and setting targeting options for individual line items.
We'll use it now to build upon the campaign we created earlier using SDF.
Reference: https://developers.google.com/display-video/api/reference/rest
2.1 Campaign builds
Check Advertiser (ADVERTISER_ID) has active Lineitems
End of explanation
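If the list call comes back empty when you expect line items, it's worth confirming the credentials can read the advertiser at all. A minimal sketch:
# Confirm the advertiser is reachable with the current credentials
advertiser = display_video_service.advertisers().get(
    advertiserId=ADVERTISER_ID).execute()
print(f"Connected to advertiser {advertiser['advertiserId']}: {advertiser['displayName']}")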
# Upload product feed using Colab's upload utility
product_feed_csv = files.upload()
contents = next(iter(product_feed_csv.values())).decode('utf-8')
products = list(csv.DictReader(io.StringIO(contents)))
# Create unique list of country-codes -- set() automatically de dupes
unique_country_codes = set([row['country code'] for row in products])
print(unique_country_codes)
Explanation: Upload the extended feed for Google Store's new territories: Spain, Germany and Portugal.
End of explanation
def create_insertion_order(parent_campaign_id, new_io_name):
Creates a new DV360 insertion order object.
# Define our new Insertion Order boilerplate
new_insertion_order = {
'campaignId': parent_campaign_id,
'displayName': new_io_name, # Define naming convention
'entityStatus': 'ENTITY_STATUS_DRAFT',
'pacing': {
'pacingPeriod': 'PACING_PERIOD_DAILY',
'pacingType': 'PACING_TYPE_EVEN',
'dailyMaxMicros': '1000000' # Equiv to $1 or local currency
},
'frequencyCap': {
'unlimited': False,
'timeUnit': 'TIME_UNIT_MONTHS',
'timeUnitCount': 1,
'maxImpressions': 5
},
'performanceGoal': {
'performanceGoalType': 'PERFORMANCE_GOAL_TYPE_CPC',
'performanceGoalAmountMicros': '1000000', # $1 CPM/CPC target
},
'bidStrategy': {
'fixedBid': {
'bidAmountMicros': '0'
},
},
'budget': {
'automationType':
'INSERTION_ORDER_AUTOMATION_TYPE_NONE',
'budgetUnit':
'BUDGET_UNIT_CURRENCY',
'budgetSegments': [{
'budgetAmountMicros':
'30000000', # Equiv to $30 or local currency
'description': 'My first segment',
'dateRange': {
'startDate': {
'year': year,
'month': month,
'day': day
},
'endDate': {
'year': year_plus30,
'month': month_plus30,
'day': day_plus30
}
}
}]
}
}
# API create() request to generate new Insertion Order
newinsertionorder_request = display_video_service.advertisers(
).insertionOrders().create(
advertiserId=ADVERTISER_ID, body=new_insertion_order).execute()
# Define patch to activate new Insertion Order afer creation
patch = {
'entityStatus': 'ENTITY_STATUS_ACTIVE',
}
# API patch() request
display_video_service.advertisers().insertionOrders().patch(
advertiserId=ADVERTISER_ID,
insertionOrderId=newinsertionorder_request['insertionOrderId'],
updateMask='entityStatus',
body=patch).execute()
print(newinsertionorder_request)
return newinsertionorder_request
print('Insertion Order function created')
Explanation: Create Insertion Order template
Here we're defining a new function called 'create_insertion_order'. Note: all new Insertion Orders and Line Items created using the DV360 API are created in 'Draft' mode (as a safety mechanism), and must be activated with a second API call, or via the UI (e.g. manually by a trader).
End of explanation
def create_lineitem(parent_io_id, new_li_name):
Creates a new DV360 lineitem object.
# Define our new LineItem boilerplate
new_lineitem = {
'advertiserId': ADVERTISER_ID,
'insertionOrderId': parent_io_id,
'displayName': new_li_name, # Define naming convention
'lineItemType': 'LINE_ITEM_TYPE_DISPLAY_DEFAULT',
'entityStatus': 'ENTITY_STATUS_DRAFT',
'flight': {
'flightDateType': 'LINE_ITEM_FLIGHT_DATE_TYPE_INHERITED',
},
'pacing': {
'pacingPeriod': 'PACING_PERIOD_DAILY',
'pacingType': 'PACING_TYPE_EVEN',
'dailyMaxMicros': '1000000'
},
'frequencyCap': {
'timeUnit': 'TIME_UNIT_MONTHS',
'timeUnitCount': 1,
'maxImpressions': 5
},
'partnerRevenueModel': {
'markupType': 'PARTNER_REVENUE_MODEL_MARKUP_TYPE_TOTAL_MEDIA_COST_MARKUP'
},
'budget': {
'budgetAllocationType': 'LINE_ITEM_BUDGET_ALLOCATION_TYPE_UNLIMITED',
'budgetUnit': 'BUDGET_UNIT_CURRENCY'
},
'bidStrategy': {
'fixedBid': {
'bidAmountMicros': '1000000'
}
}
}
# API create() request to generate new Lineitem
newlineitem_request = display_video_service.advertisers().lineItems().create(
advertiserId=ADVERTISER_ID, body=new_lineitem).execute()
# Define patch to activate new Line Item afer creation
patch = {
'entityStatus': 'ENTITY_STATUS_ACTIVE',
}
# API patch() request
display_video_service.advertisers().lineItems().patch(
advertiserId=ADVERTISER_ID,
lineItemId=newlineitem_request['lineItemId'],
updateMask='entityStatus',
body=patch).execute()
print(newlineitem_request)
return newlineitem_request
print('LineItem function created')
Explanation: Create LineItem template
Here we define a new function called 'create_lineitem', based on a template we specified.
Note: the following template does not include any targeting controls by default. Normally, we strongly encourage the addition of targeting before activating a line item.
End of explanation
%%time
for country_code in unique_country_codes:
# Create() and patch() new Insertion Order
io_name = f'Google Store | {country_code} | Display | Prospecting'
insertionorder = create_insertion_order(CAMPAIGN_ID, io_name)
for row in products:
if country_code in row['country code']:
# Create() and patch() new LineItem
li_name = f"{row['country code']} | {row['title']} | {row['sku']}"
lineitem = create_lineitem(insertionorder['insertionOrderId'], li_name)
print('Process completed')
Explanation: Build our new campaign
First, we'll loop through the list of countries generated at the beginning, and for each country, create a new Insertion Order by calling our function 'create_insertion_order'. Within that loop, we find every product that is sold in the corresponding country-code, and create a new Line Item for every matching product using our function 'create_lineitem'.
Sit tight, this one can take a while (~10 mins)...
Link to DV360 UI
End of explanation
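As an optional sanity check, the sketch below lists the Insertion Orders now sitting under the new campaign, so you can confirm one was created per country code:
# List the Insertion Orders created under the new campaign
io_response = display_video_service.advertisers().insertionOrders().list(
    advertiserId=ADVERTISER_ID,
    filter=f'campaignId="{CAMPAIGN_ID}"').execute()
for order in io_response.get('insertionOrders', []):
  print(order['insertionOrderId'], order['displayName'])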
# Create the page token variable.
next_page_token = ''
while True:
# Request the targeting options list.
response = display_video_service.targetingTypes().targetingOptions().list(
advertiserId=ADVERTISER_ID,
targetingType='TARGETING_TYPE_BROWSER',
pageToken=next_page_token).execute()
# Check if response is empty.
if not response:
print('List request returned no Targeting Options')
break
# Iterate over retrieved targeting options.
options_dict = {}
for option in response['targetingOptions']:
options_dict[
option['targetingOptionId']] = option['browserDetails']['displayName']
# Break out of loop if there is no next page.
if 'nextPageToken' not in response:
break
# Update the next page token.
next_page_token = response['nextPageToken']
pprint.pprint(options_dict)
Explanation: If successful, the result should look similar to the below in DV360:
2.2 Individual targeting
Reference: https://developers.google.com/display-video/api/guides/managing-line-items/targeting
Retrieve a list of available targeting options using targetingTypes().targetingOptions()
The following example demonstrates retrieving Browser targeting options only.
The "BrowserDetails" field is only applicable with "TARGETING_TYPE_BROWSER".
End of explanation
# Return list of Lineitems with active status
active_lineitems = get_active_lineitems(ADVERTISER_ID, CAMPAIGN_ID)
# Fetch first Lineitem ID
lineitem_id = active_lineitems[0]
# Create a assigned targeting option object.
assigned_targeting_option_obj = {
'browserDetails': {
'targetingOptionId': '500072'
}
}
# Create the assigned targeting option.
assigned_targeting_option = display_video_service.advertisers().lineItems(
).targetingTypes().assignedTargetingOptions().create(
advertiserId=ADVERTISER_ID,
lineItemId=f'{lineitem_id}',
targetingType='TARGETING_TYPE_BROWSER',
body=assigned_targeting_option_obj
).execute()
# Display the new assigned targeting option.
print(f"Assigned Targeting Option {assigned_targeting_option['name']} created.")
Explanation: Apply individual targeting criteria to single entity
End of explanation
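To confirm the option took effect, you can read the assigned targeting back from the same line item; a minimal sketch:
# Read back the browser targeting assigned to this line item
assigned_options = display_video_service.advertisers().lineItems(
).targetingTypes().assignedTargetingOptions().list(
    advertiserId=ADVERTISER_ID,
    lineItemId=f'{lineitem_id}',
    targetingType='TARGETING_TYPE_BROWSER').execute()
pprint.pprint(assigned_options.get('assignedTargetingOptions', []))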
# Create the page token variable.
next_page_token = ''
while True:
# Request the targeting options list.
response = display_video_service.googleAudiences().list(
advertiserId=ADVERTISER_ID,
filter='displayName : "Technology"',
pageToken=next_page_token).execute()
# Check if response is empty.
if not response:
print('List request returned no Targeting Options')
break
# Iterate over retrieved targeting options.
options_dict = {}
for option in response['googleAudiences']:
options_dict[option['googleAudienceId']] = [
option['displayName'], option['googleAudienceType']
]
# Break out of loop if there is no next page.
if 'nextPageToken' not in response:
break
# Update the next page token.
next_page_token = response['nextPageToken']
pprint.pprint(response)
google_audience_id = '92948'
# Return list of Lineitems with active status
active_lineitems = get_active_lineitems(ADVERTISER_ID, CAMPAIGN_ID)
# Create a assigned targeting option object.
assigned_targeting_option_obj = {
'audienceGroupDetails': {
'includedGoogleAudienceGroup': {
'settings': [{
'googleAudienceId': f'{google_audience_id}'
}]
}
}
}
pprint.pprint(assigned_targeting_option_obj)
# Update bulk targeting
for li in active_lineitems:
# Create the assigned targeting option.
assigned_targeting_option = display_video_service.advertisers().lineItems(
).targetingTypes().assignedTargetingOptions().create(
advertiserId=ADVERTISER_ID,
lineItemId=f'{li}',
targetingType='TARGETING_TYPE_AUDIENCE_GROUP',
body=assigned_targeting_option_obj).execute()
# Display the new assigned targeting option.
print(f"Targeting Option {assigned_targeting_option['name']} created.")
Explanation: Applying individual targeting criteria to multiple entities
End of explanation
def set_default_li_targeting(lineitem_id):
Sets default LineItem targeting according to standard template.
# Define 'Channels'
create_channel_assigned_targetingoptions = []
for targeting_id in ['1777746835', '1778039430']:
create_channel_assigned_targetingoptions.append(
{'channelDetails': {
'channelId': targeting_id,
'negative': False
}})
# Define 'Inventory'
create_inventory_assigned_targetingoptions = []
for targeting_id in ['1']:
create_inventory_assigned_targetingoptions.append(
{'inventorySourceDetails': {'inventorySourceId': targeting_id}}
)
# Define 'Sensitive categories'
create_sensitive_cat_assigned_targetingoptions = []
sensitive_category = [
'1163177997', '1163178297', '118521027123', '118521027843',
'118521028083', '118521028563', '118521028803', '1596254697'
]
for targeting_id in sensitive_category:
create_sensitive_cat_assigned_targetingoptions.append({
'sensitiveCategoryExclusionDetails': {
'excludedTargetingOptionId': targeting_id
}
})
# Define 'Digital content labels'
create_digital_content_assigned_targetingoptions = []
content_rating_tier = ['19875634320', '19875634200', '19875634080']
for targeting_id in content_rating_tier:
create_digital_content_assigned_targetingoptions.append({
'digitalContentLabelExclusionDetails': {
'excludedTargetingOptionId': targeting_id
}
})
  # Construct request (each createRequest takes a flat list of targeting options)
  bulk_edit_line_item_request = {
      'createRequests': [
          {
              'targetingType': 'TARGETING_TYPE_CHANNEL',
              'assignedTargetingOptions':
                  create_channel_assigned_targetingoptions
          },
          {
              'targetingType': 'TARGETING_TYPE_INVENTORY_SOURCE',
              'assignedTargetingOptions':
                  create_inventory_assigned_targetingoptions
          },
          {
              'targetingType': 'TARGETING_TYPE_SENSITIVE_CATEGORY_EXCLUSION',
              'assignedTargetingOptions':
                  create_sensitive_cat_assigned_targetingoptions
          },
          {
              'targetingType': 'TARGETING_TYPE_DIGITAL_CONTENT_LABEL_EXCLUSION',
              'assignedTargetingOptions':
                  create_digital_content_assigned_targetingoptions
          },
      ]
  }
# Edit the line item targeting.
bulk_request = display_video_service.advertisers().lineItems(
).bulkEditLineItemAssignedTargetingOptions(
advertiserId=ADVERTISER_ID,
lineItemId=lineitem_id,
body=bulk_edit_line_item_request
)
bulk_response = bulk_request.execute()
# Check if response is empty.
# If not, iterate over and display new assigned targeting options.
if not bulk_response:
print('Bulk edit request created no new AssignedTargetingOptions')
else:
for assigned_targeting_option in bulk_response[
'createdAssignedTargetingOptions']:
print(f"Targeting Option {assigned_targeting_option['name']} created.")
print('Lineitem targeting function created')
Explanation: 2.3 Bulk targeting
Bulk updates using templated targeting controls
End of explanation
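The same bulkEditLineItemAssignedTargetingOptions method also accepts deleteRequests, which is useful when a template needs to replace options assigned earlier. A rough sketch of the body shape (the option id shown is illustrative only):
# Example body removing a previously assigned browser option before re-applying a template
bulk_delete_request = {
    'deleteRequests': [{
        'targetingType': 'TARGETING_TYPE_BROWSER',
        'assignedTargetingOptionIds': ['500072']  # illustrative id only
    }]
}
# This body would be passed to bulkEditLineItemAssignedTargetingOptions,
# alongside or in place of the createRequests used in set_default_li_targeting.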
# Return list of Lineitems with active status
active_lineitems = get_active_lineitems(ADVERTISER_ID, CAMPAIGN_ID)
# Update bulk targeting
for li in active_lineitems:
set_default_li_targeting(li)
Explanation: Retrieve list of active LineItems, and Apply bulk targeting
End of explanation
def optimise_lineitem(lineitem_id, action):
Optimises lineitem according to given parameter.
lineitem_object = display_video_service.advertisers().lineItems().get(
advertiserId=ADVERTISER_ID,
lineItemId=lineitem_id).execute()
if lineitem_object['entityStatus'] == 'ENTITY_STATUS_ACTIVE':
if action == 'pause':
patch = {
'entityStatus': 'ENTITY_STATUS_PAUSED',
}
lineitem_patched = display_video_service.advertisers().lineItems().patch(
advertiserId=ADVERTISER_ID,
lineItemId=lineitem_id,
updateMask='entityStatus',
body=patch).execute()
print(f"LineItemID {lineitem_patched['name']} was paused")
elif action == 'optimise':
patch = {'bidStrategy': {'fixedBid': {'bidAmountMicros': '500000'},}}
lineitem_patched = display_video_service.advertisers().lineItems().patch(
advertiserId=ADVERTISER_ID,
lineItemId=lineitem_id,
updateMask='bidStrategy',
body=patch).execute()
print(f"{lineitem_patched['name']} was optimised")
else:
print("Not a valid action, must be either 'pause' or 'optimise'")
else:
print(
f"{lineitem_object['name']} already paused/archived - no action taken")
print('Optimisation function created')
Explanation: 2.4 Optimisation (external trigger)
The following optimisations will be completed on your campaign, created earlier.
Create functions to 'deactivate' or 'optimise' Lineitems
End of explanation
out_of_stock_list = []
products = csv.DictReader(io.StringIO(contents))
# Iterate through each row, checking for products where availability = 0
for row in products:
if row['availability'] == '0':
out_of_stock_list.append(row['sku'])
# This should generate a list of 9 SKUs that are no-longer in stock
print(
f'Found {len(out_of_stock_list)} out-of-stock products {out_of_stock_list}')
Explanation: Create list of out-of-stock products
End of explanation
# Build a {displayName: lineItemId} map of active Lineitems so we can match SKUs
list_lineitems = display_video_service.advertisers().lineItems().list(
    advertiserId=ADVERTISER_ID,
    filter=f'entityStatus="ENTITY_STATUS_ACTIVE" AND campaignId="{CAMPAIGN_ID}"',
    fields='lineItems(lineItemId,displayName)').execute()
active_lineitems = {li['displayName']: li['lineItemId']
                    for li in list_lineitems.get('lineItems', [])}

# Iterate through out-of-stock list. If sku is found in lineitem's name, pause it.
for product in out_of_stock_list:
  for name, lineitem_id in active_lineitems.items():
    if product in name:
      optimise_lineitem(lineitem_id, 'pause')
Explanation: Process optimisation
End of explanation
# Define DV360 report definition (i.e. metrics and filters)
report_definition = {
'params': {
'type': 'TYPE_GENERAL',
'metrics': [
'METRIC_IMPRESSIONS', 'METRIC_CLICKS', 'METRIC_CTR',
'METRIC_REVENUE_ADVERTISER'
],
'groupBys': [
'FILTER_ADVERTISER', 'FILTER_INSERTION_ORDER', 'FILTER_LINE_ITEM',
'FILTER_ADVERTISER_CURRENCY'
],
'filters': [{
'type': 'FILTER_ADVERTISER',
'value': ADVERTISER_ID
}],
},
'metadata': {
'title': 'DV360 Automation API-generated report',
'dataRange': 'LAST_90_DAYS',
'format': 'csv'
},
'schedule': {
'frequency': 'ONE_TIME'
}
}
# Create new query using report definition
operation = dbm_service.queries().createquery(body=report_definition).execute()
pprint.pprint(operation)
# Runs the given Queries.getquery request, retrying with an exponential
# backoff. Returns completed operation. Will raise an exception if the
# operation takes more than five hours to complete.
@retry.Retry(
predicate=retry.if_exception_type(Exception),
initial=5,
maximum=60,
deadline=18000)
def check_get_query_completion(getquery_request):
response = getquery_request.execute()
pprint.pprint(response)
if response['metadata']['running']:
raise Exception('The operation has not completed.')
return response
getquery_request = dbm_service.queries().getquery(queryId=operation['queryId'])
response = check_get_query_completion(getquery_request)
Explanation: 2.5 Optimisation (reporting data)
As your new campaign has no performance data, the following optimisations will be completed on an existing campaign with historical data.
Create new performance report and fetch results
End of explanation
# Capture report URL from response
report_url = response['metadata']['googleCloudStoragePathForLatestReport']
# Use skipfooter to remove report footer from data
report_df = pd.read_csv(report_url, skipfooter=16, engine='python')
report_df.head(10)
Explanation: Load report to Pandas DataFrame
End of explanation
# Define our 'KPIs'
ctr_to_pause = 0.1
ctr_to_optimise = 0.3
imp_threshold = 5000
# Convert IDs to remove decimal point, then string
report_df['Line Item ID'] = report_df['Line Item ID'].apply(int)
lineitems_to_pause = report_df.query('Impressions > @imp_threshold and (Clicks / Impressions)*100 < @ctr_to_pause')
lineitems_to_reducebid = report_df.query('Impressions > @imp_threshold and (Clicks / Impressions)*100 > @ctr_to_pause and (Clicks / Impressions)*100 < @ctr_to_optimise')
# Convert results to Python list
lineitems_to_pause = list(lineitems_to_pause['Line Item ID'])
lineitems_to_reducebid = list(lineitems_to_reducebid['Line Item ID'])
print(f'Found {len(lineitems_to_pause)} LineItems with a CTR'
f'< {ctr_to_pause}% and > {imp_threshold} impressions:'
f'{lineitems_to_pause}')
print(f'Found {len(lineitems_to_reducebid)} LineItems with a CTR'
f' between {ctr_to_pause}%-{ctr_to_optimise}%, and > {imp_threshold}'
f'\n impressions: {lineitems_to_reducebid}')
Explanation: Create two lists of poorly performing LineItems
1. LineItems that should be paused
2. LineItems whose bids should be reduced
End of explanation
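# Illustrative sketch (not part of the original workflow): the same thresholds can
# be expressed with an explicit CTR column and Series.between(), which makes the
# "CTR between ctr_to_pause and ctr_to_optimise" intent easier to read.
# Column names and variables are the ones defined in the cells above.
sketch_df = report_df.copy()
sketch_df['CTR %'] = sketch_df['Clicks'] / sketch_df['Impressions'] * 100
enough_imps = sketch_df['Impressions'] > imp_threshold
sketch_pause = sketch_df.loc[enough_imps & (sketch_df['CTR %'] < ctr_to_pause), 'Line Item ID']
sketch_reduce = sketch_df.loc[enough_imps & sketch_df['CTR %'].between(ctr_to_pause, ctr_to_optimise), 'Line Item ID']
print(f'Sketch check: {len(sketch_pause)} to pause, {len(sketch_reduce)} to reduce bids (between() is inclusive)')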
%%time
if lineitems_to_pause:
for lineitem in lineitems_to_pause:
optimise_lineitem(str(lineitem), 'pause')
if lineitems_to_reducebid:
for lineitem in lineitems_to_reducebid:
optimise_lineitem(str(lineitem), 'optimise')
print('Optimisation completed')
Explanation: Process optimisation
End of explanation
def upload_creative_image_asset(asset_url, click_url):
Creates a new DV360 creative object.
# Fetch asset from cloud storage using requests library
asset = requests.get(asset_url)
# Create upload object from http image url
fh = io.BytesIO(asset.content)
media_body = http.MediaIoBaseUpload(fh, mimetype='image/png',
chunksize=1024*1024, resumable=True)
# Extract filename from url path
filename = str(asset_url.rsplit(sep='/', maxsplit=1)[1])
# Create the request body
body = {'filename': filename}
# Upload the asset
asset_request = display_video_service.advertisers().assets().upload(
advertiserId=ADVERTISER_ID, body=body, media_body=media_body).execute()
# Display the new asset media ID
print(f"Asset was created with media ID {asset_request['asset']['mediaId']}")
display_name = f'{filename}'.split(sep='.')[0].lower() + ' 300x250'
# Create a creative object.
creative_obj = {
'displayName':
f'{display_name}',
'entityStatus':
'ENTITY_STATUS_ACTIVE',
'creativeType':
'CREATIVE_TYPE_STANDARD',
'hostingSource':
'HOSTING_SOURCE_HOSTED',
'dimensions': {
'widthPixels': 300,
'heightPixels': 250
},
'assets': [{
'asset': {
'mediaId': asset_request['asset']['mediaId']
},
'role': 'ASSET_ROLE_MAIN'
}],
'exitEvents': [{
'type': 'EXIT_EVENT_TYPE_DEFAULT',
'url': f'{click_url}',
}]
}
creative_request = display_video_service.advertisers().creatives().create(
advertiserId=ADVERTISER_ID,
body=creative_obj
).execute()
# Display the new creative ID
print(f"Creative was created with ID {creative_request['creativeId']}"
f" and DisplayName '{creative_request['displayName']}'")
pprint.pprint(creative_request)
print('Creative upload function defined')
Explanation: 2.6 Creative upload
Uploading Display creatives from remote storage (http)
The following demonstrates how to upload image assets from remote storage, but it's also possible to upload from local storage.
Reference: https://developers.google.com/display-video/api/guides/creating-creatives/overview
End of explanation
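# Minimal sketch of the local-storage alternative mentioned above (assumption: a
# local copy such as 'pixelbook.png' exists next to this notebook). With the
# google-api-python-client this means MediaFileUpload instead of MediaIoBaseUpload;
# the rest of upload_creative_image_asset would stay unchanged.
import os
local_asset = 'pixelbook.png'  # hypothetical local file name
if os.path.isfile(local_asset):
    local_media_body = http.MediaFileUpload(local_asset, mimetype='image/png',
                                            chunksize=1024*1024, resumable=True)
    print(f'Local media body created for {local_asset}')
else:
    print('No local asset found - skipping the local-upload sketch')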
image_assets = {
'https://github.com/google/dv360-automation/blob/master/docs/images/googlestore/pixelbook.png?raw=true':
'https://store.google.com/product/google_pixelbook',
'https://github.com/google/dv360-automation/blob/master/docs/images/googlestore/googlehome.png?raw=true':
'https://store.google.com/product/google_home_hub',
'https://github.com/google/dv360-automation/blob/master/docs/images/googlestore/googlehomemini.png?raw=true':
'https://store.google.com/product/google_home_mini',
'https://github.com/google/dv360-automation/blob/master/docs/images/googlestore/pixel2.png?raw=true':
'https://store.google.com/product/pixel_2',
'https://github.com/google/dv360-automation/blob/master/docs/images/googlestore/chromecastultra.png?raw=true':
'https://store.google.com/product/chromecast_ultra'
}
for asset, click_url in image_assets.items():
upload_creative_image_asset(asset, click_url)
Explanation: Upload image creatives
Note, all of the following assets are the same dimension (300x250) and type 'CREATIVE_TYPE_STANDARD'.
When uploading assets of multiple sizes, the creatives.create body must reflect this.
End of explanation
#TODO
Explanation: 2.7 Challenge
Challenge: build a new campaign for 'Google Airways' using the flights feed provided here.
Tips
You don't need to rewrite any functions, reuse the existing ones
Don't forget to use print() statements to see progress within a for loop
Your final campaign should look similar to the below:
End of explanation
%%time
# Load flight information from CSV file
googleairways_routes = files.upload()
contents = next(iter(googleairways_routes.values())).decode('utf-8')
routes = list(csv.DictReader(io.StringIO(contents)))
# Create a unique set (de-duped) of cities from the routes provided
unique_cities = set()
for row in routes:
unique_cities.add(row['airport-city'])
print(unique_cities)
# Create Campaign and Patch()
new_campaign = create_campaign('Google Airways')
print(new_campaign)
# Step through each city within our unique set of cities
for city in unique_cities:
# Create Insertion Order and Patch()
io_name = f'Flights | {city}'
create_io = create_insertion_order(new_campaign['campaignId'], io_name)
# Step through each route(row) of the CSV upload
for row in routes:
if city == row['airport-city']:
# Create LineItems and Patch()
li_name = f"Flight {row['flightno']} | {row['depairport-city']} to {row['arrairport-city']}"
create_lis = create_lineitem(create_io['insertionOrderId'], li_name)
print('Process completed')
Explanation: Solution
End of explanation
# Exclude following campaigns in the reset process
protected_campaigns = ['1914007','985747']
def reset_demo_account():
Reset DV360 account to earlier state.
print('Resetting DV360 account...')
# Pause any active campaigns (protected campaigns are skipped)
list_campaigns = display_video_service.advertisers().campaigns().list(
advertiserId=ADVERTISER_ID,
filter='entityStatus="ENTITY_STATUS_ACTIVE"').execute()
results = list_campaigns['campaigns']
print(f'Found {len(results)} active campaigns')
for index, campaign in enumerate(results, start=1):
print(f'Campaign {index} of {len(results)}')
pause_campaign(campaign['campaignId'])
# Reactivate LineItems
list_lineitems = display_video_service.advertisers().lineItems().list(
advertiserId=ADVERTISER_ID,
filter='entityStatus="ENTITY_STATUS_PAUSED" AND campaignId="1914007"'
).execute()
if not list_lineitems:
print('No paused lineitems found')
else:
for index, li in enumerate(list_lineitems['lineItems'], start=1):
print(f"Lineitem {index} of {len(list_lineitems['lineItems'])}")
lineitem_id = li['lineItemId']
activate_lineitem(lineitem_id)
print('Account reset completed')
def delete_campaign(campaign_id):
Updates DV360 campaign object status to deleted.
if campaign_id in protected_campaigns:
print(f'Campaign ID {campaign_id} not deleted (protected campaign)')
else:
try:
display_video_service.advertisers().campaigns().delete(
advertiserId=ADVERTISER_ID, campaignId=campaign_id).execute()
print(f'{campaign_id} successfully deleted')
except Exception:
print('Could not delete campaign')
def archive_campaign(campaign_id):
Updates DV360 campaign object status to archived.
patch = {'entityStatus': 'ENTITY_STATUS_ARCHIVED'}
if campaign_id in protected_campaigns:
print(f'Campaign ID {campaign_id} not archived (protected campaign)')
else:
archive_campaign = display_video_service.advertisers().campaigns().patch(
advertiserId=ADVERTISER_ID,
campaignId=campaign_id,
updateMask='entityStatus',
body=patch).execute()
print(f'Campaign ID {campaign_id} successfully archived')
def pause_campaign(campaign_id):
Updates DV360 campaign object status to paused.
patch = {'entityStatus': 'ENTITY_STATUS_PAUSED'}
if campaign_id in protected_campaigns:
print(f'Campaign ID {campaign_id} not paused (protected campaign)')
else:
display_video_service.advertisers().campaigns().patch(
advertiserId=ADVERTISER_ID,
campaignId=campaign_id,
updateMask='entityStatus',
body=patch).execute()
print(f'Campaign ID {campaign_id} successfully paused')
def activate_lineitem(lineitem_id):
Updates DV360 lineitem object status to active.
patch = {'entityStatus': 'ENTITY_STATUS_ACTIVE'}
display_video_service.advertisers().lineItems().patch(
lineItemId=lineitem_id,
advertiserId=ADVERTISER_ID,
updateMask='entityStatus',
body=patch).execute()
print(f'Lineitem ID {lineitem_id} reactivated')
# @title { display-mode: "form" }
#@markdown Reset DV360 account
# Call main function to intialise reset procedure
reset_demo_account()
Explanation: Link to DV360 UI
Resources
Getting started with SDF in DV360 guide
Structured Data Files (SDF) developer guide
Getting started with the Display & Video 360 API developer guide
Getting started with the DoubleClick Bid Manager API developer guide
How to access Entity Read Files
Quickstart: Setup the Vision API
Please help us improve this workshop by completing the satisfaction survey
Thank you!
Clean up
To clean up all of the DV360 resources used during these exercises, you can run the following script. Warning: this will remove all Campaigns from the DV360 advertiser specified in ADVERTISER_ID, unless they are explicitly defined as a 'protected_campaign'
End of explanation
<END_TASK> |
15,952 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Graphical User Interfaces
Object oriented programming and particularly inheritance is commonly used for creating GUIs. There are a large number of different frameworks supporting building GUIs. The following are particularly relevant
Step1: Although this works, it is visually unappealing. We can improve on this using styles and themes.
Step2: As our applications get more complicated we must give greater thought to the layout. The following example comes from the TkDocs site.
Step4: Matplotlib
For simple programs, displaying data and taking basic input, often a command line application will be much faster to implement than a GUI. The times when I have moved away from the command line it has been to interact with image data and plots. Here, matplotlib often works very well. Either it can be embedded in a larger application or it can be used directly.
There are a number of examples on the matplotlib site.
Here is one stripped down example of one recent GUI I have used. | <ASSISTANT_TASK:>
Python Code:
import tkinter as tk
class Application(tk.Frame):
def __init__(self, master=None):
tk.Frame.__init__(self, master)
self.pack()
self.createWidgets()
def createWidgets(self):
self.hi_there = tk.Button(self)
self.hi_there["text"] = "Hello World\n(click me)"
self.hi_there["command"] = self.say_hi
self.hi_there.pack(side="top")
self.QUIT = tk.Button(self, text="QUIT", fg="red",
command=root.destroy)
self.QUIT.pack(side="bottom")
def say_hi(self):
print("hi there, everyone!")
root = tk.Tk()
app = Application(master=root)
app.mainloop()
Explanation: Graphical User Interfaces
Object-oriented programming, and particularly inheritance, is commonly used for creating GUIs. There are a large number of frameworks that support building GUIs. The following are particularly relevant:
TkInter - This is the official/default GUI framework
guidata - A GUI framework for dataset display and editing
VTK - A GUI framework for data visualization
pyqtgraph - A GUI framework for data visualization, easily installed with conda install pyqtgraph
matplotlib - As well as creating plots matplotlib can support interaction
TkInter
TkInter is widely used with plenty of documentation available but may prove somewhat limited for more data intensive applications.
Documentation from the standard library
Further documentation from python.org
TkDocs
Let's look at a simple example from the documentation
End of explanation
import tkinter as tk
from tkinter import ttk
class Application(ttk.Frame):
def __init__(self, master=None):
super().__init__(master, padding="3 3 12 12")
self.grid(column=0, row=0, )
self.createWidgets()
self.master.title('Test')
def createWidgets(self):
self.hi_there = ttk.Button(self)
self.hi_there["text"] = "Hello World\n(click me)"
self.hi_there["command"] = self.say_hi
self.QUIT = ttk.Button(self, text="QUIT", style='Alert.TButton', command=root.destroy)
for child in self.winfo_children():
child.grid_configure(padx=10, pady=10)
def say_hi(self):
print("hi there, everyone!")
root = tk.Tk()
app = Application(master=root)
s = ttk.Style()
s.configure('TButton', font='helvetica 24')
s.configure('Alert.TButton', foreground='red')
root.mainloop()
Explanation: Although this works, it is visually unappealing. We can improve on this using styles and themes.
End of explanation
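# Quick sketch (standard tkinter/ttk API): list the ttk themes available on this
# platform and switch to one; a theme changes the look of every ttk widget at once.
import tkinter as tk
from tkinter import ttk
_demo_root = tk.Tk()
_demo_style = ttk.Style(_demo_root)
print('Available themes:', _demo_style.theme_names())
print('Current theme:', _demo_style.theme_use())
_demo_style.theme_use('clam')  # 'clam', 'alt', 'default' and 'classic' ship with Tk
_demo_root.destroy()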
from tkinter import *
from tkinter import ttk
def calculate(*args):
try:
value = float(feet.get())
meters.set((0.3048 * value * 10000.0 + 0.5)/10000.0)
except ValueError:
pass
root = Tk()
root.title("Feet to Meters")
mainframe = ttk.Frame(root, padding="3 3 12 12")
mainframe.grid(column=0, row=0, sticky=(N, W, E, S))
mainframe.columnconfigure(0, weight=1)
mainframe.rowconfigure(0, weight=1)
feet = StringVar()
meters = StringVar()
feet_entry = ttk.Entry(mainframe, width=7, textvariable=feet)
feet_entry.grid(column=2, row=1, sticky=(W, E))
ttk.Label(mainframe, textvariable=meters).grid(column=2, row=2, sticky=(W, E))
ttk.Button(mainframe, text="Calculate", command=calculate).grid(column=3, row=3, sticky=W)
ttk.Label(mainframe, text="feet").grid(column=3, row=1, sticky=W)
ttk.Label(mainframe, text="is equivalent to").grid(column=1, row=2, sticky=E)
ttk.Label(mainframe, text="meters").grid(column=3, row=2, sticky=W)
for child in mainframe.winfo_children(): child.grid_configure(padx=5, pady=5)
feet_entry.focus()
root.bind('<Return>', calculate)
root.mainloop()
Explanation: As our applications get more complicated we must give greater thought to the layout. The following example comes from the TkDocs site.
End of explanation
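# Small sketch (standard tkinter API): the two pieces that make a grid layout resize
# nicely are sticky (how a widget fills its cell) and row/column weights (which
# cells absorb extra space when the window grows).
import tkinter as tk
from tkinter import ttk
_grid_demo = tk.Tk()
_grid_demo.columnconfigure(0, weight=1)  # column 0 grows with the window
_grid_demo.rowconfigure(0, weight=1)     # row 0 grows with the window
ttk.Label(_grid_demo, text='stretchy', anchor='center').grid(row=0, column=0, sticky='nsew')
ttk.Button(_grid_demo, text='fixed').grid(row=1, column=0)
_grid_demo.update_idletasks()            # lay out the widgets without entering mainloop()
print('Demo window size:', _grid_demo.winfo_width(), 'x', _grid_demo.winfo_height())
_grid_demo.destroy()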
Do a mouseclick somewhere, move the mouse to some destination, release
the button. This class gives click- and release-events and also draws
a line or a box from the click-point to the actual mouseposition
(within the same axes) until the button is released. Within the
method 'self.ignore()' it is checked whether the button from eventpress
and eventrelease are the same.
from matplotlib.widgets import RectangleSelector
import matplotlib.pyplot as plt
import matplotlib.cbook as cbook
def line_select_callback(eclick, erelease):
'eclick and erelease are the press and release events'
x1, y1 = eclick.xdata, eclick.ydata
x2, y2 = erelease.xdata, erelease.ydata
print ("(%3.2f, %3.2f) --> (%3.2f, %3.2f)" % (x1, y1, x2, y2))
print (" The button you used were: %s %s" % (eclick.button, erelease.button))
def toggle_selector(event):
print (' Key pressed.')
if event.key in ['Q', 'q'] and toggle_selector.RS.active:
print (' RectangleSelector deactivated.')
toggle_selector.RS.set_active(False)
if event.key in ['A', 'a'] and not toggle_selector.RS.active:
print (' RectangleSelector activated.')
toggle_selector.RS.set_active(True)
image_file = cbook.get_sample_data('grace_hopper.png')
image = plt.imread(image_file)
fig, current_ax = plt.subplots()
plt.imshow(image)
toggle_selector.RS = RectangleSelector(current_ax,
line_select_callback,
drawtype='box', useblit=True,
button=[1,3], # don't use middle button
minspanx=5, minspany=5,
spancoords='pixels')
plt.connect('key_press_event', toggle_selector)
plt.show()
Explanation: Matplotlib
For simple programs, displaying data and taking basic input, often a command line application will be much faster to implement than a GUI. The times when I have moved away from the command line it has been to interact with image data and plots. Here, matplotlib often works very well. Either it can be embedded in a larger application or it can be used directly.
There are a number of examples on the matplotlib site.
Here is one stripped down example of one recent GUI I have used.
End of explanation
<END_TASK> |
15,953 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Basics of NUmpy & Pandas
Numpy
Numpy uses array whereas pandas used scaler <br />
Step1: Array are similar to python list , but it all element must be of same data type, and it faster than list
Step2: Lets see some of functionality
Step3: Vector Operation
Step4: vector [+-*/] Scaler
Step5: vector & boolean vector
Step6: num[bl],, what it will return ??
<h4>It return array of values corresponding to which elemnt in bl is True
Step7: find all elemnt greter than 100 from num
Step8: <h5> All element less than 50 ??
Step9: In-place operation in numpay (Diff between += and +)
Step10: <h5>this happen becouse a and b both point to same array and + operation create a new array and then a point to that so b remain unaffected" </h5>
Step11: Pandas Series
<h4> Basics are same as numpy array but pandas series also contain lots of functionality and speciality
Step12: <h6>See All basic results using describe() function | <ASSISTANT_TASK:>
Python Code:
import numpy as np
Explanation: Basics of NumPy & Pandas
Numpy
NumPy provides arrays, whereas pandas provides Series (and DataFrames) built on top of them. <br />
End of explanation
num = np.array([3,4,2,5,7,23,56,23,7,23,89,43,676,43])
num
Explanation: Arrays are similar to Python lists, but all elements must be of the same data type, and they are faster than lists
End of explanation
print "Mean :",num.mean()
print "sum :",num.sum()
print "max :",num.max()
print "std :",num.std()
#slicing
num[:5]
#find index of any element let say max
print "index of max :",num.argmax()
print "data Type of array :",num.dtype
Explanation: Let's look at some of the built-in functionality
End of explanation
a=np.array([5,6,15])
b=np.array([5,4,-5])
# Addition
print "{} + {} = {}".format(a,b,a+b)
print "{} * {} = {}".format(a,b,a*b)
print "{} / {} = {}".format(a,b,a/b)
# If the sizes don't match, an error occurs
b=np.array([5,4,-5,5])
print "{} + {} = {}".format(a,b,a+b)
Explanation: Vector Operation
End of explanation
print "{} + {} = {}".format(a,3,a+3)
print "{} * {} = {}".format(a,3,a*3)
print "{} / {} = {}".format(a,3,a/3)
Explanation: vector [+ - * /] scalar
End of explanation
num=np.array([5,6,15,65,32,656,23,435,2,45,21])
bl=np.array([False,True,True,False,True,False,True,False,True,True,False])
num[6]
Explanation: vector & boolean vector
End of explanation
num[bl]
Explanation: num[bl], what will it return?
<h4>It returns an array of the values of num at the positions where bl is True
End of explanation
num[num>100]
Explanation: Find all elements greater than 100 in num
End of explanation
num[num<50]
Explanation: <h5> All elements less than 50?
End of explanation
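# Extra sketch: boolean conditions are combined element-wise with & (and), | (or)
# and ~ (not); each comparison needs its own parentheses.
print "between 20 and 100:", num[(num > 20) & (num < 100)]
print "below 10 or above 400:", num[(num < 10) | (num > 400)]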
a=np.array([5,6,15])
b=a
a += 2
print b
print "this happen becouse a and b both point to same array and += is In-place operation so it maintain that"
a=np.array([5,6,15])
b=a
a = a + 2
print b
Explanation: In-place operations in NumPy (the difference between += and +)
End of explanation
a=np.array([5,6,15])
b=a[:3]
b[0]=1000
print a,"Reason is similar as +="
Explanation: <h5>This happens because a and b initially point to the same array; the + operation creates a new array and a is re-pointed to it, so b remains unaffected.</h5>
End of explanation
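# Extra sketch: to get an independent array instead of a view, take an explicit
# copy; modifying the copy leaves the original untouched.
a = np.array([5, 6, 15])
c = a[:3].copy()
c[0] = 1000
print "a is unchanged:", a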
import pandas as pd
num = pd.Series([3,4,2,5,7,23,56,23,7,23,89,43,676,43])
num
Explanation: Pandas Series
<h4> The basics are the same as a NumPy array, but a pandas Series also provides a lot of extra functionality
End of explanation
num.describe()
Explanation: <h6>See all the basic summary statistics using the describe() function
End of explanation
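# Extra sketch: unlike a plain NumPy array, a Series carries an index, so values
# can be looked up by label as well as by position.
prices = pd.Series([3, 4, 2], index=['apple', 'banana', 'cherry'])
print "price of banana:", prices['banana']
print "most expensive item:", prices.idxmax()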
<END_TASK> |
15,954 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
NoSQL (MongoDB) (Replica Set and Sharding Example)
This notebook shows how to create a cluster (simulated with docker-compose) where two replica sets are maintained and a collection (posts) is sharded across those two replica sets. At the end it shows how to scale out and add more hosts to the replica sets.
Step1: We will use the pymongo library for Python. We load it below.
Step2: The connection is opened with MongoClient on the host described in the docker-compose.yml file (mongo).
Step3: Import the CSV files. For now we create a separate collection for each one. Later we will look at how to optimise access using aggregation.
Step4: We create the index that the shard key will be based on. By default it is of type ASCENDING.
Step5: We add both hosts from the two replica sets to the shard cluster and enable sharding on the posts collection.
Step6: | <ASSISTANT_TASK:>
Python Code:
!pip install --upgrade pymongo
from pprint import pprint as pp
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib
%matplotlib inline
matplotlib.style.use('ggplot')
%%bash
sudo apt-get update
sudo apt-get install -y mongodb-clients
Explanation: NoSQL (MongoDB) (Replica Set and Sharding Example)
This notebook shows how to create a cluster (simulated with docker-compose) where two replica sets are maintained and a collection (posts) is sharded across those two replica sets. At the end it shows how to scale out and add more hosts to the replica sets.
End of explanation
%%bash
cat << END | mongo --host mongo_mongocfg1_1
rs.initiate(); // Initiate the replica set of the config servers
sleep(1000);
rs.status()
END
%%bash
cat <<END | mongo --host mongo_mongors1_1
rs.initiate(); // Initiate the RS
sleep(1000);
cfg = rs.conf();
cfg.members[0].host = "mongo_mongors1_1"; // Change the host because docker uses the container hostname
rs.reconfig(cfg); // Reconfigure
//rs.add(host2/3) // Add any new hosts added to the RS
rs.status()
END
%%bash
cat <<END | mongo --host mongo_mongors2_1
rs.initiate();
sleep(1000);
cfg = rs.conf();
cfg.members[0].host = "mongo_mongors2_1";
rs.reconfig(cfg);
//rs.add(host2/3)
rs.status()
END
import pymongo
from pymongo import MongoClient
Explanation: We will use the pymongo library for Python. We load it below.
End of explanation
client = MongoClient("mongo_mongors1_1",27017)
client
client.database_names()
%%bash
file=../Posts.csv
test -e $file || wget http://neuromancer.inf.um.es:8080/es.stackoverflow/`basename ${file}`.gz -O - 2>/dev/null | gunzip > $file
db = client.stackoverflow
db = client['stackoverflow']
db
posts = db.posts
posts
Explanation: The connection is opened with MongoClient on the host described in the docker-compose.yml file (mongo).
End of explanation
%%bash
mongoimport --db stackoverflow --collection posts --drop --type csv \
--headerline --host=mongo_mongors1_1 --file ../Posts.csv
posts.count()
Explanation: Import the CSV files. For now we create a separate collection for each one. Later we will look at how to optimise access using aggregation.
End of explanation
posts.create_index('Id')
Explanation: We create the index that the shard key will be based on. By default it is of type ASCENDING.
End of explanation
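# Extra sketch (pymongo API): the direction can also be given explicitly, and a
# hashed index is a common alternative shard key when a more even chunk
# distribution is wanted. Both create_index lines are optional alternatives.
# posts.create_index([('Id', pymongo.ASCENDING)])  # equivalent to the call above
# posts.create_index([('Id', pymongo.HASHED)])     # hashed shard key alternative
print(posts.index_information())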
%%bash
cat <<END | mongo --host mongo_mongos_1
sh.addShard("mongors1/mongo_mongors1_1:27017");
sh.addShard("mongors2/mongo_mongors2_1:27017");
sh.enableSharding("stackoverflow");
sh.shardCollection('stackoverflow.posts', {'Id': 1})
END
%%bash
cat <<END | mongo --host mongo_mongos_1
sh.status()
END
Explanation: We add both hosts from the two replica sets to the shard cluster and enable sharding on the posts collection.
End of explanation
post = posts.find_one()
post
posts.find({"PostTypeId": 2}).explain()
Explanation:
End of explanation
<END_TASK> |
15,955 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Using a Pre-trained PyTorch Model for Inference
In this demo, we will use a pre-trained model to perform inference on a single image.
There are 3 components to this demo
Step1: Model
Step2: Input
Step3: Output
Step4: Human | <ASSISTANT_TASK:>
Python Code:
import torch
import torchvision
import torchvision.transforms as transforms
import timm
from einops import rearrange
from PIL import Image
Explanation: Using a Pre-trained PyTorch Model for Inference
In this demo, we will use a pre-trained model to perform inference on a single image.
There are 3 components to this demo:
1. Input
2. Model
3. Output
We will cover these components in detail below.
Let us first import the required packages.
End of explanation
use_timm = False
# Download and load the pretrained ResNet-18.
if use_timm:
resnet = timm.create_model('resnet18', pretrained=True)
else:
resnet = torchvision.models.resnet18(pretrained=True)
resnet.eval()
Explanation: Model: Loading a pre-trained ResNet18 model
We use a pre-trained ResNet18 model for inference. The model is available from torchvision or from timm.
When we use a model for inference, we need to switch it to eval mode. This is because the model is in train mode by default, and eval mode disables the dropout layers and makes batch-norm layers use their running statistics.
End of explanation
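# Small illustrative sketch: eval() is what switches layers such as Dropout to
# their inference behaviour (identity), while train mode randomly zeroes values.
drop = torch.nn.Dropout(p=0.5)
x = torch.ones(8)
drop.train()
print("train mode:", drop(x))  # roughly half the entries zeroed (and the rest rescaled)
drop.eval()
print("eval mode: ", drop(x))  # input passes through unchanged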
filename = input()
# Load a PIL Image given a file name from the current directory.
img = Image.open(filename)
# Display the loaded image on notebook.
display(img)
# Resize the image to 256x256.
# Then crop the center square of the image.
# Next, convert the image to a PyTorch Tensor.
# Lastly, normalize the image so that it has mean and standard deviation as shown below.
# Reference for image transforms: https://github.com/pytorch/examples/blob/42e5b996718797e45c46a25c55b031e6768f8440/imagenet/main.py#L89-L101
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225])
transform = transforms.Compose([
transforms.Resize(256),
transforms.CenterCrop(224),
transforms.ToTensor(),
normalize,
])
# PIL image undergoes transforms.
img = transform(img)
# A simplified version is to simply transform the image to a tensor
#img = transforms.ToTensor()(img)
# Check img type and shape
print("Type:", img.dtype)
print("Shape:", img.shape)
Explanation: Input: Loading an input image
We can use matplotlib image to load an image into a numpy array.
However, PyTorch transforms expect a PIL image. While we could convert a numpy array to a PIL image, we can also load the image directly as a PIL image.
End of explanation
# We need the tensor to have a batch dimension of 1.
img = rearrange(img, 'c h w -> 1 c h w')
print("New shape:", img.shape)
with torch.no_grad():
pred = resnet(img)
print("Prediction shape:", pred.shape)
pred = torch.argmax(pred, dim=1)
print("Predicted index", pred)
Explanation: Output: Making a prediction
We will now use img tensor as input to the pre-trained resnet18 model.
Before running the model for prediction, there are 2 things that we should do:
Include a batch dimension. In this case, we are using a single image, so we need to add a batch size of 1. We use rearrange for this.
Execute inference within torch.no_grad() context manager. This is because we do not want to track the gradients.
The expected output is a torch.Tensor of shape (1, 1000). resnet18 was pre-trained on ImageNet1k. We can use torch.argmax to get the index of the maximum value.
End of explanation
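# Extra sketch: besides the single argmax, softmax turns the logits into
# probabilities and topk gives the 5 most likely class indices.
with torch.no_grad():
    probs = torch.softmax(resnet(img), dim=1)
top5_prob, top5_idx = torch.topk(probs, k=5, dim=1)
print("Top-5 indices:", top5_idx[0].tolist())
print("Top-5 probabilities:", [round(p, 4) for p in top5_prob[0].tolist()])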
import os
import urllib
filename = "imagenet1000_labels.txt"
url = "https://gist.githubusercontent.com/yrevar/942d3a0ac09ec9e5eb3a/raw/238f720ff059c1f82f368259d1ca4ffa5dd8f9f5/imagenet1000_clsidx_to_labels.txt"
# Download the file if it does not exist
if not os.path.isfile(filename):
urllib.request.urlretrieve(url, filename)
with open(filename) as f:
idx2label = eval(f.read())
print("Predicted label:", idx2label[pred.cpu().numpy()[0]])
Explanation: Human: Convert class index to label
To make sense of the predicted index, we need to convert it to a label. We can use https://gist.github.com/yrevar/942d3a0ac09ec9e5eb3a to get the mapping from index to label.
End of explanation
<END_TASK> |
15,956 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Sweden
Step1: 2. Join data to identify common water samples
The recent data from Salar includes more water samples than are currently in RESA2. In addition, it looks as though the sample depths in RESA2 might be incorrect. As a first step, let's see if we can match water samples based on sample date and station codes.
Step2: Next. check how much of the data downloaded by Salar is already in the database.
Step3: So, the recent data downloaded by Salar includes 2412 records for these 25 stations, of which 1910 are already in RESA2. Let's check if the values for Mg are comparable.
Step4: Excellent - the values in the RESA2 database for Mg match the values in the data Salar has recently downloaded.
However, the sample depths do not match
Step5: Next, get the sample IDs associated with these 25 sites and join them to the data above. | <ASSISTANT_TASK:>
Python Code:
# Read data
in_xls = (r'C:\Data\James_Work\Staff\Heleen_d_W\ICP_Waters\TOC_Trends_Analysis_2015'
r'\Swedish_Ca_Data\Missing_Data_25_Swedish_Sites.xlsx')
smhi_df = pd.read_excel(in_xls, sheetname='salar_data')
resa_df = pd.read_excel(in_xls, sheetname='from_resa_10-02-2017')
# Get just the cols of interest
smhi_df = smhi_df[['stn_code', 'stn_name', 'date', 'depth_m', 'ca_mekv/l', 'mg_mekv/l']]
resa_df = resa_df[['stn_code', 'stn_name', 'date', 'depth1', 'depth2', 'ca_uekv/l', 'mg_uekv/l']]
smhi_df.head(10)
resa_df.head(10)
Explanation: Sweden: missing Ca data
Heleen has discovered that Ca data are missing from RESA2 for 25 Swedish sites in the TOC trends analysis - see e-mail received 07/02/2017 at 16.12. The relevant background data is gathered together in this spreadsheet:
C:\Data\James_Work\Staff\Heleen_d_W\ICP_Waters\TOC_Trends_Analysis_2015\Swedish_Ca_Data\Missing_Data_25_Swedish_Sites.xlsx
The 25 sites (with their RESA2 site codes) are listed in the resa2_stns_missing_data sheet. Salar has extracted the data for these sites from the SMHI database and I have copied this to the salar_data sheet and simplified the column headings (see e-mail from Heleen received 09/02/2017 at 10.35 for the original). The sheet from_resa_10-02-2017 shows all the Mg and Ca data from the Swedish sites currently associated with the ICPW_TOCTRENDS_2015_SE project (as of 10/02/2017, before any errors were corrected). This should make it possible to identify which water samples are already in the database.
1. Read data
End of explanation
# Join datasets
df = pd.merge(resa_df, smhi_df, how='inner',
left_on=['stn_code', 'date'],
right_on=['stn_code', 'date'])
df.head()
Explanation: 2. Join data to identify common water samples
The recent data from Salar includes more water samples than are currently in RESA2. In addition, it looks as though the sample depths in RESA2 might be incorrect. As a first step, let's see if we can match water samples based on sample date and station codes.
End of explanation
print len(smhi_df)
print len(df)
Explanation: Next, check how much of the data downloaded by Salar is already in the database.
End of explanation
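# Optional check (plain pandas): an outer merge with indicator=True shows directly
# how many of the SMHI samples have no match in RESA2 ('right_only' rows).
chk = pd.merge(resa_df, smhi_df, how='outer', indicator=True,
               left_on=['stn_code', 'date'], right_on=['stn_code', 'date'])
print chk['_merge'].value_counts()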
# Convert Mg to ueq/l and round to nearest whole number
df['mg_old_ueq/l'] = (df['mg_uekv/l'] + 0.5).astype(int)
df['mg_new_ueq/l'] = (df['mg_mekv/l']*1000 + 0.5).astype(int)
print 'All Mg values match:', (df['mg_old_ueq/l'] == df['mg_new_ueq/l']).all()
df.head()
Explanation: So, the recent data downloaded by Salar includes 2412 records for these 25 stations, of which 1910 are already in RESA2. Let's check if the values for Mg are comparable.
End of explanation
# Connect to db
r2_func_path = r'C:\Data\James_Work\Staff\Heleen_d_W\ICP_Waters\Upload_Template\useful_resa2_code.py'
resa2 = imp.load_source('useful_resa2_code', r2_func_path)
engine, conn = resa2.connect_to_resa2()
Explanation: Excellent - the values in the RESA2 database for Mg match the values in the data Salar has recently downloaded.
However, the sample depths do not match: all the samples in RESA2 are assumed to have come from the surface (depth1=0 and depth2=0), whereas Salar's data indicates they actually come from a variety of depths between 0.5 and 2 m. In the trend analysis so far, we've focused on samples taken within the upper 0.5 m of the water column. How much does this matter? If it is important, we'll need to correct the sample depths for all the Swedish sites, which will involve Salar downloading some more data followed by a bit of additonal work from me.
3. Add Ca data
Heleen has replied to my e-mail to say not to worry about the water depths - just add the new Ca data to the existing water sample IDs (see e-mail received 10/02/2017 at 16.19).
First, connect to the database.
End of explanation
# Get all water samples for these sites
stns = tuple(df['stn_code'].unique())
sql = ('SELECT station_id, sample_date, water_sample_id '
'FROM resa2.water_samples '
'WHERE station_id IN %s' % str(stns))
samp_df = pd.read_sql_query(sql, engine)
# Join
df = pd.merge(df, samp_df, how='left',
left_on=['date', 'stn_code'],
right_on=['sample_date', 'station_id'])
df.head()
# Extract columns of interest
df = df[['water_sample_id', 'ca_mekv/l']]
# Rename columns
df.columns = ['sample_id', 'value']
# Add new columns
df['method_id'] = 10551
df['approved'] = 'YES'
df['entered_by'] = 'JES'
df['entered_date'] = pd.datetime(2017, 2, 10)
# Reorder columns
df = df[['sample_id', 'method_id', 'value', 'approved', 'entered_by', 'entered_date']]
df.head(10)
# Write to database
df.to_sql(name='WATER_CHEMISTRY_VALUES2',
schema='RESA2',
con=engine,
index=False,
if_exists='append')
Explanation: Next, get the sample IDs associated with these 25 sites and join them to the data above.
End of explanation
<END_TASK> |
15,957 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Language Translation
In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.
Get the Data
Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.
Step3: Explore the Data
Play around with view_sentence_range to view different parts of the data.
Step6: Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of each sentence from target_text. This will help the neural network predict when the sentence should end.
You can get the <EOS> word id by doing
Step8: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
Step10: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step12: Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
Step15: Build the Neural Network
You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below
Step18: Process Decoding Input
Implement process_decoding_input using TensorFlow to remove the last word id from each batch in target_data and concat the GO ID to the beginning of each batch.
Step21: Encoding
Implement encoding_layer() to create an Encoder RNN layer using tf.nn.dynamic_rnn().
Step24: Decoding - Training
Create training logits using tf.contrib.seq2seq.simple_decoder_fn_train() and tf.contrib.seq2seq.dynamic_rnn_decoder(). Apply the output_fn to the tf.contrib.seq2seq.dynamic_rnn_decoder() outputs.
Step27: Decoding - Inference
Create inference logits using tf.contrib.seq2seq.simple_decoder_fn_inference() and tf.contrib.seq2seq.dynamic_rnn_decoder().
Step30: Build the Decoding Layer
Implement decoding_layer() to create a Decoder RNN layer.
Create RNN cell for decoding using rnn_size and num_layers.
Create the output function using a lambda to transform its input, logits, to class logits.
Use the your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) function to get the training logits.
Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob) function to get the inference logits.
Note
Step33: Build the Neural Network
Apply the functions you implemented above to
Step34: Neural Network Training
Hyperparameters
Tune the following parameters
Step36: Build the Graph
Build the graph using the neural network you implemented.
Step39: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
Step41: Save Parameters
Save the batch_size and save_path parameters for inference.
Step43: Checkpoint
Step46: Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.
Convert the sentence to lowercase
Convert words into ids using vocab_to_int
Convert words not in the vocabulary, to the <UNK> word id.
Step48: Translate
This will translate translate_sentence from English to French. | <ASSISTANT_TASK:>
Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import problem_unittests as tests
source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
Explanation: Language Translation
In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.
Get the Data
Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.
End of explanation
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))
sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))
print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
Convert source and target text to proper word ids
:param source_text: String that contains all the source text.
:param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: A tuple of lists (source_id_text, target_id_text)
# TODO: Implement Function
source_id_text =[]
target_id_text =[]
for sentences in source_text.split('\n'):
sentence_out = [source_vocab_to_int[word] for word in sentences.split()]
source_id_text.append(sentence_out)
for sentences in target_text.split('\n'):
sentence_out = [target_vocab_to_int[word] for word in sentences.split()]
sentence_out.append(target_vocab_to_int['<EOS>'])
target_id_text.append(sentence_out)
return source_id_text, target_id_text
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_text_to_ids(text_to_ids)
Explanation: Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of each sentence from target_text. This will help the neural network predict when the sentence should end.
You can get the <EOS> word id by doing:
python
target_vocab_to_int['<EOS>']
You can get other word ids using source_vocab_to_int and target_vocab_to_int.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
import helper
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) in [LooseVersion('1.0.0'), LooseVersion('1.0.1')], 'This project requires TensorFlow version 1.0 You are using {}'.format(tf.__version__)
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
Explanation: Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
End of explanation
def model_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate, keep probability)
inputs = tf.placeholder(tf.int32, [None, None], name='input')
targets = tf.placeholder(tf.int32, [None, None], name='target')
learning_rate = tf.placeholder(tf.float32, name='learning_rate')
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
return inputs, targets, learning_rate, keep_prob
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_inputs(model_inputs)
Explanation: Build the Neural Network
You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:
- model_inputs
- process_decoding_input
- encoding_layer
- decoding_layer_train
- decoding_layer_infer
- decoding_layer
- seq2seq_model
Input
Implement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.
Targets placeholder with rank 2.
Learning rate placeholder with rank 0.
Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.
Return the placeholders in the following tuple (Input, Targets, Learning Rate, Keep Probability)
End of explanation
def process_decoding_input(target_data, target_vocab_to_int, batch_size):
Preprocess target data for dencoding
:param target_data: Target Placehoder
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param batch_size: Batch Size
:return: Preprocessed target data
go_batch = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']),
tf.strided_slice(target_data, [0,0], [batch_size, -1], [1,1])],
1)
return go_batch
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_process_decoding_input(process_decoding_input)
Explanation: Process Decoding Input
Implement process_decoding_input using TensorFlow to remove the last word id from each batch in target_data and concat the GO ID to the beginning of each batch.
End of explanation
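# Tiny illustration (plain numpy, hypothetical ids with <GO>=1 and <EOS>=3): one
# target row loses its last id and gains <GO> at the front before decoding.
demo_targets = np.array([[10, 11, 12, 3]])
go_column = np.full((demo_targets.shape[0], 1), 1, dtype=demo_targets.dtype)
print(np.concatenate([go_column, demo_targets[:, :-1]], axis=1))  # [[ 1 10 11 12]]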
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob):
Create encoding layer
:param rnn_inputs: Inputs for the RNN
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param keep_prob: Dropout keep probability
:return: RNN state
# TODO: Implement Function
basic_cell = tf.contrib.rnn.BasicLSTMCell(rnn_size)
multi_cell = tf.contrib.rnn.MultiRNNCell([basic_cell] * num_layers)
dropout = tf.contrib.rnn.DropoutWrapper(multi_cell, keep_prob)
output, state = tf.nn.dynamic_rnn(dropout, rnn_inputs, dtype=tf.float32)
return state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_encoding_layer(encoding_layer)
Explanation: Encoding
Implement encoding_layer() to create an Encoder RNN layer using tf.nn.dynamic_rnn().
End of explanation
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope,
output_fn, keep_prob):
Create a decoding layer for training
:param encoder_state: Encoder State
:param dec_cell: Decoder RNN Cell
:param dec_embed_input: Decoder embedded input
:param sequence_length: Sequence Length
:param decoding_scope: TensorFlow Variable Scope for decoding
:param output_fn: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: Train Logits
# TODO: Implement Function
simple_decoder_fn_train = tf.contrib.seq2seq.simple_decoder_fn_train(encoder_state)
output, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell,
simple_decoder_fn_train,
dec_embed_input,
sequence_length,
scope=decoding_scope)
logits = output_fn(tf.nn.dropout(output, keep_prob))
return logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_train(decoding_layer_train)
Explanation: Decoding - Training
Create training logits using tf.contrib.seq2seq.simple_decoder_fn_train() and tf.contrib.seq2seq.dynamic_rnn_decoder(). Apply the output_fn to the tf.contrib.seq2seq.dynamic_rnn_decoder() outputs.
End of explanation
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id,
maximum_length, vocab_size, decoding_scope, output_fn, keep_prob):
Create a decoding layer for inference
:param encoder_state: Encoder state
:param dec_cell: Decoder RNN Cell
:param dec_embeddings: Decoder embeddings
:param start_of_sequence_id: GO ID
:param end_of_sequence_id: EOS Id
:param maximum_length: The maximum allowed time steps to decode
:param vocab_size: Size of vocabulary
:param decoding_scope: TensorFlow Variable Scope for decoding
:param output_fn: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: Inference Logits
# TODO: Implement Function
simple_decoder_fn_inference = tf.contrib.seq2seq.simple_decoder_fn_inference(output_fn,
encoder_state,
dec_embeddings,
start_of_sequence_id,
end_of_sequence_id,
maximum_length,
vocab_size)
logits, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell, simple_decoder_fn_inference, scope=decoding_scope)
return logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_infer(decoding_layer_infer)
Explanation: Decoding - Inference
Create inference logits using tf.contrib.seq2seq.simple_decoder_fn_inference() and tf.contrib.seq2seq.dynamic_rnn_decoder().
End of explanation
def decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size,
num_layers, target_vocab_to_int, keep_prob):
Create decoding layer
:param dec_embed_input: Decoder embedded input
:param dec_embeddings: Decoder embeddings
:param encoder_state: The encoded state
:param vocab_size: Size of vocabulary
:param sequence_length: Sequence Length
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param keep_prob: Dropout keep probability
:return: Tuple of (Training Logits, Inference Logits)
# TODO: Implement Function
basic_cell = tf.contrib.rnn.BasicLSTMCell(rnn_size)
dropout = tf.contrib.rnn.DropoutWrapper(basic_cell, keep_prob)
multi_cell = tf.contrib.rnn.MultiRNNCell([dropout] * num_layers)
with tf.variable_scope("decoding") as decoding_scope:
output_fn = lambda x: tf.contrib.layers.fully_connected(x,
vocab_size,
None,
scope=decoding_scope)
decoding_logits_train = decoding_layer_train(encoder_state,
multi_cell,
dec_embed_input,
sequence_length,
decoding_scope,
output_fn,
keep_prob)
with tf.variable_scope("decoding", reuse=True) as decoding_scope:
decoding_logits_infer = decoding_layer_infer(encoder_state,
multi_cell,
dec_embeddings,
target_vocab_to_int['<GO>'],
target_vocab_to_int['<EOS>'],
sequence_length,
vocab_size,
decoding_scope,
output_fn,
keep_prob)
return decoding_logits_train, decoding_logits_infer
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer(decoding_layer)
Explanation: Build the Decoding Layer
Implement decoding_layer() to create a Decoder RNN layer.
Create RNN cell for decoding using rnn_size and num_layers.
Create the output function using a lambda to transform its input, logits, to class logits.
Use your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) function to get the training logits.
Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob) function to get the inference logits.
Note: You'll need to use tf.variable_scope to share variables between training and inference.
End of explanation
def seq2seq_model(input_data, target_data, keep_prob, batch_size, sequence_length, source_vocab_size, target_vocab_size,
enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int):
Build the Sequence-to-Sequence part of the neural network
:param input_data: Input placeholder
:param target_data: Target placeholder
:param keep_prob: Dropout keep probability placeholder
:param batch_size: Batch Size
:param sequence_length: Sequence Length
:param source_vocab_size: Source vocabulary size
:param target_vocab_size: Target vocabulary size
:param enc_embedding_size: Encoder embedding size
:param dec_embedding_size: Decoder embedding size
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: Tuple of (Training Logits, Inference Logits)
# TODO: Implement Function
encoder_embed_input = tf.contrib.layers.embed_sequence(input_data,
source_vocab_size,
enc_embedding_size)
encoder_output = encoding_layer(encoder_embed_input,
rnn_size,
num_layers,
keep_prob)
decoder_input = process_decoding_input(target_data,
target_vocab_to_int,
batch_size)
decoder_embed = tf.Variable(tf.random_uniform([target_vocab_size, dec_embedding_size]))
decoder_embed_input = tf.nn.embedding_lookup(decoder_embed, decoder_input)
t_logits, i_logits = decoding_layer(decoder_embed_input,
decoder_embed,
encoder_output,
target_vocab_size,
sequence_length,
rnn_size,
num_layers,
target_vocab_to_int,
keep_prob)
return t_logits, i_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_seq2seq_model(seq2seq_model)
Explanation: Build the Neural Network
Apply the functions you implemented above to:
Apply embedding to the input data for the encoder.
Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob).
Process target data using your process_decoding_input(target_data, target_vocab_to_int, batch_size) function.
Apply embedding to the target data for the decoder.
Decode the encoded input using your decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob).
End of explanation
# Number of Epochs
epochs = 5
# Batch Size
batch_size = 256
# RNN Size
rnn_size = 512
# Number of Layers
num_layers = 2
# Embedding Size
encoding_embedding_size = 128
decoding_embedding_size = 128
# Learning Rate
learning_rate = 0.001
# Dropout Keep Probability
keep_probability = 0.5
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set num_layers to the number of layers.
Set encoding_embedding_size to the size of the embedding for the encoder.
Set decoding_embedding_size to the size of the embedding for the decoder.
Set learning_rate to the learning rate.
Set keep_probability to the Dropout keep probability
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_source_sentence_length = max([len(sentence) for sentence in source_int_text])
train_graph = tf.Graph()
with train_graph.as_default():
input_data, targets, lr, keep_prob = model_inputs()
sequence_length = tf.placeholder_with_default(max_source_sentence_length, None, name='sequence_length')
input_shape = tf.shape(input_data)
train_logits, inference_logits = seq2seq_model(
tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, sequence_length, len(source_vocab_to_int), len(target_vocab_to_int),
encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int)
tf.identity(inference_logits, 'logits')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
train_logits,
targets,
tf.ones([input_shape[0], sequence_length]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import time
def get_accuracy(target, logits):
Calculate accuracy
max_seq = max(target.shape[1], logits.shape[1])
if max_seq - target.shape[1]:
target = np.pad(
target,
[(0,0),(0,max_seq - target.shape[1])],
'constant')
if max_seq - logits.shape[1]:
logits = np.pad(
logits,
[(0,0),(0,max_seq - logits.shape[1]), (0,0)],
'constant')
return np.mean(np.equal(target, np.argmax(logits, 2)))
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = helper.pad_sentence_batch(source_int_text[:batch_size])
valid_target = helper.pad_sentence_batch(target_int_text[:batch_size])
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epochs):
for batch_i, (source_batch, target_batch) in enumerate(
helper.batch_data(train_source, train_target, batch_size)):
start_time = time.time()
_, loss = sess.run(
[train_op, cost],
{input_data: source_batch,
targets: target_batch,
lr: learning_rate,
sequence_length: target_batch.shape[1],
keep_prob: keep_probability})
batch_train_logits = sess.run(
inference_logits,
{input_data: source_batch, keep_prob: 1.0})
batch_valid_logits = sess.run(
inference_logits,
{input_data: valid_source, keep_prob: 1.0})
train_acc = get_accuracy(target_batch, batch_train_logits)
valid_acc = get_accuracy(np.array(valid_target), batch_valid_logits)
end_time = time.time()
print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.3f}, Validation Accuracy: {:>6.3f}, Loss: {:>6.3f}'
.format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_path)
print('Model Trained and Saved')
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params(save_path)
Explanation: Save Parameters
Save the batch_size and save_path parameters for inference.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
Explanation: Checkpoint
End of explanation
def sentence_to_seq(sentence, vocab_to_int):
    """
    Convert a sentence to a sequence of ids
    :param sentence: String
    :param vocab_to_int: Dictionary to go from the words to an id
    :return: List of word ids
    """
# TODO: Implement Function
unknown_word = vocab_to_int['<UNK>']
sentence_lowercase = sentence.lower()
word_ids = [vocab_to_int.get(word, unknown_word) for word in sentence_lowercase.split()]
return word_ids
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_sentence_to_seq(sentence_to_seq)
Explanation: Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.
Convert the sentence to lowercase
Convert words into ids using vocab_to_int
Convert words not in the vocabulary to the <UNK> word id.
End of explanation
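A quick informal check of the <UNK> handling (a sketch; 'xylophone' is assumed to be absent from this small translation vocabulary):
# Informal check of <UNK> handling (sketch; 'xylophone' is assumed out of vocabulary).
print(sentence_to_seq('he saw a xylophone', source_vocab_to_int))
print(source_vocab_to_int['<UNK>'])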
translate_sentence = 'he saw a old yellow truck .'
DON'T MODIFY ANYTHING IN THIS CELL
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_path + '.meta')
loader.restore(sess, load_path)
input_data = loaded_graph.get_tensor_by_name('input:0')
logits = loaded_graph.get_tensor_by_name('logits:0')
keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
translate_logits = sess.run(logits, {input_data: [translate_sentence], keep_prob: 1.0})[0]
print('Input')
print(' Word Ids: {}'.format([i for i in translate_sentence]))
print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))
print('\nPrediction')
print(' Word Ids: {}'.format([i for i in np.argmax(translate_logits, 1)]))
print(' French Words: {}'.format([target_int_to_vocab[i] for i in np.argmax(translate_logits, 1)]))
Explanation: Translate
This will translate translate_sentence from English to French.
End of explanation
<END_TASK> |
15,958 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
ES-DOC CMIP6 Model Properties - Aerosol
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Scheme Scope
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables Form
Is Required
Step9: 1.6. Number Of Tracers
Is Required
Step10: 1.7. Family Approach
Is Required
Step11: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required
Step12: 2.2. Code Version
Is Required
Step13: 2.3. Code Languages
Is Required
Step14: 3. Key Properties --> Timestep Framework
Physical properties of seawater in ocean
3.1. Method
Is Required
Step15: 3.2. Split Operator Advection Timestep
Is Required
Step16: 3.3. Split Operator Physical Timestep
Is Required
Step17: 3.4. Integrated Timestep
Is Required
Step18: 3.5. Integrated Scheme Type
Is Required
Step19: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required
Step20: 4.2. Variables 2D
Is Required
Step21: 4.3. Frequency
Is Required
Step22: 5. Key Properties --> Resolution
Resolution in the aersosol model grid
5.1. Name
Is Required
Step23: 5.2. Canonical Horizontal Resolution
Is Required
Step24: 5.3. Number Of Horizontal Gridpoints
Is Required
Step25: 5.4. Number Of Vertical Levels
Is Required
Step26: 5.5. Is Adaptive Grid
Is Required
Step27: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required
Step28: 6.2. Global Mean Metrics Used
Is Required
Step29: 6.3. Regional Metrics Used
Is Required
Step30: 6.4. Trend Metrics Used
Is Required
Step31: 7. Transport
Aerosol transport
7.1. Overview
Is Required
Step32: 7.2. Scheme
Is Required
Step33: 7.3. Mass Conservation Scheme
Is Required
Step34: 7.4. Convention
Is Required
Step35: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required
Step36: 8.2. Method
Is Required
Step37: 8.3. Sources
Is Required
Step38: 8.4. Prescribed Climatology
Is Required
Step39: 8.5. Prescribed Climatology Emitted Species
Is Required
Step40: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required
Step41: 8.7. Interactive Emitted Species
Is Required
Step42: 8.8. Other Emitted Species
Is Required
Step43: 8.9. Other Method Characteristics
Is Required
Step44: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required
Step45: 9.2. Prescribed Lower Boundary
Is Required
Step46: 9.3. Prescribed Upper Boundary
Is Required
Step47: 9.4. Prescribed Fields Mmr
Is Required
Step48: 9.5. Prescribed Fields Mmr
Is Required
Step49: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required
Step50: 11. Optical Radiative Properties --> Absorption
Absortion properties in aerosol scheme
11.1. Black Carbon
Is Required
Step51: 11.2. Dust
Is Required
Step52: 11.3. Organics
Is Required
Step53: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required
Step54: 12.2. Internal
Is Required
Step55: 12.3. Mixing Rule
Is Required
Step56: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required
Step57: 13.2. Internal Mixture
Is Required
Step58: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required
Step59: 14.2. Shortwave Bands
Is Required
Step60: 14.3. Longwave Bands
Is Required
Step61: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required
Step62: 15.2. Twomey
Is Required
Step63: 15.3. Twomey Minimum Ccn
Is Required
Step64: 15.4. Drizzle
Is Required
Step65: 15.5. Cloud Lifetime
Is Required
Step66: 15.6. Longwave Bands
Is Required
Step67: 16. Model
Aerosol model
16.1. Overview
Is Required
Step68: 16.2. Processes
Is Required
Step69: 16.3. Coupling
Is Required
Step70: 16.4. Gas Phase Precursors
Is Required
Step71: 16.5. Scheme Type
Is Required
Step72: 16.6. Bulk Scheme Species
Is Required | <ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cmcc', 'sandbox-3', 'aerosol')
Explanation: ES-DOC CMIP6 Model Properties - Aerosol
MIP Era: CMIP6
Institute: CMCC
Source ID: SANDBOX-3
Topic: Aerosol
Sub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model.
Properties: 69 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:50
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of aerosol model code
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Prognostic variables in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of tracers in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are aerosol calculations generalized into families of species?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestep Framework
Physical properties of seawater in ocean
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the time evolution of the prognostic variables
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol advection (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol physics (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the aerosol model (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3.5. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required: FALSE Type: STRING Cardinality: 0.1
Three dimensional forcing variables, e.g. U, V, W, T, Q, P, convective mass flux
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Variables 2D
Is Required: FALSE Type: STRING Cardinality: 0.1
Two dimensional forcing variables, e.g. land-sea mask definition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Frequency
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Frequency with which meteorological forcings are applied (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Resolution
Resolution in the aersosol model grid
5.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Transport
Aerosol transport
7.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of transport in atmosperic aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
Explanation: 7.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for aerosol transport modeling
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.3. Mass Conservation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to ensure mass conservation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.4. Convention
Is Required: TRUE Type: ENUM Cardinality: 1.N
Transport by convention
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of emissions in atmosperic aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to define aerosol species (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the aerosol species are taken into account in the emissions scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
Explanation: 8.4. Prescribed Climatology
Is Required: FALSE Type: ENUM Cardinality: 0.1
Specify the climatology type for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed via a climatology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an "other method"
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Other Method Characteristics
Is Required: FALSE Type: STRING Cardinality: 0.1
Characteristics of the "other method" used for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of concentrations in atmosperic aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as mass mixing ratios.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as AOD plus CCNs.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of optical and radiative properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11. Optical Radiative Properties --> Absorption
Absortion properties in aerosol scheme
11.1. Black Carbon
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Dust
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of dust at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.3. Organics
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of organics at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there external mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12.2. Internal
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there internal mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Mixing Rule
Is Required: FALSE Type: STRING Cardinality: 0.1
If there is internal mixing with respect to chemical composition then indicate the mixinrg rule
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact size?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.2. Internal Mixture
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact internal mixture?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Shortwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of shortwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.3. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol-cloud interactions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.2. Twomey
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the Twomey effect included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Twomey Minimum Ccn
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If the Twomey effect is included, then what is the minimum CCN number?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.4. Drizzle
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect drizzle?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.5. Cloud Lifetime
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect cloud lifetime?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.6. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Model
Aerosol model
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmosperic aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
Explanation: 16.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the Aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.3. Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other model components coupled to the Aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.4. Gas Phase Precursors
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of gas phase aerosol precursors.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.5. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.6. Bulk Scheme Species
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of species covered by the bulk scheme.
End of explanation
<END_TASK> |
15,959 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Use Word2Vec in gensim to train a word embedding model using the content from NIPS papers.
Step1: Gensim word2vec
https
Step2: Train a word2vec model
Step3: Create a representation of each paper
The representation is simply a set of embedded words taken from the abstract and the title.
Step4: Load the saved pickle and check
Step5: filter words by DF | <ASSISTANT_TASK:>
Python Code:
%load_ext autoreload
%autoreload 2
%matplotlib inline
#%config InlineBackend.figure_format = 'svg'
#config InlineBackend.figure_format = 'pdf'
from IPython.core.display import HTML
import gensim as gen
import gensim.models.word2vec as w2v
import matplotlib.pyplot as plt
from nltk.tokenize import WhitespaceTokenizer
import numpy as np
import os
import pandas as pd
try:
import cPickle as pickle
except:
import pickle
import re
import scipy.stats as stats
import scipy.sparse as sp
import string
import sys
import csv
# load the pickle containing the document-term matrix,
# put the abstracts in, and dump it to a file.
fyear = 1988
tyear = 2015
dt_fpath = 'DT_%d_%d_wabs.p'%(fyear, tyear)
with open(dt_fpath, 'r') as f:
info = pickle.load(f)
info.keys()
list_abs = info['abstracts']
list_abs[:2]
# make each abstract a list of words
list_list_abs = [ab.split(' ') for ab in list_abs if ab is not None]
print list_list_abs[20]
Explanation: Use Word2Vec in gensim to train a word embedding model using the content from NIPS papers.
End of explanation
def paper_dataframe(fpath):
rows = []
with open(fpath, 'r') as csvfile:
reader = csv.reader(csvfile, delimiter=',', quotechar='"')
# Each read gives ['Id', 'Title', 'EventType', 'PdfName', 'Abstract', 'PaperText']
reader.next()
for row in reader:
rows.append(tuple(row))
data = pd.DataFrame(rows, columns=['Id', 'Title', 'EventType',
'PdfName', 'Abstract', 'PaperText'])
return data
text = ',sdf,.-23\][](s)'
re.sub(r'([^\w])+', ' ', text, flags=re.DOTALL)
def tokenize_simple(text):
# replace spaces with one space
text = re.sub(r'\s+', ' ', text, flags=re.DOTALL)
# remove non-English words
text = re.sub(r'[^\w]+', ' ', text, flags=re.DOTALL)
# naive tokenization
tokens = [w.lower().strip() for w in text.split(' ') if len(w) > 1]
return tokens
dframe = paper_dataframe('Papers1988_2015.csv')
n_docs = dframe.shape[0]
tok_papers = []
tok_abstracts = []
for i in xrange(n_docs):
paper = dframe['PaperText'][i]
paper_tokens = tokenize_simple(paper)
tok_papers.append(paper_tokens)
ab = list_abs[i]
if ab is None:
ab_tokens = []
else:
ab_tokens = tokenize_simple(ab)
tok_abstracts.append(ab_tokens)
Explanation: Gensim word2vec
https://radimrehurek.com/gensim/models/word2vec.html#id6
End of explanation
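Before training, a quick look at what tokenize_simple produces (a simple usage sketch on a made-up string):
# Usage sketch: inspect the output of tokenize_simple on a made-up string.
print(tokenize_simple('Kernel methods, e.g. SVMs & Gaussian processes (GPs)!'))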
# size means the latent dimension
# sentences = an iterable where each item is a list of words
size = 50
window = 5
dest_fname = 'w2v_size%d_win%d.p'%(size, window)
model = w2v.Word2Vec(tok_papers, size=size, window=window, min_count=5, workers=4)
model.save(dest_fname)
model.wv.similarity('neural', 'deep')
model.wv.similarity('neural', 'kernel')
model.wv.doesnt_match('supervised unsupervised neuron reinforcement'.split())
model.wv.doesnt_match('kernel gretton hsic mmd'.split())
model.wv['kernel']
'kernel' in model.wv
Explanation: Train a word2vec model
End of explanation
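Another quick way to eyeball the embedding quality (a sketch; most_similar is part of gensim's KeyedVectors API) is to list the nearest neighbours of a familiar term:
# Sketch: nearest neighbours of a familiar term, as another qualitative check.
print(model.wv.most_similar('kernel', topn=5))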
titles = info['titles']
# each element is the representation of the paper.
# This is a matrix with each row corresponding to the embedding
# of a word in the abstract and the title.
paper_reps = []
for i in xrange(n_docs):
title_tokens = tokenize_simple(titles[i])
rep_words = tok_abstracts[i] + title_tokens
# embed each word in rep_words (if in the vocabulary)
rep = []
for w in rep_words:
# only embed words that are in the vocabulary
if w in model.wv:
embed = model.wv[w]
rep.append(embed)
mat = np.vstack(rep)
paper_reps.append(mat)
len(paper_reps)
# save the pickle with the paper representations
dt_dest = 'DT_%d_%d_wembed.p'%(fyear, tyear)
info['paper_reps'] = paper_reps
with open(dt_dest, 'w') as f:
pickle.dump(info, f)
Explanation: Create a representation of each paper
The representation is simply a set of embedded words taken from the abstract and the title.
End of explanation
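One simple downstream use of these per-paper matrices (not done in this notebook; shown only as a sketch) is to average each paper's word vectors into a single fixed-length vector:
# Sketch: collapse each paper's word-embedding matrix into a single mean vector.
paper_mean_vecs = [mat.mean(axis=0) for mat in paper_reps]
print('%d papers, first mean vector has length %d' % (len(paper_mean_vecs), paper_mean_vecs[0].shape[0]))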
with open('DT_%d_%d_wembed.p'%(fyear, tyear), 'r') as f:
info = pickle.load(f)
info.keys()
DT = info['DT']
abstracts = info['abstracts']
paper_reps = info['paper_reps']
titles = info['titles']
words = info['words']
Explanation: Load the saved pickle and check
End of explanation
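A minimal consistency check on the reloaded objects (a sketch):
# Sketch: basic consistency checks on the reloaded objects.
print('%d documents x %d words in DT' % DT.shape)
print('%d paper representations, %d titles' % (len(paper_reps), len(titles)))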
# document frequency of each word
n_docs = DT.shape[0]
DF = np.array( (DT > 0).sum(0) )[0]
df_lb = 7
df_ub = int(0.15*n_docs)
print('n = #docs: %d'%n_docs)
print('original #words: %d'%len(words))
print('#words with %d <= df: %d'% (df_lb, np.sum(DF>=df_lb) ) )
print('#words with df <= %d: %d'% (df_ub, np.sum(DF<=df_ub) ) )
df_I = np.logical_and(DF>=df_lb, DF<=df_ub)
print('#words with %d <= df <= %d: %d'%
(df_lb, df_ub, np.sum( df_I) ) )
df_words = np.array(words)[df_I]
print df_words.tolist()
# filter out words
fDT = DT[:, df_I]
fwords = np.array(words)[df_I].tolist()
info['DT'] = fDT
info['words'] = fwords
dffiltered_fname = 'DT_%d_%d_wem_df%d_%d.p'%(fyear, tyear, df_lb, df_ub)
with open(dffiltered_fname, 'w') as f:
pickle.dump(info, f)
Explanation: filter words by DF
End of explanation
<END_TASK> |
15,960 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Using ICsound
csoundmagics includes an ICsound class which is adapted from Andrés Cabrera's icsound module. ICsound is bound to the %%csound and %csound magics command.
This notebook is an adaptation of Andrés' icsound test notebook.
Starting the Csound engine
To use ICsound create an ICsound instance
Step1: Creating an ICsound object automatically starts the engine
Step2: You can set the properties of the Csound engine with parameters to the startEngine() function.
Step3: The engine runs in a separate thread, so it doesn't block execution of python.
Step4: Use the %%csound magic command to directly type csound language code in the cell and send it to the engine. The number after the magic command is optional; it references the slot where the engine is running. If omitted, slot#1 is assumed.
Step5: So where did it print?
Step6: By default, messages from Csound are not shown, but they are stored in an internal buffer. You can view them with the printLog() function. If the log is getting too long and confusing, use the clearLog() function.
Function tables
You can create csound f-tables directly from python lists or numpy arrays
Step7: Tables can be plotted in the usual matplotlib way, but ICsound provides a plotTable function which styles the graphs.
Step8: You can get the function table values from the csound instance
Step9: Tables can also be passed by their variable name in Csound
Step10: The following will create 320 tables with 720 points each
Step11: Sending instruments
You can send instruments to a running csound engine with the %%csound magic. Any syntax errors will be displayed inline.
Step12: Channels
Csound channels can be used to send values to Csound. They can affect running instances of instruments by using the invalue/chnget opcodes
Step13: You can also read the channels from Csound. These channels can be set from ICsound or within instruments with the outvalue/chnset opcodes
Step14: Recording the output
You can record the realtime output from csound
Step15: Remote engines
You can also interact with engines through UDP. Note that not all operations are available, notably reading f-tables, but you can send instruments and note events to the remote engine.
Step16: Now send notes and instruments from the client
Step17: And show the log in the server
Step18: Stopping the engine
Step19: If we don't need cs_client anymore, we can delete its slot with the %csound line magic (note the single % sign and the negative slot#). The python instance cs_client can then be deleted
Step20: Audification
Reading Earthquake data through a web API (might take a few minutes)
Step21: Instrument to play back the earthquake data stored in a table
Step22: Listen
Step23: Slower
Step24: Quicker
Step25: Other tests
Another engine | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
%load_ext csoundmagics
Explanation: Using ICsound
csoundmagics includes an ICsound class which is adapted from Andrés Cabrera's icsound module. ICsound is bound to the %%csound and %csound magic commands.
This notebook is an adaptation of Andrés' icsound test notebook.
Starting the Csound engine
To use ICsound create an ICsound instance:
End of explanation
cs = ICsound(port=12894)
Explanation: Creating an ICsound object automatically starts the engine:
End of explanation
help(cs.startEngine)
Explanation: You can set the properties of the Csound engine with parameters to the startEngine() function.
End of explanation
cs.startEngine()
Explanation: The engine runs in a separate thread, so it doesn't block execution of python.
End of explanation
%%csound 1
gkinstr init 1
%%csound
print i(gkinstr)
Explanation: Use the %%csound magic command to directly type csound language code in the cell and send it to the engine. The number after the magic command is optional; it references the slot where the engine is running. If omitted, slot#1 is assumed.
End of explanation
cs.printLog()
Explanation: So where did it print?
End of explanation
cs.fillTable(1, np.array([8, 7, 9, 1, 1, 1]))
cs.fillTable(2, [4, 5, 7, 0, 8, 7, 9, 6])
Explanation: By default, messages from Csound are not shown, but they are stored in an internal buffer. You can view them with the printLog() function. If the log is getting too long and confusing, use the clearLog() function.
Function tables
You can create csound f-tables directly from python lists or numpy arrays:
End of explanation
cs.plotTable(1)
cs.plotTable(2, reuse=True)
plt.grid()
Explanation: Tables can be plotted in the usual matplotlib way, but ICsound provides a plotTable function which styles the graphs.
End of explanation
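For comparison, the same data can be drawn with plain matplotlib, since cs.table() exposes the raw table contents (a small sketch):
# Sketch: plot f-table 1 directly with matplotlib from the raw table data.
plt.plot(cs.table(1), 'o-')
plt.title('f-table 1 via plain matplotlib')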
cs.table(2)
cs.makeTable(2, 1024, 10, 1)
cs.makeTable(3, 1024, -10, 0.5, 1)
cs.plotTable(2)
cs.plotTable(3, reuse=True)
#ylim((-1.1,1.1))
cs.table(2)[100: 105]
Explanation: You can get the function table values from the csound instance:
End of explanation
%%csound 1
giHalfSine ftgen 0, 0, 1024, 9, .5, 1, 0
cs.plotTable('giHalfSine')
Explanation: Tables can also be passed by their variable name in Csound:
End of explanation
randsig = np.random.random((320, 720))
i = 0
for i, row in enumerate(randsig):
cs.fillTable(50 + i, row)
print(i, '..', end=' ')
cs.plotTable(104)
Explanation: The following will create 320 tables with 720 points each:
End of explanation
%%csound 1
instr 1
asig asds
%%csound 1
instr 1
asig oscil 0.5, 440
outs asig, asig
%%csound 1
instr 1
asig oscil 0.5, 440
outs asig, asig
endin
Explanation: Sending instruments
You can send instruments to a running csound engine with the %%csound magic. Any syntax errors will be displayed inline.
End of explanation
cs.setChannel("val", 20)
Explanation: Channels
Csound channels can be used to send values to Csound. They can affect running instances of instruments by using the invalue/chnget opcodes:
End of explanation
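To illustrate the chnget side, a hypothetical sketch (it assumes sendCode also accepts a multi-line instrument definition, as it is used for single statements later in this notebook) of an instrument whose pitch follows the "val" channel:
# Hypothetical sketch: an instrument that reads the "val" channel with chnget,
# so its pitch follows the value set above (220 + 20 = 240 Hz here).
cs.sendCode('''
instr 10
  kval chnget "val"
  asig oscil 0.2, 220 + kval
  outs asig, asig
endin
''')
cs.sendScore('i 10 0 2')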
cs.channel("val")
Explanation: You can also read the channels from Csound. These channels can be set from ICsound or within instruments with the outvalue/chnset opcodes:
End of explanation
cs.startRecord("out.wav")
cs.sendScore("i 1 0 1")
import time
time.sleep(1)
cs.stopRecord()
!aplay out.wav
Explanation: Recording the output
You can record the realtime output from csound:
End of explanation
cs_client = ICsound()
cs_client.startClient()
cs.clearLog()
Explanation: Remote engines
You can also interact with engines through UDP. Note that not all operations are available, notably reading f-tables, but you can send instruments and note events to the remote engine.
End of explanation
cs_client.sendScore("i 1 0 1")
cs_client.sendCode("print i(gkinstr)")
Explanation: Now send notes and instruments from the client:
End of explanation
cs.printLog()
Explanation: And show the log in the server:
End of explanation
cs.stopEngine()
cs
Explanation: Stopping the engine
End of explanation
%csound -2
del cs_client
Explanation: If we don't need cs_client anymore, we can delete its slot with the %csound line magic (note the single % sign and the negative slot#). The python instance cs_client can then be deleted:
End of explanation
prefix = 'http://service.iris.edu/irisws/timeseries/1/query?'
SCNL_parameters = 'net=IU&sta=ANMO&loc=00&cha=BHZ&'
times = 'starttime=2005-01-01T00:00:00&endtime=2005-01-02T00:00:00&'
output = 'output=ascii'
import urllib
f = urllib.request.urlopen(prefix + SCNL_parameters + times + output)
timeseries = f.read()
import ctcsound
data = ctcsound.pstring(timeseries).split('\n')
dates = []
values = []
for line in data[1:-1]:
date, val = line.split()
dates.append(date)
values.append(float(val))
plt.plot(values)
cs.startEngine()
cs.fillTable(1, values)
Explanation: Audification
Reading Earthquake data through a web API (might take a few minutes):
End of explanation
%%csound 1
instr 1
idur = p3
itable = p4
asig poscil 1/8000, 1/p3, p4
outs asig, asig
endin
Explanation: Instrument to play back the earthquake data stored in a table:
End of explanation
cs.sendScore('i 1 0 3 1')
Explanation: Listen:
End of explanation
cs.sendScore('i 1 0 7 1')
Explanation: Slower:
End of explanation
cs.sendScore('i 1 0 1 1')
Explanation: Quicker:
End of explanation
ics = ICsound(bufferSize=64)
ics.listInterfaces()
%%csound 2
instr 1
asig oscil 0.5, 440
outs asig, asig
endin
ics.sendScore("i 1 0 0.5")
%csound -2
del ics
cs.stopEngine()
Explanation: Other tests
Another engine:
End of explanation
<END_TASK> |
15,961 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Udacity MLND Capstone Project
"Determination of students’ interaction patterns with an intelligent tutoring system and study of their correlation with successful learning"
Step 1 (sensitivity check, run 1)
Step1: Do some preprocessing to group the data by 'Anon Stud Id' and extract features for further analysis
Step2: Note to reviewers
Step3: Reading from the scratch instead
Step4: Making backup for stud_data in HDF5 format
Step5: Choosing a student subset for a sensitivity check
(note that this step updates stud_data)
Step6: Clustering
Write a new clustering algorithm that
Step7: Choose the pair of columns with best score
Step8: Visualising the KMeans clusters
Step9: Then, consider adding one more column to further increase the score
Step10: As expected, the pair ['frac_1s_hints', 'max_probl_views'] still gives the best score.
Now, trying with different clusterers.
MeanShift
Step11: GaussianMixture
Step12: AgglomerativeClustering
Step13: Visualising the AgglomerativeClustering clusters
Step14: Further clustering of obtained KMeans groups
Step15: Visualise obtained clusters
Step16: As we see, group 01 contains more students with "gaming" behaviour, so I proceed with group 00
Step17: So, there is a subgroup 001 of 1001 students that do not use many hints. What about the rest (000, 5482 students)?
Step18: Splitting group 0000 (students with large 'num_sess' and 'num_probs')
Step19: As we see, these two groups represent students with "intermediate experience" (00000) and "largest experience" (00001).
During this sensitivity check, I split 8082 students (90% of ASSISTments students) into 6 different groups
Python Code:
# Select test_size and random_state for splitting a subset
test_size=0.1
random_state=0
import pandas as pd
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import time
import gzip
import shutil
import seaborn as sns
from collections import Counter
from sklearn.mixture import GaussianMixture
from sklearn.cluster import KMeans, MeanShift, estimate_bandwidth, AgglomerativeClustering
from sklearn.metrics import silhouette_score #, make_scorer
from sklearn.preprocessing import StandardScaler, MinMaxScaler
from sklearn.neighbors import kneighbors_graph
from sklearn.model_selection import train_test_split
Explanation: Udacity MLND Capstone Project
"Determination of students’ interaction patterns with an intelligent tutoring system and study of their correlation with successful learning"
Step 1 (sensitivity check, run 1)
End of explanation
def hdf_fixed_write_compress(df):
df.to_hdf('data1-step1.hdf','test',mode='w',complib='blosc')
return
def hdf_fixed_read_compress():
df = pd.read_hdf('data.hdf','test')
return df
with gzip.open('data1.hdf.gz', 'rb') as f_in, open('data.hdf', 'wb') as f_out:
shutil.copyfileobj(f_in, f_out)
!ls -lh data.hdf
data = hdf_fixed_read_compress()
data.head()
Explanation: Do some preprocessing to group the data by 'Anon Stud Id' and extract features for further analysis
End of explanation
def prepare_stud_data_new(df):
start_time = time.time()
stud_list = df['Anon Student Id'].unique()
cols=['num_sess', \
'num_days', \
'num_probs', \
'num_atts', \
'num_hints', \
'frac_corr_atts', \
'frac_3s_atts', \
'frac_1s_hints', \
'time_atts', \
'time_hints', \
'max_probl_views', \
'max_atts']
numbers = []
#stud_data = pd.DataFrame(columns=cols)
stud_info_df = pd.DataFrame()
i = 0
for stud_name in stud_list:
stud_info_df = df[df['Anon Student Id'] == stud_name].copy()
# total number of days loading the system
num_days = len(set(stud_info_df['Day']))
# total number of sessions opened
num_sessions = len(set(stud_info_df['Session Id']))
# total number of problems entered
num_problems = len(set(stud_info_df['Problem Name']))
# total number of attempts made by the student
num_attempts = stud_info_df[stud_info_df['Student Response Type'] == 0].shape[0]
# total number of hints made by the student
num_hints = stud_info_df[stud_info_df['Student Response Type'] == 1].shape[0]
# fraction of short attemps (with time <= 3 sec)
if (num_attempts > 0):
frac_3s_atts = stud_info_df[(stud_info_df['Student Response Type'] == 0) & (stud_info_df['Duration (sec)'] <= 3.0)].shape[0] / num_attempts
else:
frac_3s_atts = 0
# fraction of short hints (with time <= 1 sec)
if (num_hints > 0):
frac_1s_hints = stud_info_df[(stud_info_df['Student Response Type'] == 1) & (stud_info_df['Duration (sec)'] <= 1.0)].shape[0] / num_hints
else:
frac_1s_hints = 0
# fraction of correct attempts
if (num_attempts > 0):
fraction_correct_attempts = stud_info_df[(stud_info_df['Student Response Type'] == 0) & (stud_info_df['Outcome'] == 0)].shape[0] / num_attempts
else:
fraction_correct_attempts = 0
# total number of time spent for attempts (in seconds)
total_time_attempts = stud_info_df[stud_info_df['Student Response Type'] == 0]['Duration (sec)'].sum()
# total number of time spent for hints (in seconds)
total_time_hints = stud_info_df[stud_info_df['Student Response Type'] == 1]['Duration (sec)'].sum()
# averaged maximal numbers of 'Problem View'
avg_max_problem_views = stud_info_df[['Problem Name', 'Problem View']].groupby(['Problem Name']).agg(np.max).mean()[0]
# averaged maximal number of attempts ('x')
avg_max_attempts = stud_info_df[['Problem Name', 'x']].groupby(['Problem Name']).agg(np.max).mean()[0]
stud_name = i # assign unique numerical ID to each student
if num_attempts != 0:
            avg_time_att = total_time_attempts / num_attempts
else:
avg_time_att = 0
if num_hints != 0:
avg_time_hint = total_time_hints / num_hints
else:
avg_time_hint = 0
numbers.append([num_sessions, \
num_days, \
num_problems, \
num_attempts, \
num_hints, \
fraction_correct_attempts, \
frac_3s_atts, \
frac_1s_hints, \
total_time_attempts, \
total_time_hints, \
avg_max_problem_views, \
avg_max_attempts])
print("\r\t>>> Progress\t:{:.4%}".format((i + 1)/len(stud_list)), end='')
i += 1
stud_data = pd.DataFrame(data=numbers, columns=cols)
end_time = time.time()
print("\n\t>>> Exec. time\t:{}s".format(end_time-start_time))
return stud_data
Explanation: Note to reviewers: this algorithm is quite slow (~45 minutes), so you may consider processing only a subset of the data (e.g. processing 500,000 rows takes only ~1 minute).
End of explanation
#stud_data = prepare_stud_data_new(data.head(500000).copy())
#stud_data = prepare_stud_data_new(data.copy())
stud_data = pd.read_hdf('stud_data.hdf','test')
Explanation: Reading the previously saved results instead of recomputing from scratch:
End of explanation
#stud_data.to_hdf('stud_data.hdf','test',mode='w',complib='blosc')
stud_data.shape
stud_data.describe()
Explanation: Making a backup of stud_data in HDF5 format:
End of explanation
print(test_size, random_state)
stud_data_1, stud_data_2 = train_test_split(stud_data, test_size=test_size, random_state=random_state)
stud_data_1.shape[0]/stud_data.shape[0]
stud_data = stud_data_1
Explanation: Choosing a student subset for a sensitivity check
(note that this step updates stud_data):
End of explanation
# old name: process_data
def transform_data(selected_columns, data):
'''
Apply log-transform and MinMaxScaler() to the selected data columns which are not fractions (frac_*)
Parameters
==========
selected_columns : list
list of columns to leave in processed data
data : pandas.DataFrame
data to process (note that data should contain all selected_columns)
Returns
=======
log_scaled_data : pandas.DataFrame
log-transformed and scaled data selected by selected_columns
'''
data.reset_index(drop=True, inplace=True)
log_data = data[selected_columns].copy()
skewed = log_data.columns.tolist()
skewed = [item for item in skewed if not item.startswith('frac_')]
log_data[skewed] = log_data[skewed].apply(lambda x: np.log10(x + 1))
scaler = MinMaxScaler().fit(log_data)
log_scaled_data = scaler.transform(log_data)
log_scaled_data = pd.DataFrame(log_scaled_data, columns=log_data.columns)
return log_scaled_data
def replace_group_numbers(best_preds):
'''
Replace group numbers in best_preds with sorting by group size
(so that the largest group is 0, the second largest is 1 etc.)
Parameters
==========
best_preds : numpy array
unsorted array of predictions
Returns
=======
best_preds_sorted : numpy array
sorted array of predictions
'''
pp = pd.DataFrame(best_preds, columns = ["old_group"])
dict_pp = {item[0]: i for i, item in enumerate(Counter(best_preds).most_common())}
pp['new_group'] = pp['old_group'].replace(dict_pp)
best_preds_sorted = np.array(pp['new_group'])
return best_preds_sorted
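# Quick illustrative check of replace_group_numbers (toy labels, not real predictions):
# label 2 occurs most often, so it is mapped to 0; label 1 stays 1; label 0 becomes 2.
example_preds = np.array([2, 2, 2, 0, 1, 1])
print(replace_group_numbers(example_preds))  # expected output: [0 0 0 2 1 1]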
def kmeans(log_scaled_data):
'''
    Apply the KMeans clustering algorithm with 2 <= n_clusters <= 5 to log_scaled_data
(transformed and scaled by transform_data() function)
Parameters
==========
log_scaled_data : pandas.DataFrame
data log-transormed and MinMaxScaler()-ed for KMeans clustering
Returns
=======
best_clusterer : sklearn Model
clustering algorithm with the largest Silhouette Coefficient
best_score : float
the largest value of the Silhouette Coefficient
best_preds_sorted : numpy.array
array with clustering predictions for log_scaled_data
(0 is the largest cluster, 1 is the second largest etc.)
'''
best_score = 0
for n_clusters in range(2,6):
clusterer = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
clusterer.fit(log_scaled_data)
preds = clusterer.predict(log_scaled_data)
# Calculate the mean silhouette coefficient for the number of clusters chosen
score = silhouette_score(log_scaled_data, preds)
if best_score < score:
best_clusterer = clusterer
# Predict the cluster for each data point
best_preds = best_clusterer.predict(log_scaled_data)
best_score = score
best_clusters = n_clusters
best_preds_sorted = replace_group_numbers(best_preds)
return best_clusterer, best_score, best_preds_sorted
Explanation: Clustering
Write a new clustering algorithm that:
- starts from stud_data or its subset (with monotonic index)
- finds the 2-column set with the largest score (using KMeans) and relabels the clusters so that 0 is the largest group, 1 is the second largest, etc.
- returns an index array (with labels 0, 1, ...) that can be used for further analysis
End of explanation
all_columns = ['num_sess', 'num_days', 'num_probs', 'num_atts', 'num_hints', 'frac_corr_atts', \
'frac_3s_atts', 'frac_1s_hints', 'time_atts', 'time_hints', 'max_probl_views', 'max_atts']
def choose_pair_columns_kmeans(all_columns, log_scaled_all_data):
'''
Selects pair of columns in data that produces clusters with the largest score.
In this function, only KMeans clustering algorithm is used
Parameters
==========
all_columns : list
list of columns to look for the pair with the largest score
log_scaled_data : pandas DataFrame
properly scaled DataFrame with all columns
Returns
=======
best_columns : list
pair of data columns with the largest score
best_score : float
the largest value of the score
best_clusterer : sklearn Model
clustering algorithm with the largest score
best_preds : numpy.array
array with clustering predictions for log_scaled_data
(0 is the largest cluster, 1 is the second largest etc.)
'''
best_score = 0
best_columns = []
j = 0
l = len(all_columns)
num_pairs = (l-1)*l/2
for column in all_columns:
selected_columns = [column]
columns_to_add = [a for a in all_columns if (a not in selected_columns)]
for column1 in columns_to_add:
if all_columns.index(column) < all_columns.index(column1):
selected_columns = [column, column1]
print("\r\t>>> Progress\t:{:.4%}".format((j+1)/num_pairs), end='')
j += 1
#log_scaled_data = transform_data(selected_columns, stud_data)
clusterer, score, preds = kmeans(log_scaled_all_data[selected_columns])
if score > best_score:
best_score = score
best_clusterer = clusterer
best_preds = preds
best_columns = selected_columns.copy()
return best_columns, best_score, best_clusterer, best_preds
start_time = time.time()
log_scaled_all_data = transform_data(all_columns, stud_data)
# consider skipping the step below because it takes some time (~5 minutes)
best_columns, best_kmeans_score, best_kmeans_clusterer, best_kmeans_preds = choose_pair_columns_kmeans(all_columns, log_scaled_all_data)
# Instead run it single time (6 seconds only)
#best_columns = ['frac_1s_hints', 'max_probl_views']
#best_kmeans_clusterer, best_kmeans_score, best_kmeans_preds = kmeans(log_scaled_all_data[best_columns])
end_time = time.time()
print("\n\t>>> Exec. time\t:{}s".format(end_time-start_time))
print("\t>>> Best pair of cols:", best_columns)
print("\t>>> Best score:", best_kmeans_score)
print("\t>>> Best clusterer:", best_kmeans_clusterer)
print("\t>>> Best preds:", best_kmeans_preds)
def preds_to_indices(preds): # gives array and returns array of indices with 1s
new_list = []
for i, val in enumerate(preds):
if val == 1:
new_list.append(i)
return np.array(new_list)
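# Note: assuming preds is an array-like of 0/1 labels, this is equivalent to the one-liner
# np.where(np.asarray(preds) == 1)[0]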
Explanation: Choose the pair of columns with best score:
End of explanation
log_scaled_all_data.describe()
best_kmeans_preds_mask = preds_to_indices(best_kmeans_preds)
log_scaled_all_data_kmeans_0 = log_scaled_all_data.copy()[~log_scaled_all_data.index.isin(best_kmeans_preds_mask)]
log_scaled_all_data_kmeans_1 = log_scaled_all_data.copy()[log_scaled_all_data.index.isin(best_kmeans_preds_mask)]
plt.scatter(log_scaled_all_data_kmeans_0['frac_1s_hints'], \
log_scaled_all_data_kmeans_0['max_probl_views'], \
alpha=0.6, s=15, c='lightgreen')
plt.scatter(log_scaled_all_data_kmeans_1['frac_1s_hints'], \
log_scaled_all_data_kmeans_1['max_probl_views'], \
alpha=0.6, s=15, c='grey')
plt.xlim([0.0, 0.6])
plt.ylim([0.0, 0.4])
plt.figtext(x=0.64, y=0.56, s='Group 1', ha='center', size=14, color='black')
plt.figtext(x=0.20, y=0.19, s='Group 0', ha='center', size=14, color='darkgreen')
ax = plt.gca()
ax.set_xlabel('frac_1s_hints', size=14)
ax.set_ylabel('max_probl_views', size=14)
plt.plot((0.14, 0.14), (0.001, 0.399), 'k--', c='blue')
plt.show()
print(log_scaled_all_data_kmeans_0.shape, log_scaled_all_data_kmeans_1.shape)
Explanation: Visualising the KMeans clusters:
End of explanation
def cols_iterate_kmeans(selected_columns, best_score, best_clusterer, best_preds):
all_columns = ['num_sess', 'num_days', 'num_probs', 'num_atts', \
'num_hints', 'frac_corr_atts', 'frac_3s_atts', 'frac_1s_hints', \
'time_atts', 'time_hints', 'max_probl_views', 'max_atts']
columns_to_add = [a for a in all_columns if (a not in selected_columns)]
#print(columns_to_add)
for column in columns_to_add:
print("*"*40)
print("*** Trying to add column", column)
print("*"*40)
selected_columns.append(column)
log_scaled_data = transform_data(selected_columns, stud_data)
clusterer, score, preds = kmeans(log_scaled_data)
if score > best_score:
print("!!! Success !!!")
best_score = score
best_clusterer = clusterer
best_preds = preds
print("!!! New score is", best_score)
print("!!! New best clusterer is", best_clusterer)
print("!!! New best selected_columns are", selected_columns)
columns_to_add.remove(column)
else:
print("!!! Last score is equal or worse then our best one")
print("!!! According to Occam's razor, remove the column", column)
selected_columns.remove(column)
print("!!! Still the best selected columns are", selected_columns)
return selected_columns, best_score, best_clusterer, best_preds
# Just skip this step, it does not give new results:
kmeans_clusterer = best_kmeans_clusterer
kmeans_score = best_kmeans_score
kmeans_preds = best_kmeans_preds
selected_columns = best_columns # ['frac_1s_hints', 'max_probl_views']
new_columns, new_kmeans_score, new_kmeans_clusterer, new_kmeans_preds = cols_iterate_kmeans(selected_columns, kmeans_score, kmeans_clusterer, kmeans_preds)
if new_kmeans_score > kmeans_score:
print("+++ SUCCESS")
selected_columns = new_columns
best_kmeans_score = new_kmeans_score
best_kmeans_clusterer = new_kmeans_clusterer
best_kmeans_preds = new_kmeans_preds
else:
print("--- GIVE UP")
Explanation: Then, consider adding one more column to further increase the score:
End of explanation
def largest_cluster_fraction(preds):
'''
calculates the fraction of students that are in the largest group
Parameters
==========
preds : list
list of predictions
Returns
=======
fraction : float
largest fraction of students
best_i : integer
number of the largest group
'''
fraction = 0
ll = len(preds)
for i in np.unique(preds):
frac = len(preds[preds == i])/ll
if frac > fraction:
fraction = frac
best_i = i
return fraction, best_i
# Rewrite similar to kmeans procedure !!!
def meanshift(log_scaled_data):
'''
Apply MeanShift clustering algorithm to log_scaled_data
(transformed and scaled by transform_data() function)
    Number of clusters is selected according to the estimate_bandwidth procedure
with quantiles in np.linspace(0.01, 0.99, 99)
Parameters
==========
log_scaled_data : pandas.DataFrame
data log-transormed and MinMaxScaler()-ed for KMeans clustering
Returns
=======
best_clusterer : sklearn Model
clustering algorithm with the largest Silhouette Coefficient
best_score : float
the largest value of the Silhouette Coefficient
best_preds_sorted : numpy.array
array with clustering predictions for log_scaled_data
(0 is the largest cluster, 1 is the second largest etc.)
cluster_frac : float
fraction of students inside the largest group
'''
start_time = time.time()
best_score = 0
best_cluster_frac = 0
for alpha in np.linspace(0.01, 0.99, 99):
bandwidth = estimate_bandwidth(log_scaled_data, quantile=alpha, n_samples=None, random_state=0)
clusterer = MeanShift(bandwidth=bandwidth, bin_seeding=True)
clusterer.fit(log_scaled_data)
preds = clusterer.fit_predict(log_scaled_data)
cluster_frac = largest_cluster_fraction(preds)[0]
# Calculate the mean silhouette coefficient for the number of clusters chosen
try:
score = silhouette_score(log_scaled_data, preds)
except ValueError:
score = 0
print(alpha, clusterer.cluster_centers_.shape[0], score, cluster_frac)
# setting cluster_frac > 0.85, the value obtained in KMeans algorithm for ['frac_1s_hints', 'max_probl_views']
if (best_score < score) and (cluster_frac < 0.85):
best_clusterer = clusterer
best_preds = preds
best_score = score
best_clusters = clusterer.cluster_centers_.shape[0]
best_cluster_frac = cluster_frac
print('*'*68)
print("Our best model has", best_clusters, "clusters and sihlouette is", best_score)
end_time = time.time()
print("Running time is {}s".format(end_time-start_time))
print('>'*68)
best_preds_sorted = replace_group_numbers(best_preds)
cluster_frac = best_cluster_frac
return best_clusterer, best_score, best_preds_sorted, cluster_frac
# Running MeanShift is too slow: it takes about 9 min for 1 pair,
# and produces much worse results (largest score = 0.56 for reasonable max_fractions < 0.85)
start_time = time.time()
log_scaled_data = transform_data(best_columns, stud_data)
best_meanshift_clusterer, best_meanshift_score, best_meanshift_preds, _ = meanshift(log_scaled_data)
print(best_meanshift_clusterer, best_meanshift_score, best_meanshift_preds)
end_time = time.time()
print("Running time is {}s".format(end_time-start_time))
Explanation: As expected, the pair ['frac_1s_hints', 'max_probl_views'] still gives the best score.
Now, trying with different clusterers.
MeanShift:
End of explanation
def gaussmix(log_scaled_data): # GaussianMixture
start_time = time.time()
max_score = 0
for n_clusters in range(2,6):
clusterer = GaussianMixture(random_state=0, n_init=50, n_components=n_clusters).fit(log_scaled_data)
preds = clusterer.predict(log_scaled_data)
# Calculate the mean silhouette coefficient for the number of clusters chosen
score = silhouette_score(log_scaled_data, preds)
print("For our model with", clusterer.n_components, "clusters, the sihlouette score is", score)
if max_score < score:
best_clusterer = clusterer
# Predict the cluster for each data point
best_preds = best_clusterer.predict(log_scaled_data)
max_score = score
best_clusters = n_clusters
print('*'*68)
print("Our best model has", best_clusters, "clusters and sihlouette is", max_score)
end_time = time.time()
print("Running time is {}s".format(end_time-start_time))
print('>'*68)
best_preds_sorted = replace_group_numbers(best_preds)
return best_clusterer, max_score, best_preds_sorted
def run_clustering_gaussmix(log_scaled_data):
best_score = 0
print(">>> GaussianMixture:")
clusterer, score, preds = gaussmix(log_scaled_data)
if score > best_score:
best_clusterer = clusterer
best_score = score
best_preds = preds
print("Best clusterer is", best_clusterer)
print("Max score is", best_score)
print("Best preds is", best_preds)
return best_clusterer, best_score, best_preds
# ~0.6 min running time but very small score (~0.39)
start_time = time.time()
log_scaled_data = transform_data(best_columns, stud_data)
gaussmix_best_clusterer, gaussmix_best_score, gaussmix_best_preds = run_clustering_gaussmix(log_scaled_data)
print(gaussmix_best_clusterer, gaussmix_best_score, gaussmix_best_preds)
end_time = time.time()
print("Running time is {}s".format(end_time-start_time))
Explanation: GaussianMixture:
End of explanation
def agglom(log_scaled_data): # AgglomerativeClustering with 'ward' connectivity
start_time = time.time()
max_score = 0
for n_clusters in range(2,3): # use only 2 clusters
connectivity = kneighbors_graph(log_scaled_data, n_neighbors=100, include_self=False)
# make connectivity symmetric
connectivity = 0.5 * (connectivity + connectivity.T)
clusterer = AgglomerativeClustering(n_clusters=n_clusters, \
linkage='ward', \
connectivity=connectivity)
preds = clusterer.fit_predict(log_scaled_data)
# Calculate the mean silhouette coefficient for the number of clusters chosen
score = silhouette_score(log_scaled_data, preds)
print("For our model with", clusterer.n_clusters, "clusters, and the sihlouette score is", score)
if max_score < score:
best_clusterer = clusterer
# Predict the cluster for each data point
best_preds = preds
max_score = score
best_clusters = n_clusters
print('*'*68)
print("Our best model has", best_clusters, "clusters and sihlouette is", max_score)
end_time = time.time()
print("Running time is {}s".format(end_time-start_time))
print('>'*68)
best_preds_sorted = replace_group_numbers(best_preds)
return best_clusterer, max_score, best_preds_sorted
def run_clustering_agglom(log_scaled_data):
best_score = 0
print(">>> AgglomerativeClustering:")
clusterer, score, preds = agglom(log_scaled_data)
if score > best_score:
best_clusterer = clusterer
best_score = score
best_preds = preds
print("Best clusterer is", best_clusterer)
print("Max score is", best_score)
print("Best preds is", best_preds)
return best_clusterer, best_score, best_preds
# Gives results very similar to KMeans but takes ~4 times more running time
start_time = time.time()
log_scaled_data = transform_data(best_columns, stud_data)
best_agglom_clusterer, best_agglom_score, best_agglom_preds = run_clustering_agglom(log_scaled_data)
print(best_agglom_clusterer, best_agglom_score, best_agglom_preds)
end_time = time.time()
print("Running time is {}s".format(end_time-start_time))
Explanation: AgglomerativeClustering:
End of explanation
best_agglom_preds_mask = preds_to_indices(best_agglom_preds)
log_scaled_data_agglom_0 = log_scaled_data.copy()[~log_scaled_data.index.isin(best_agglom_preds_mask)]
log_scaled_data_agglom_1 = log_scaled_data.copy()[log_scaled_data.index.isin(best_agglom_preds_mask)]
plt.scatter(log_scaled_data_agglom_0['frac_1s_hints'], \
log_scaled_data_agglom_0['max_probl_views'], \
alpha=0.6, s=15, c='lightgreen')
plt.scatter(log_scaled_data_agglom_1['frac_1s_hints'], \
log_scaled_data_agglom_1['max_probl_views'], \
alpha=0.6, s=15, c='grey')
plt.xlim([0.0, 0.6])
plt.ylim([0.0, 0.4])
plt.figtext(x=0.64, y=0.56, s='Group 1', ha='center', size=14, color='black')
plt.figtext(x=0.20, y=0.19, s='Group 0', ha='center', size=14, color='darkgreen')
ax = plt.gca()
ax.set_xlabel('frac_1s_hints', size=14)
ax.set_ylabel('max_probl_views', size=14)
#plt.plot((0.145, 0.145), (0.001, 0.399), 'k--', c='blue')
plt.show()
Explanation: Visualising the AgglomerativeClustering clusters:
End of explanation
best_kmeans_preds_mask = preds_to_indices(best_kmeans_preds)
log_scaled_all_data_kmeans_0 = log_scaled_all_data.copy()[~log_scaled_all_data.index.isin(best_kmeans_preds_mask)]
# In this particular splitting, take drop=False to save the initial index
# (simplifying students recovery for step 2)
log_scaled_all_data_kmeans_0.reset_index(inplace=True, drop=False)
log_scaled_all_data_kmeans_0.index
start_time = time.time()
best_kmeans_columns_0, \
best_kmeans_score_0, \
best_kmeans_clusterer_0, \
best_kmeans_preds_0 = choose_pair_columns_kmeans(all_columns, log_scaled_all_data_kmeans_0)
# best_kmeans_columns_0 = ['frac_3s_atts', 'max_probl_views']
# best_kmeans_clusterer_0, best_kmeans_score_0, best_kmeans_preds_0 = kmeans(log_scaled_all_data_kmeans_0[best_kmeans_columns_0])
end_time = time.time()
print("\n\t>>> Exec. time\t:{}s".format(end_time-start_time))
print("\t>>> Best pair of cols:", best_kmeans_columns_0)
print("\t>>> Best score:", best_kmeans_score_0)
print("\t>>> Best clusterer:", best_kmeans_clusterer_0)
print("\t>>> Best preds:", best_kmeans_preds_0)
print(sum(best_kmeans_preds_0), len(best_kmeans_preds_0), len(best_kmeans_preds_0[best_kmeans_preds_0 == 0]))
log_scaled_all_data_kmeans_0.reset_index(inplace=True, drop=True)
Explanation: Further clustering of obtained KMeans groups:
I start from group 0 that contains 6934 students:
End of explanation
best_kmeans_preds_mask_0 = preds_to_indices(best_kmeans_preds_0)
log_scaled_all_data_kmeans_00 = log_scaled_all_data_kmeans_0.copy()[~log_scaled_all_data_kmeans_0.index.isin(best_kmeans_preds_mask_0)]
log_scaled_all_data_kmeans_01 = log_scaled_all_data_kmeans_0.copy()[log_scaled_all_data_kmeans_0.index.isin(best_kmeans_preds_mask_0)]
plt.scatter(log_scaled_all_data_kmeans_00[best_kmeans_columns_0[0]], \
log_scaled_all_data_kmeans_00[best_kmeans_columns_0[1]], \
alpha=0.6, s=15, c='lightgreen')
plt.scatter(log_scaled_all_data_kmeans_01[best_kmeans_columns_0[0]], \
log_scaled_all_data_kmeans_01[best_kmeans_columns_0[1]], \
alpha=0.6, s=15, c='grey')
# plt.xlim([0.0, 0.6])
# plt.ylim([0.0, 0.4])
# plt.figtext(x=0.64, y=0.56, s='Group 01', ha='center', size=14, color='black')
# plt.figtext(x=0.20, y=0.69, s='Group 00', ha='center', size=14, color='darkgreen')
ax = plt.gca()
ax.set_xlabel(best_kmeans_columns_0[0], size=14)
ax.set_ylabel(best_kmeans_columns_0[1], size=14)
#plt.plot((0.13, 0.13), (0.001, 0.499), 'k--', c='blue')
plt.show()
Explanation: Visualise obtained clusters:
End of explanation
len(best_kmeans_preds_0)
#best_kmeans_preds_mask_0 = preds_to_indices(best_kmeans_preds_0) # already implemented during group0 visualisation
log_scaled_all_data_kmeans_00 = log_scaled_all_data_kmeans_0.copy()[~log_scaled_all_data_kmeans_0.index.isin(best_kmeans_preds_mask_0)]
log_scaled_all_data_kmeans_00.reset_index(inplace=True, drop=True)
log_scaled_all_data_kmeans_00.index
start_time = time.time()
best_kmeans_columns_00, \
best_kmeans_score_00, \
best_kmeans_clusterer_00, \
best_kmeans_preds_00 = choose_pair_columns_kmeans(all_columns, log_scaled_all_data_kmeans_00)
# best_kmeans_columns_00 = ['frac_3s_atts', 'time_hints']
# best_kmeans_clusterer_00, \
# best_kmeans_score_00, \
# best_kmeans_preds_00 = kmeans(log_scaled_all_data_kmeans_00[best_kmeans_columns_00])
end_time = time.time()
print("\n\t>>> Exec. time\t:{}s".format(end_time-start_time))
print("\t>>> Best pair of cols:", best_kmeans_columns_00)
print("\t>>> Best score:", best_kmeans_score_00)
print("\t>>> Best clusterer:", best_kmeans_clusterer_00)
print("\t>>> Best preds:", best_kmeans_preds_00)
print(sum(best_kmeans_preds_00), len(best_kmeans_preds_00), len(best_kmeans_preds_00[best_kmeans_preds_00 == 0]))
best_kmeans_preds_mask_00 = preds_to_indices(best_kmeans_preds_00)
log_scaled_all_data_kmeans_000 = log_scaled_all_data_kmeans_00.copy()[~log_scaled_all_data_kmeans_00.index.isin(best_kmeans_preds_mask_00)]
log_scaled_all_data_kmeans_001 = log_scaled_all_data_kmeans_00.copy()[log_scaled_all_data_kmeans_00.index.isin(best_kmeans_preds_mask_00)]
plt.scatter(log_scaled_all_data_kmeans_000[best_kmeans_columns_00[0]], \
log_scaled_all_data_kmeans_000[best_kmeans_columns_00[1]], \
alpha=0.6, s=15, c='lightgreen')
plt.scatter(log_scaled_all_data_kmeans_001[best_kmeans_columns_00[0]], \
log_scaled_all_data_kmeans_001[best_kmeans_columns_00[1]], \
alpha=0.6, s=15, c='grey')
# plt.xlim([0.0, 0.6])
# plt.ylim([0.0, 0.4])
# plt.figtext(x=0.64, y=0.56, s='Group 01', ha='center', size=14, color='black')
# plt.figtext(x=0.20, y=0.69, s='Group 00', ha='center', size=14, color='darkgreen')
ax = plt.gca()
ax.set_xlabel(best_kmeans_columns_00[0], size=14)
ax.set_ylabel(best_kmeans_columns_00[1], size=14)
#plt.plot((0.13, 0.13), (0.001, 0.499), 'k--', c='blue')
plt.show()
Explanation: As we see, group 01 contains more students with "gaming" behaviour, so I proceed with group 00:
End of explanation
log_scaled_all_data_kmeans_000 = log_scaled_all_data_kmeans_00.copy()[~log_scaled_all_data_kmeans_00.index.isin(best_kmeans_preds_mask_00)]
log_scaled_all_data_kmeans_000.reset_index(inplace=True, drop=True)
log_scaled_all_data_kmeans_000.index
start_time = time.time()
best_kmeans_columns_000, \
best_kmeans_score_000, \
best_kmeans_clusterer_000, \
best_kmeans_preds_000 = choose_pair_columns_kmeans(all_columns, log_scaled_all_data_kmeans_000)
# best_kmeans_columns_000 = ['num_sess', 'num_probs']
# best_kmeans_clusterer_000, \
# best_kmeans_score_000, \
# best_kmeans_preds_000 = kmeans(log_scaled_all_data_kmeans_000[best_kmeans_columns_000])
end_time = time.time()
print("\n\t>>> Exec. time\t:{}s".format(end_time-start_time))
print("\t>>> Best pair of cols:", best_kmeans_columns_000)
print("\t>>> Best score:", best_kmeans_score_000)
print("\t>>> Best clusterer:", best_kmeans_clusterer_000)
print("\t>>> Best preds:", best_kmeans_preds_000)
print(sum(best_kmeans_preds_000), len(best_kmeans_preds_000), len(best_kmeans_preds_000[best_kmeans_preds_000 == 0]))
best_kmeans_preds_mask_000 = preds_to_indices(best_kmeans_preds_000)
log_scaled_all_data_kmeans_0000 = log_scaled_all_data_kmeans_000.copy()[~log_scaled_all_data_kmeans_000.index.isin(best_kmeans_preds_mask_000)]
log_scaled_all_data_kmeans_0001 = log_scaled_all_data_kmeans_000.copy()[log_scaled_all_data_kmeans_000.index.isin(best_kmeans_preds_mask_000)]
plt.scatter(log_scaled_all_data_kmeans_0000[best_kmeans_columns_000[0]], \
log_scaled_all_data_kmeans_0000[best_kmeans_columns_000[1]], \
alpha=0.6, s=15, c='lightgreen')
plt.scatter(log_scaled_all_data_kmeans_0001[best_kmeans_columns_000[0]], \
log_scaled_all_data_kmeans_0001[best_kmeans_columns_000[1]], \
alpha=0.6, s=15, c='grey')
# plt.figtext(x=0.64, y=0.56, s='Group 01', ha='center', size=14, color='black')
# plt.figtext(x=0.20, y=0.69, s='Group 00', ha='center', size=14, color='darkgreen')
ax = plt.gca()
ax.set_xlabel(best_kmeans_columns_000[0], size=14)
ax.set_ylabel(best_kmeans_columns_000[1], size=14)
#plt.plot((0.13, 0.13), (0.001, 0.499), 'k--', c='blue')
plt.show()
Explanation: So, there is a subgroup 001 of 1001 students that do not use many hints. What about the rest (000, 5482 students)?
End of explanation
log_scaled_all_data_kmeans_0000 = log_scaled_all_data_kmeans_000.copy()[~log_scaled_all_data_kmeans_000.index.isin(best_kmeans_preds_mask_000)]
log_scaled_all_data_kmeans_0000.reset_index(inplace=True, drop=True)
log_scaled_all_data_kmeans_0000.index
start_time = time.time()
best_kmeans_columns_0000, \
best_kmeans_score_0000, \
best_kmeans_clusterer_0000, \
best_kmeans_preds_0000 = choose_pair_columns_kmeans(all_columns, log_scaled_all_data_kmeans_0000)
# best_kmeans_columns_0000 = ['num_sess', 'num_probs']
# best_kmeans_clusterer_0000, \
# best_kmeans_score_0000, \
# best_kmeans_preds_0000 = kmeans(log_scaled_all_data_kmeans_0000[best_kmeans_columns_0000])
end_time = time.time()
print("\n\t>>> Exec. time\t:{}s".format(end_time-start_time))
print("\t>>> Best pair of cols:", best_kmeans_columns_0000)
print("\t>>> Best score:", best_kmeans_score_0000)
print("\t>>> Best clusterer:", best_kmeans_clusterer_0000)
print("\t>>> Best preds:", best_kmeans_preds_0000)
print(sum(best_kmeans_preds_0000), \
len(best_kmeans_preds_0000), \
len(best_kmeans_preds_0000[best_kmeans_preds_0000 == 0]))
best_kmeans_preds_mask_0000 = preds_to_indices(best_kmeans_preds_0000)
log_scaled_all_data_kmeans_00000 = log_scaled_all_data_kmeans_0000.copy()[~log_scaled_all_data_kmeans_0000.index.isin(best_kmeans_preds_mask_0000)]
log_scaled_all_data_kmeans_00001 = log_scaled_all_data_kmeans_0000.copy()[log_scaled_all_data_kmeans_0000.index.isin(best_kmeans_preds_mask_0000)]
plt.scatter(log_scaled_all_data_kmeans_00000[best_kmeans_columns_0000[0]], \
log_scaled_all_data_kmeans_00000[best_kmeans_columns_0000[1]], \
alpha=0.6, s=15, c='lightgreen')
plt.scatter(log_scaled_all_data_kmeans_00001[best_kmeans_columns_0000[0]], \
log_scaled_all_data_kmeans_00001[best_kmeans_columns_0000[1]], \
alpha=0.6, s=15, c='grey')
# plt.xlim([0.0, 0.6])
# plt.ylim([0.0, 0.4])
# plt.figtext(x=0.64, y=0.56, s='Group 01', ha='center', size=14, color='black')
# plt.figtext(x=0.20, y=0.69, s='Group 00', ha='center', size=14, color='darkgreen')
ax = plt.gca()
ax.set_xlabel(best_kmeans_columns_0000[0], size=14)
ax.set_ylabel(best_kmeans_columns_0000[1], size=14)
#plt.plot((0.13, 0.13), (0.001, 0.499), 'k--', c='blue')
plt.show()
Explanation: Splitting group 0000 (students with large 'num_sess' and 'num_probs')
End of explanation
group1_index = np.array(log_scaled_all_data_kmeans_1.index)
len(group1_index)
group2_index = np.array(log_scaled_all_data_kmeans_01['index'])
len(group2_index)
group3_index = np.array(log_scaled_all_data_kmeans_001['index'])
len(group3_index)
group4_index = np.array(log_scaled_all_data_kmeans_0001['index'])
len(group4_index)
group5_index = np.array(log_scaled_all_data_kmeans_00000['index'])
len(group5_index)
group6_index = np.array(log_scaled_all_data_kmeans_00001['index'])
len(group6_index)
def create_joint_cluster_index():
'''
    Saves the joint group index into cluster_index_run1.csv for further analysis
'''
cluster_index_lst = []
    for i in range(len(stud_data)):
if i in group1_index:
cluster_index_lst.append(1)
elif i in group2_index:
cluster_index_lst.append(2)
elif i in group3_index:
cluster_index_lst.append(3)
elif i in group4_index:
cluster_index_lst.append(4)
elif i in group5_index:
cluster_index_lst.append(5)
elif i in group6_index:
cluster_index_lst.append(6)
print(Counter(cluster_index_lst))
cluster_index = pd.Series(cluster_index_lst, dtype=int)
cluster_index.to_csv('cluster_index_run1.csv')
return
create_joint_cluster_index()
! ls -lh cluster_index_run1.csv
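# Sketch (an assumption, not part of the original pipeline) of how a later step could reload the saved index:
# cluster_index = pd.read_csv('cluster_index_run1.csv', header=None, index_col=0)[1]
# print(cluster_index.value_counts())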
Explanation: As we see, these two groups represent students with "intermediate experience" (00000) and "largest experience" (00001).
During this sensitivity check, I split 8,082 students (90% of the ASSISTments students) into 6 different groups:
- group 1, 1148 students with large 'frac_1s_hints' ("gaming" behaviour);
- group 2, 451 students with small 'frac_1s_hints' and large 'frac_3s_atts' ("gaming" behaviour);
- group 3, 1001 students with small 'time_hints' ("non-gaming" behaviour, small usage of hints);
- group 4, 2151 students with small 'num_sess' and 'num_probs' ("non-gaming" behaviour, large usage of hints, small experience);
- group 5, 1734 students with medium 'num_sess' and 'num_probs' ("non-gaming" behaviour, large usage of hints, medium experience);
- group 6, 1597 students with large 'num_sess' and 'num_probs' ("non-gaming" behaviour, large usage of hints, large experience).
The final result of this step is the joint cluster index that contains numbers 1-6 for each student:
End of explanation
<END_TASK> |
15,962 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Introduction to NLTK
We have seen how to do some basic text processing in Python; now we introduce an open-source framework for natural language processing that can further help us work with human languages
Step1: Tokens
The basic atomic parts of each text are the tokens. A token is the NLP name for a sequence of characters that we want to treat as a group.
We have seen how we can extract tokens by splitting the text at the blank spaces.
NLTK has a function word_tokenize() for it
Step2: 21 tokens extracted, which include words and punctuation.
Note that the tokens are different from what a split by blank spaces would obtain, e.g. "can't" is considered TWO tokens by NLTK
Step3: And we can apply it to an entire book, "The Prince" by Machiavelli that we used last time
Step4: As mentioned above, the NLTK tokeniser works in a more sophisticated way than just splitting by spaces; therefore we got more tokens this time.
Sentences
NLTK has a function to tokenise a text not in words but in sentences.
Step5: As you see, it is not splitting just after each full stop but checks whether it's part of an acronym (U.S.) or a number (0.99).
It also correctly splits sentences after question or exclamation marks but not after commas.
Step6: Most common tokens
What are the 20 most frequently occurring (unique) tokens in the text? What is their frequency?
The NLTK FreqDist class is used to encode “frequency distributions”, which count the number of times that something occurs, for example a token.
Its most_common() method then returns a list of tuples where each tuple is of the form (token, frequency). The list is sorted in descending order of frequency.
Step7: Comma is the most common
Step8: We can also remove any capital letters before tokenising
Step9: Now we removed the punctuation and the capital letters but the most common token is "the", not a meaningful word ...
As we have seen last time, these are so-called stop words that are very common and are normally stripped from a text when doing this kind of analysis.
Meaningful most common tokens
A simple approach could be to filter the tokens that have a length greater than 5 and frequency of more than 150.
Step10: This would work but would also leave out tokens such as I and you, which are actually meaningful.
The better approach - as we have seen earlier - is to remove stopwords using external files containing the stop words.
NLTK has a corpus of stop words in several languages
Step11: Now we excluded words such as the but we can improve further the list by looking at semantically similar words, such as plural and singular versions.
Step12: Stemming
Above, in the list of words we have both prince and princes which are respectively the singular and plural version of the same word (the stem). The same would happen with verb conjugation (love and loving are considered different words but are actually inflections of the same verb).
A stemmer is the tool that reduces such inflectional forms to their stem, base or root form, and NLTK has several of them (each with a different heuristic algorithm).
Step13: And now we apply one of the NLTK stemmer, the Porter stemmer
Step14: As you see, all 5 different words have been reduced to the same stem and would be now the same lexical token.
Step15: Now the word princ is counted 281 times, exactly like the sum of prince and princes.
A note here
Step16: Lemma
Lemmatization usually refers to doing things properly with the use of a vocabulary and morphological analysis of words, normally aiming to remove inflectional endings only and to return the base or dictionary form of a word, which is known as the lemma.
While a stemmer operates on a single word without knowledge of the context, a lemmatiser can take the context into consideration.
NLTK has also a built-in lemmatiser, so let's see it in action
Step17: We tell the lemmatiser that the words are nouns. In this case it maps words such as list (singular noun) and lists (plural noun) to the same lemma, but leaves the other words as they are.
Step18: We get a different result if we say that the words are verbs.
They all have the same lemma; in fact, they could all be different inflections or conjugations of the same verb.
The type of words that can be used are
Step19: It works with different adjectives, it doesn't look only at prefixes and suffixes.
You might wonder why stemmers are used instead of always using lemmatisers
Step20: Yes, the lemma now is prince.
But note that we treated all words in the book as nouns, while a proper approach would be to apply the correct PoS type to each individual word.
Part of speech (PoS)
In traditional grammar, a part of speech (abbreviated form
Step21: The NLTK function pos_tag() will tag each token with the estimated PoS.
NLTK has 13 categories of PoS. You can check the acronym using the NLTK help function
Step22: Which are the most common PoS in The Prince book?
Step24: It's not nouns (NN) but prepositions and subordinating conjunctions (IN).
Extra note | <ASSISTANT_TASK:>
Python Code:
sampleText1 = "The Elephant's 4 legs: THE Pub! You can't believe it or can you, the believer?"
sampleText2 = "Pierre Vinken, 61 years old, will join the board as a nonexecutive director Nov. 29."
Explanation: Introduction to NLTK
We have seen how to do some basic text processing in Python; now we introduce an open-source framework for natural language processing that can further help us work with human languages: NLTK (Natural Language ToolKit).
Tokenise a text
Let's start with a simple text in a Python string:
End of explanation
import nltk
s1Tokens = nltk.word_tokenize(sampleText1)
s1Tokens
len(s1Tokens)
Explanation: Tokens
The basic atomic parts of each text are the tokens. A token is the NLP name for a sequence of characters that we want to treat as a group.
We have seen how we can extract tokens by splitting the text at the blank spaces.
NLTK has a function word_tokenize() for it:
End of explanation
s2Tokens = nltk.word_tokenize(sampleText2)
s2Tokens
Explanation: 21 tokens extracted, which include words and punctuation.
Note that the tokens are different from what a split by blank spaces would obtain, e.g. "can't" is considered TWO tokens by NLTK: "can" and "n't" (= "not"), while a tokeniser that splits text by spaces would consider it a single token: "can't".
Let's see another example:
End of explanation
# If you would like to work with the raw text you can use 'bookRaw'
with open('../datasets/ThePrince.txt', 'r') as f:
bookRaw = f.read()
bookTokens = nltk.word_tokenize(bookRaw)
bookText = nltk.Text(bookTokens) # special format
nBookTokens= len(bookTokens) # or alternatively len(bookText)
print ("*** Analysing book ***")
print ("The book is {} chars long".format (len(bookRaw)))
print ("The book has {} tokens".format (nBookTokens))
Explanation: And we can apply it to an entire book, "The Prince" by Machiavelli that we used last time:
End of explanation
text1 = "This is the first sentence. A liter of milk in the U.S. costs $0.99. Is this the third sentence? Yes, it is!"
sentences = nltk.sent_tokenize(text1)
len(sentences)
sentences
Explanation: As mentioned above, the NLTK tokeniser works in a more sophisticated way than just splitting by spaces; therefore we got more tokens this time.
Sentences
NLTK has a function to tokenise a text not in words but in sentences.
End of explanation
sentences = nltk.sent_tokenize(bookRaw) # extract sentences
nSent = len(sentences)
print ("The book has {} sentences".format (nSent))
print ("and each sentence has in average {} tokens".format (nBookTokens / nSent))
Explanation: As you see, it is not splitting just after each full stop but checks whether it's part of an acronym (U.S.) or a number (0.99).
It also correctly splits sentences after question or exclamation marks but not after commas.
End of explanation
def get_top_words(tokens):
# Calculate frequency distribution
fdist = nltk.FreqDist(tokens)
return fdist.most_common()
topBook = get_top_words(bookTokens)
# Output top 20 words
topBook[:20]
Explanation: Most common tokens
What are the 20 most frequently occurring (unique) tokens in the text? What is their frequency?
The NLTK FreqDist class is used to encode “frequency distributions”, which count the number of times that something occurs, for example a token.
Its most_common() method then returns a list of tuples where each tuple is of the form (token, frequency). The list is sorted in descending order of frequency.
End of explanation
topWords = [(freq, word) for (word,freq) in topBook if word.isalpha() and freq > 400]
topWords
Explanation: Comma is the most common: we need to remove the punctuation.
Most common alphanumeric tokens
We can use isalpha() to check if the token is a word and not punctuation.
End of explanation
def preprocessText(text, lowercase=True):
if lowercase:
tokens = nltk.word_tokenize(text.lower())
else:
tokens = nltk.word_tokenize(text)
return [word for word in tokens if word.isalpha()]
bookWords = preprocessText(bookRaw)
topBook = get_top_words(bookWords)
# Output top 20 words
topBook[:20]
print ("*** Analysing book ***")
print ("The text has now {} words (tokens)".format (len(bookWords)))
Explanation: We can also remove any capital letters before tokenising:
End of explanation
meaningfulWords = [word for (word,freq) in topBook if len(word) > 5 and freq > 80]
sorted(meaningfulWords)
Explanation: Now we removed the punctuation and the capital letters but the most common token is "the", not a meaningful word ...
As we have seen last time, these are so-called stop words that are very common and are normally stripped from a text when doing this kind of analysis.
Meaningful most common tokens
A simple approach could be to filter the tokens that have a length greater than 5 and frequency of more than 150.
End of explanation
from nltk.corpus import stopwords
stopwordsEN = set(stopwords.words('english')) # english language
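# Note: if these corpora/models are not installed yet, they can be fetched once with
# nltk.download('stopwords'), nltk.download('punkt') for the tokenisers,
# nltk.download('wordnet') for the lemmatiser and nltk.download('averaged_perceptron_tagger') for pos_tag().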
betterWords = [w for w in bookWords if w not in stopwordsEN]
topBook = get_top_words(betterWords)
# Output top 20 words
topBook[:20]
Explanation: This would work but would also leave out tokens such as I and you, which are actually meaningful.
The better approach - as we have seen earlier - is to remove stopwords using external files containing the stop words.
NLTK has a corpus of stop words in several languages:
End of explanation
'princes' in betterWords
betterWords.count("prince") + betterWords.count("princes")
Explanation: Now we excluded words such as the but we can improve further the list by looking at semantically similar words, such as plural and singular versions.
End of explanation
input1 = "List listed lists listing listings"
words1 = input1.lower().split(' ')
words1
Explanation: Stemming
Above, in the list of words we have both prince and princes which are respectively the singular and plural version of the same word (the stem). The same would happen with verb conjugation (love and loving are considered different words but are actually inflections of the same verb).
A stemmer is the tool that reduces such inflectional forms to their stem, base or root form, and NLTK has several of them (each with a different heuristic algorithm).
End of explanation
porter = nltk.PorterStemmer()
[porter.stem(t) for t in words1]
Explanation: And now we apply one of the NLTK stemmer, the Porter stemmer:
End of explanation
stemmedWords = [porter.stem(w) for w in betterWords]
topBook = get_top_words(stemmedWords)
topBook[:20] # Output top 20 words
Explanation: As you see, all 5 different words have been reduced to the same stem and would be now the same lexical token.
End of explanation
from nltk.stem.snowball import SnowballStemmer
stemmerIT = SnowballStemmer("italian")
inputIT = "Io ho tre mele gialle, tu hai una mela gialla e due pere verdi"
wordsIT = inputIT.split(' ')
[stemmerIT.stem(w) for w in wordsIT]
Explanation: Now the word princ is counted 281 times, exactly like the sum of prince and princes.
A note here: Stemming usually refers to a crude heuristic process that chops off the ends of words in the hope of getting to the base form correctly most of the time, and often includes the removal of derivational affixes.
Prince and princes become princ.
A different flavour is lemmatisation, which we will see in a moment, but first a note about stemming in languages other than English.
Stemming in other languages
Snowball is an improvement created by Porter: a language for creating stemmers, with rules for many more languages than English.
For example Italian:
End of explanation
from nltk.stem import WordNetLemmatizer
lemmatizer = WordNetLemmatizer()
words1
[lemmatizer.lemmatize(w, 'n') for w in words1] # n = nouns
Explanation: Lemma
Lemmatization usually refers to doing things properly with the use of a vocabulary and morphological analysis of words, normally aiming to remove inflectional endings only and to return the base or dictionary form of a word, which is known as the lemma.
While a stemmer operates on a single word without knowledge of the context, a lemmatiser can take the context into consideration.
NLTK has also a built-in lemmatiser, so let's see it in action:
End of explanation
[lemmatizer.lemmatize(w, 'v') for w in words1] # v = verbs
Explanation: We tell the lemmatiser that the words are nouns. In this case it maps words such as list (singular noun) and lists (plural noun) to the same lemma, but leaves the other words as they are.
End of explanation
words2 = ['good', 'better']
[porter.stem(w) for w in words2]
[lemmatizer.lemmatize(w, 'a') for w in words2]
Explanation: We get a different result if we say that the words are verbs.
They all have the same lemma; in fact, they could all be different inflections or conjugations of the same verb.
The type of words that can be used are:
'n' = noun, 'v'=verb, 'a'=adjective, 'r'=adverb
End of explanation
lemmatisedWords = [lemmatizer.lemmatize(w, 'n') for w in betterWords]
topBook = get_top_words(lemmatisedWords)
topBook[:20] # Output top 20 words
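# A rough, illustrative speed comparison between the Porter stemmer and the WordNet lemmatiser
# on the same word list; exact timings depend on the machine and on whether WordNet is already cached.
import time
start = time.time()
_ = [porter.stem(w) for w in betterWords]
print("Porter stemming took {:.2f}s".format(time.time() - start))
start = time.time()
_ = [lemmatizer.lemmatize(w, 'n') for w in betterWords]
print("WordNet lemmatising took {:.2f}s".format(time.time() - start))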
Explanation: It works with different adjectives; it doesn't look only at prefixes and suffixes.
You might wonder why stemmers are used instead of always using lemmatisers: stemmers are much simpler, smaller and faster, and for many applications good enough.
Now we lemmatise the book:
End of explanation
text1 = "Children shouldn't drink a sugary drink before bed."
tokensT1 = nltk.word_tokenize(text1)
nltk.pos_tag(tokensT1)
Explanation: Yes, the lemma now is prince.
But note that we treated all words in the book as nouns, while a proper approach would be to apply the correct PoS type to each individual word.
Part of speech (PoS)
In traditional grammar, a part of speech (abbreviated form: PoS or POS) is a category of words which have similar grammatical properties.
For example, an adjective (red, big, quiet, ...) describe properties while a verb (throw, walk, have) describe actions or states.
Commonly listed parts of speech are noun, verb, adjective, adverb, pronoun, preposition, conjunction, interjection.
End of explanation
nltk.help.upenn_tagset('RB')
Explanation: The NLTK function pos_tag() will tag each token with the estimated PoS.
NLTK has 13 categories of PoS. You can check the acronym using the NLTK help function:
End of explanation
tokensAndPos = nltk.pos_tag(bookTokens)
posList = [thePOS for (word, thePOS) in tokensAndPos]
fdistPos = nltk.FreqDist(posList)
fdistPos.most_common(5)
nltk.help.upenn_tagset('IN')
Explanation: Which are the most common PoS in The Prince book?
End of explanation
# Parsing sentence structure
text2 = nltk.word_tokenize("Alice loves Bob")
grammar = nltk.CFG.fromstring("""
S -> NP VP
VP -> V NP
NP -> 'Alice' | 'Bob'
V -> 'loves'
""")
parser = nltk.ChartParser(grammar)
trees = parser.parse_all(text2)
for tree in trees:
print(tree)
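# An additional, illustrative grammar where one sentence has two valid parse trees,
# showing the kind of PoS/structure ambiguity discussed in the note below:
groucho_grammar = nltk.CFG.fromstring("""
S -> NP VP
PP -> P NP
NP -> Det N | Det N PP | 'I'
VP -> V NP | VP PP
Det -> 'an' | 'my'
N -> 'elephant' | 'pajamas'
V -> 'shot'
P -> 'in'
""")
sent = ['I', 'shot', 'an', 'elephant', 'in', 'my', 'pajamas']
for tree in nltk.ChartParser(groucho_grammar).parse(sent):
    print(tree)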
Explanation: It's not nouns (NN) but prepositions and subordinating conjunctions (IN).
Extra note: Parsing the grammar structure
Words can be ambiguous and sometimes it is not easy to tell which PoS a word is; for example, in the sentence "visiting aunts can be a nuisance", is visiting a verb or an adjective?
Tagging a PoS depends on the context, which can be ambiguous.
Making sense of a sentence is easier if it follows a well-defined grammatical structure, such as: subject + verb + object.
NLTK allows us to define a formal grammar which can then be used to parse a text. The NLTK ChartParser is a procedure for finding one or more trees (sentences have internal organisation that can be represented using a tree) corresponding to a grammatically well-formed sentence.
End of explanation
<END_TASK> |
15,963 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
SciPy for Economists
Scipy provides many user-friendly and efficient numerical routines, e.g. numerical integration and optimization. The full documentation is available at http
Step1: Let us have a look at the relationship.
Step2: Estimation using Linear Algebra Tools
We can determine the $\hat{\beta} = (X^T X)^{-1}X^T Y$ using basic linear algebra tools from NumPy.
Step3: Estimation using Optimization Tools
Let us now determine $\hat{\beta}$ using Maximum Likelihood (ML) estimation. So, we need to maximize the following log-likelihood function
Step4: Formatting | <ASSISTANT_TASK:>
Python Code:
# standard library
import numpy as np
# Parametrization
num_agents = 1000
num_covars = 3
betas_true = np.array([0.22, 0.30, -0.1]).T
sd_true = 0.01
# Sampling of observables
np.random.seed(123)
X = np.random.rand(num_agents, num_covars)
X[:,0] = 1
# Sampling disturbances
eps = np.random.normal(loc=0.0, scale=sd_true, size=num_agents)
# Create endogenous outcome
idx_true = np.dot(X, betas_true)
Y = idx_true + eps
# Checks
assert (X.dtype == 'float')
assert (Y.dtype == 'float')
assert (np.all(np.isfinite(X)))
assert (np.all(np.isfinite(Y)))
assert (X.shape == (num_agents, num_covars))
assert (Y.shape == (num_agents, ))
assert (np.all(X[:,0] == 1.0))
Explanation: SciPy for Economists
Scipy provides many user-friendly and efficient numerical routines, e.g. numerical integration and optimization. The full documentation is available at http://docs.scipy.org/doc/scipy/reference/.
We will use the provided tools to simulate and estimate an Ordinary Least Squares (OLS) regression: $Y = X\beta + \epsilon$
We will proceed in three steps:
Simulated Sample
Estimate Model using Linear Algebra Tools (NumPy)
Estimate Model using Optimization Tools (SciPy)
Of course, OLS and other statistical models are readily available in the StatsModels Library http://statsmodels.sourceforge.net/.
Simulate Sample
End of explanation
%pylab inline
import matplotlib.pyplot as plt
fig = plt.figure()
ax = fig.add_subplot(111)
ax.set_ylabel(r'$Y$'), ax.set_xlabel(r'$ X\beta $')
ax.plot(idx_true, Y, 'o')
Explanation: Let us have a look at the relationship.
End of explanation
# Let us get the estimates.
beta_hat = np.dot(np.dot(np.linalg.inv(np.dot(X.T,X)), X.T), Y)
sd_hat = np.sqrt(np.var(Y - np.dot(X, beta_hat)))
# Let us have a look now.
print('Results for beta', beta_hat, ' Results for sd', sd_hat)
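# A numerically more stable alternative (sketch): solve the normal equations directly
# instead of forming the explicit inverse.
beta_hat_alt = np.linalg.solve(np.dot(X.T, X), np.dot(X.T, Y))
print('Results via np.linalg.solve', beta_hat_alt)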
Explanation: Estimation using Linear Algebra Tools
We can determine the $\hat{\beta} = (X^T X)^{-1}X^T Y$ using basic linear algebra tools from NumPy.
End of explanation
# standard library
from scipy.optimize import minimize
from scipy.stats import norm
# Auxiliary functions.
def sample_likelihood(paras, X, Y):
''' Construct sample likelihood.
'''
# Antibugging.
assert (isinstance(paras, np.ndarray))
assert (paras.dtype == 'float')
    assert (X.ndim == 2) and (Y.ndim == 1)
# Auxiliary objects.
num_agents = Y.shape[0]
# Summing over the sample.
contribs = 0.0
for i in range(num_agents):
contrib = individual_likelihood(paras, X[i,:], Y[i])
contribs += contrib
# Modifications.
contribs = np.mean(contribs)
# Finishing.
return contribs
def individual_likelihood(paras, x, y):
''' This function determines the an individual's contribution to the sample likelihood.
'''
# Antibugging.
assert (isinstance(paras, np.ndarray))
assert (paras.dtype == 'float')
    assert (x.ndim == 1) and (np.ndim(y) == 0)
# Distribute parameters.
betas, sd = paras[:-1], paras[-1]
# Calculate likelihood contribution.
resid = (y - np.dot(x, betas))/sd
contrib = (1.0/sd)*norm.pdf(resid)
# Modifications.
contrib = np.clip(contrib, 1e-20, np.inf)
contrib = -np.log(contrib)
# Finishing.
return contrib
''' Main calculations.
'''
# Construct parameters.
paras = np.concatenate((betas_true, [sd_true]))
# Single individual.
individual_likelihood(paras, X[1,:], Y[1])
# Full sample.
sample_likelihood(paras, X, Y)
# Optimization.
x0 = paras
#x0 = [0.0, 0.0, 0.0, 1.0]
for optimizer in ['BFGS', 'Nelder-Mead']:
rslt = minimize(sample_likelihood, x0, args=(X, Y), method=optimizer)
Explanation: Estimation using Optimization Tools
Let us now determine $\hat{\beta}$ using Maximum Likelihood (ML) estimation. So, we need to maximize the following log-likelihood function:
$$L(\beta, \sigma) = \sum_{i = 1, ..., N} \log\left(\frac{1}{\sigma}\phi\left(\frac{Y_i - X_i\beta}{\sigma}\right)\right)$$
SciPy offers a convenient interface to alternative optimization algorithms. Let us check it out online.
End of explanation
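As a quick sanity check (an illustrative sketch), the ML point estimates can be compared against the closed-form solution obtained earlier; they should be close:
rslt_check = minimize(sample_likelihood, paras, args=(X, Y), method='BFGS')
print('ML estimates      ', rslt_check.x)
print('Closed-form (OLS) ', np.concatenate((beta_hat, [sd_hat])))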
from urllib.request import urlopen
from IPython.core.display import HTML
HTML(urlopen('http://bit.ly/1Ki3iXw').read().decode('utf-8'))
Explanation: Formatting
End of explanation
<END_TASK> |
15,964 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Graded = 7/8
Step1: & for multiple parameters
Step2: 2) What genres are most represented in the search results?
Edit your previous printout to also display a list of their genres in the format
GENRE_1, GENRE_2, GENRE_3". If there are no genres, print "No genres listed".
Tip
Step3: ANSWER
Step4: 3) Use a for loop to determine who BESIDES Lil Wayne has the highest popularity rating.
Is it the same artist who has the largest number of followers?
Step5: ANSWER
Step6: 5) Pick two of your favorite Lils to fight it out, and use their IDs to print out their top tracks.
Step7: Will the world explode if a musician swears?
Get an average popularity for their explicit songs vs. their non-explicit songs.
How many minutes of explicit songs do they have? Non-explicit?
Step8: QUESTION
Step9: 7) Since we're talking about Lils, what about Biggies?
How many total "Biggie" artists are there? How many total "Lil"s?
If you made 1 request every 5 seconds,
how long would it take to download information on all the Lils vs the Biggies?
Step10: 8) Out of the top 50 "Lil"s and the top 50 "Biggie"s, who is more popular on average? | <ASSISTANT_TASK:>
Python Code:
import requests
response = requests.get('https://api.spotify.com/v1/search?query=lil&type=artist&market=US&limit=50')
Explanation: Graded = 7/8
End of explanation
data = response.json()
data.keys()
artist_data = data['artists']
artist_data.keys()
lil_names = artist_data['items']
#lil_names = list of dictionaries = list of artist name, popularity, type, genres etc
Explanation: & for multiple parameters
End of explanation
for names in lil_names:
if not names['genres']:
print(names['name'], names['popularity'], "there are no genres listed")
else:
print(names['name'], names['popularity'], names["genres"])
#Join all the lists of genres in the dictionary and then count the number of elements in it
Explanation: 2) What genres are most represented in the search results?
Edit your previous printout to also display a list of their genres in the format
GENRE_1, GENRE_2, GENRE_3". If there are no genres, print "No genres listed".
Tip: "how to join a list Python" might be a helpful search
End of explanation
#ANSWER:
all_genres = []
for artist in lil_names:
print("All genres we've heard of:", all_genres)
#The conditional: None
print("Current artist has:", artist['genres'])
all_genres = all_genres + artist['genres']
#genre_list = ", ".join(artist['genres'])
#print(artist['name'], ":", genre_list)
all_genres.count('dirty soup rap')
all_genres.count('crunk')
#This is bad because dirty south rap shows up four times. We need a unique list of genres
for genre in all_genres:
genre_count = all_genres.count(genre)
print(genre, "shows up", genre_count, "times")
#To remove duplicates. You need to turn a list into a set.
unique_genres = set(all_genres)
for genre in unique_genres:
genre_count = all_genres.count(genre)
print(genre, "shows up", genre_count, "times")
#There is a library that comes with Python called Collections
#Inside of it is a magic thing called Counter.
import collections # will import the whole collections
#you can also type
from collections import Counter
#all_genres is a list of strings of genrs with duplicates
#counter will count all te genres for us
counts = collections.Counter(all_genres)
counts.most_common(4) #will give you the four most common genres
#HOW TO AUTOMATE GETTING ALL THE RESULTS
response = requests.get("https://api.spotify.com/v1/search?query=lil&type=artist&market=US&limit=10")
small_data = response.json()
small_data['artists']
print(len(small_data['artists']['items'])) #We only get 10 artists
print(data['artists']['total'])
import math
page_count = math.ceil(4502/50)
#math.ceil rounds up
#math.ceil(page_count)
page_count
#First page artists 1-50:
#https://api.spotify.com/v1/search?query=lil&type=artist&market=US&limit=50
#Second page artists 51-100:
#https://api.spotify.com/v1/search?query=lil&type=artist&market=US&limit=50&offset=50
#Third page artists 101-150:
#https://api.spotify.com/v1/search?query=lil&type=artist&market=US&limit=50&offset=100
#Fourth page artists 151-200:
#https://api.spotify.com/v1/search?query=lil&type=artist&market=US&limit=50&offset=150
for page in [0, 1, 2, 3, 4]:
offset = (page) * 50 #because page 2 is 50 and 2-1 = 1 x 50 = 50
print("We are on page", page, "with an offset of", offset)
all_artists = []
for page in range(91):
    #Get a page
    offset = page * 50
    print("We are on page", page, "with an offset of", offset)
    #Make the request with a changed offset: &offset=[offset]
    #data = response.json()
    #add all our new artists to our list of existing artists
    #all_artists = all_artists + data['artists']['items']
print("Successfully retrieved", len(all_artists), "artists")
Explanation: ANSWER:
for artist in artists:
print(artist['name'], artist['popularity'])
if len(artist['genres']) > 0:
genres = ", ".join(artist['genres'])
print("Genre list: ", genres
else:
print("No genres listed")
OR
if len(artist['genres']) == 0:
OR
if not len(artist['genres']) == 0:
"-".join(your_list) to join lists
End of explanation
#TA-Stephan:can't just print the names yourself. The code must do it.
for popularity in lil_names:
print(popularity['name'], popularity['popularity'], popularity['followers'])
print("Lil Yachty, Lil Uzi Vert, Lil Jon have the highest popularity ratings besides Lil Wayne, and they do not have the largest number of followers.")
Explanation: 3) Use a for loop to determine who BESIDES Lil Wayne has the highest popularity rating.
Is it the same artist who has the largest number of followers?
End of explanation
for kim in lil_names:
print(kim['name'], kim['id'])
response = requests.get("https://api.spotify.com/v1/artists/5tth2a3v0sWwV1C7bApBdX/")
kim_data = response.json()
#print(kim_data)
kim_followers = kim_data['followers']
total_kim_followers = kim_followers['total']
#print(total_kim_followers)
for artists in lil_names:
if artists["followers"]["total"] > total_kim_followers:
print(artists['name'], artists['popularity'])
#ANSWER:
artists = lil_names   # the reference answer calls the search results "artists"
for artist in artists:
    #print("Looking at", artist['name'])
    if artist['name'] == "Lil' Kim":
        print("Found Lil Kim")
        print(artist['popularity'])
    else:
        pass
        #print("Not Lil Kim")
lil_kim_popularity = 62
more_popular_than_lil_kim = []
for artist in artists:
    if artist['popularity'] > lil_kim_popularity:
        print(artist['name'], "is more popular with a score of", artist['popularity'])
        more_popular_than_lil_kim.append(artist['name'])
    else:
        print(artist['name'], "is less popular with a score of", artist['popularity'])
print("#### More popular than Lil Kim ####")
for artist_name in more_popular_than_lil_kim:
    print(artist_name)
more_popular_string = ", ".join(more_popular_than_lil_kim)
print("Artists more popular than Lil Kim are:", more_popular_string)
Explanation: ANSWER:
#jonathansoma.com/site/lede/foundations/python-patterns/looping-problems/
most_popular_name = ""
most_popular_score = 0
for artist in artists:
    print("Looking at", artist['name'], "who has a popularity score of", artist['popularity'])
    #THE CONDITIONAL aka what you are testing
    print("Comparing", artist['popularity'], "to", most_popular_score)
    if artist['popularity'] > most_popular_score and artist['name'] != 'Lil Wayne':
OR
    if artist['popularity'] > most_popular_score:
        if artist['name'] == "Lil Wayne":
            print("Nice try, Lil Wayne, we don't care")
        else:
            print("Not Lil Wayne, updating our notebook")
            #The change, aka what you are keeping track of
            most_popular_name = artist['name']
            most_popular_score = artist['popularity']
print(most_popular_name, most_popular_score)
But what if more than one person has the highest score?
Aggregation Problem: when you're looping through a series of objects and you sometimes
want to add one of those objects to a different list.
# target score is 72; the initial condition is an empty list
second_best_artists = []
for artist in artists:
    print("Looking at", artist['name'], "who has a popularity of", artist['popularity'])
    #2: Conditional. When we want to add someone to our list
    if artist['popularity'] == 72:
        print("!!! The artist's popularity is 72")
        #The Change
        #Add that artist to our list
        #.append(newthing) is how we do that in Python
        second_best_artists.append(artist['name'])
print("Our second best artists are:")
for artist in second_best_artists:
    print(artist)
Print a list of Lil's that are more popular than Lil' Kim.
End of explanation
#Let's pick Lil Wayne and Lil Mama because I don't know who most of these people are
wayne_id = "55Aa2cqylxrFIXC767Z865"
response = requests.get("https://api.spotify.com/v1/artists/" + wayne_id + "/top-tracks?country=US")
wayne_data = response.json()
top_wayne_tracks = wayne_data['tracks']
for track in top_wayne_tracks:
print(track["name"])
mama_id = "5qK5bOC6wLtuLhG5KvU17c"
response = requests.get("https://api.spotify.com/v1/artists/" + mama_id + "/top-tracks?country=US")
mama_data = response.json()
top_mama_tracks = mama_data['tracks']
for track in top_mama_tracks:
print(track["name"])
Explanation: 5) Pick two of your favorite Lils to fight it out, and use their IDs to print out their top tracks.
End of explanation
wayne_explicit_count = 0
wayne_exp_popularity_count = 0
wayne_ok_count = 0
wayne_ok_popularity_count = 0
wayne_explicit_len = 0
wayne_ok_len = 0
for track in top_wayne_tracks:
print(track['name'], track['explicit'], track['popularity'], track["duration_ms"])
    if track['explicit']:
wayne_explicit_count = wayne_explicit_count + 1
wayne_exp_popularity_count = wayne_exp_popularity_count + int(track['popularity'])
wayne_avg_pop = wayne_exp_popularity_count / wayne_explicit_count
wayne_explicit_len = wayne_explicit_len + int(track["duration_ms"])
if not track['explicit']:
wayne_ok_count = wayne_ok_count + 1
wayne_ok_popularity_count = wayne_ok_popularity_count + track['popularity']
wayne_ok_avg_pop = wayne_ok_popularity_count / wayne_ok_count
wayne_ok_len = wayne_ok_len + track["duration_ms"]
if wayne_explicit_count > 0:
print("The average popularity for Lil Wayne's explicit songs is", wayne_avg_pop)
#1 minute is 60000 milliseconds, who knew?
wayne_explicit_mins = int(wayne_explicit_len) / 60000
print("Lil Wayne has", wayne_explicit_mins, "minutes of explicit songs")
if wayne_ok_count > 0:
print("The average popularity for Lil Wayne's non-explicit songs is", wayne_ok_avg_pop)
wayne_ok_mins = int(wayne_ok_len) / 60000
print("Lil Wayne has", wayne_ok_mins, "minutes of explicit songs")
Explanation: Will the world explode if a musician swears?
Get an average popularity for their explicit songs vs. their non-explicit songs.
How many minutes of explicit songs do they have? Non-explicit?
End of explanation
mama_exp_count = 0
mama_exp_pop_count = 0
mama_ok_count = 0
mama_ok_pop_count = 0
mama_exp_len = 0
mama_ok_len = 0
for track in top_mama_tracks:
print(track['name'], track['explicit'], track['popularity'], track["duration_ms"])
    if track['explicit']:
mama_exp_count = mama_exp_count + 1
mama_exp_pop_count = mama_exp_pop_count + int(track['popularity'])
mama_avg_pop = int(mama_exp_pop_count) / int(mama_exp_count)
mama_exp_len = mama_exp_len + int(track["duration_ms"])
if not track['explicit']:
mama_ok_count = mama_ok_count + 1
mama_ok_pop_count = mama_ok_pop_count + int(track['popularity'])
mama_ok_avg_pop = int(mama_ok_pop_count) / int(mama_ok_count)
mama_ok_len = mama_ok_len + int(track["duration_ms"])
if mama_exp_count > 0:
#1 minute is 60000 milliseconds, who knew?
print("The average popularity for Lil Mama's xplicit songs is", mama_avg_pop)
mama_exp_mins = int(mama_exp_len) / 60000
print("Lil Mama has", mama_exp_mins, "minutes of explicit songs")
if mama_ok_count > 0:
print("The average popularity for Lil Mama's non-explicit songs is", mama_ok_avg_pop)
mama_ok_mins = int(mama_ok_len) / 60000
print("Lil Mama has", mama_ok_mins, "minutes of non-explicit songs")
Explanation: QUESTION: Why does this return both true and not true statements for non-explicit songs?
for track in top_mama_tracks:
    print(track['name'], track['explicit'])
    if True:
        print(track['name'], "is explicit and has a popularity of", track['popularity'])
    if not track['explicit']:
        print(track['name'], "is not explicit and has a popularity of", track['popularity'])
ANSWER: "if True:" is always true, so the first print runs for every track regardless of its
explicit flag, which is why non-explicit tracks trigger both prints. The test should be
"if track['explicit']:", as used in the corrected tallies above.
End of explanation
#We need to bypass the limit. And find out
response = requests.get('https://api.spotify.com/v1/search?query=biggie&type=artist')
biggie_data = response.json()
biggie_artists = biggie_data['artists']
biggie_names = biggie_artists['items']
biggie_count= 0
for name in biggie_names:
print(name['name'])
biggie_count = biggie_count + 1
print("There are a total number of", biggie_count, "biggie artists")
response = requests.get('https://api.spotify.com/v1/search?query=lil&type=artist')
lil_data = response.json()
lil_x_artists = lil_data['artists']
lil_x_names = lil_x_artists['items']
lil_x_count= 0
for name in lil_x_names:
print(name['name'])
    lil_x_count = lil_x_count + 1
print("There are a total number of", lil_x_count, "lil artists")
Explanation: 7) Since we're talking about Lils, what about Biggies?
How many total "Biggie" artists are there? How many total "Lil"s?
If you made 1 request every 5 seconds,
how long would it take to download information on all the Lils vs the Biggies?
End of explanation
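# (Sketch) Rough answer to the timing part of question 7, reusing the 'total' field from the
# search responses above and the paging math from earlier; assumes 50 results per request and
# 1 request every 5 seconds, as the question states.
lil_total = lil_x_artists['total']
biggie_total = biggie_artists['total']
lil_requests = math.ceil(lil_total / 50)
biggie_requests = math.ceil(biggie_total / 50)
print("All the Lils would take about", lil_requests * 5 / 60, "minutes to download")
print("All the Biggies would take about", biggie_requests * 5 / 60, "minutes to download")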
response = requests.get('https://api.spotify.com/v1/search?query=biggie&type=artist&limit=50')
b_data = response.json()
b_artists = b_data['artists']
b_names = b_artists['items']
b_pop_count = 0
b_number = 0
for names in b_names:
print(names['name'], names['popularity'])
b_number = b_number + 1
b_pop_count = b_pop_count + int(names['popularity'])
avg_b_pop = b_pop_count / int(b_number)
print("The Biggies' average popularity is", avg_b_pop)
lil_pop_count = 0
lil_number = 0
for names in lil_names:
print(names['name'], names['popularity'])
lil_number = lil_number + 1
lil_pop_count = lil_pop_count + int(names['popularity'])
avg_lil_pop = lil_pop_count / int(lil_number)
print("The Lils average popularity is", avg_lil_pop)
print("The Lils are far more popular")
Explanation: 8) Out of the top 50 "Lil"s and the top 50 "Biggie"s, who is more popular on average?
End of explanation
<END_TASK> |
15,965 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Convolutional Neural Networks
Step1: Run the next cell to load the "SIGNS" dataset you are going to use.
Step2: As a reminder, the SIGNS dataset is a collection of 6 signs representing numbers from 0 to 5.
<img src="images/SIGNS.png" style="width
Step3: In Course 2, you had built a fully-connected network for this dataset. But since this is an image dataset, it is more natural to apply a ConvNet to it.
To get started, let's examine the shapes of your data.
Step5: 1.1 - Create placeholders
TensorFlow requires that you create placeholders for the input data that will be fed into the model when running the session.
Exercise
Step7: Expected Output
<table>
<tr>
<td>
X = Tensor("Placeholder
Step9: Expected Output
Step11: Expected Output
Step13: Expected Output
Step14: Run the following cell to train your model for 100 epochs. Check if your cost after epoch 0 and 5 matches our output. If not, stop the cell and go back to your code!
Step15: Expected output | <ASSISTANT_TASK:>
Python Code:
import math
import numpy as np
import h5py
import matplotlib.pyplot as plt
import scipy
from PIL import Image
from scipy import ndimage
import tensorflow as tf
from tensorflow.python.framework import ops
from cnn_utils import *
%matplotlib inline
np.random.seed(1)
Explanation: Convolutional Neural Networks: Application
Welcome to Course 4's second assignment! In this notebook, you will:
Implement helper functions that you will use when implementing a TensorFlow model
Implement a fully functioning ConvNet using TensorFlow
After this assignment you will be able to:
Build and train a ConvNet in TensorFlow for a classification problem
We assume here that you are already familiar with TensorFlow. If you are not, please refer the TensorFlow Tutorial of the third week of Course 2 ("Improving deep neural networks").
1.0 - TensorFlow model
In the previous assignment, you built helper functions using numpy to understand the mechanics behind convolutional neural networks. Most practical applications of deep learning today are built using programming frameworks, which have many built-in functions you can simply call.
As usual, we will start by loading in the packages.
End of explanation
# Loading the data (signs)
X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset()
Explanation: Run the next cell to load the "SIGNS" dataset you are going to use.
End of explanation
# Example of a picture
index = 6
plt.imshow(X_train_orig[index])
print ("y = " + str(np.squeeze(Y_train_orig[:, index])))
Explanation: As a reminder, the SIGNS dataset is a collection of 6 signs representing numbers from 0 to 5.
<img src="images/SIGNS.png" style="width:800px;height:300px;">
The next cell will show you an example of a labelled image in the dataset. Feel free to change the value of index below and re-run to see different examples.
End of explanation
X_train = X_train_orig/255.
X_test = X_test_orig/255.
Y_train = convert_to_one_hot(Y_train_orig, 6).T
Y_test = convert_to_one_hot(Y_test_orig, 6).T
print ("number of training examples = " + str(X_train.shape[0]))
print ("number of test examples = " + str(X_test.shape[0]))
print ("X_train shape: " + str(X_train.shape))
print ("Y_train shape: " + str(Y_train.shape))
print ("X_test shape: " + str(X_test.shape))
print ("Y_test shape: " + str(Y_test.shape))
conv_layers = {}
Explanation: In Course 2, you had built a fully-connected network for this dataset. But since this is an image dataset, it is more natural to apply a ConvNet to it.
To get started, let's examine the shapes of your data.
End of explanation
# GRADED FUNCTION: create_placeholders
def create_placeholders(n_H0, n_W0, n_C0, n_y):
Creates the placeholders for the tensorflow session.
Arguments:
n_H0 -- scalar, height of an input image
n_W0 -- scalar, width of an input image
n_C0 -- scalar, number of channels of the input
n_y -- scalar, number of classes
Returns:
X -- placeholder for the data input, of shape [None, n_H0, n_W0, n_C0] and dtype "float"
Y -- placeholder for the input labels, of shape [None, n_y] and dtype "float"
### START CODE HERE ### (≈2 lines)
X = tf.placeholder(shape=[None, n_H0, n_W0, n_C0], dtype=np.float32)
Y = tf.placeholder(shape=[None, n_y], dtype=np.float32)
### END CODE HERE ###
return X, Y
X, Y = create_placeholders(64, 64, 3, 6)
print ("X = " + str(X))
print ("Y = " + str(Y))
Explanation: 1.1 - Create placeholders
TensorFlow requires that you create placeholders for the input data that will be fed into the model when running the session.
Exercise: Implement the function below to create placeholders for the input image X and the output Y. You should not define the number of training examples for the moment. To do so, you could use "None" as the batch size, it will give you the flexibility to choose it later. Hence X should be of dimension [None, n_H0, n_W0, n_C0] and Y should be of dimension [None, n_y]. Hint.
End of explanation
# GRADED FUNCTION: initialize_parameters
def initialize_parameters():
Initializes weight parameters to build a neural network with tensorflow. The shapes are:
W1 : [4, 4, 3, 8]
W2 : [2, 2, 8, 16]
Returns:
parameters -- a dictionary of tensors containing W1, W2
tf.set_random_seed(1) # so that your "random" numbers match ours
### START CODE HERE ### (approx. 2 lines of code)
W1 = tf.get_variable('W1',
[4, 4, 3, 8],
initializer=tf.contrib.layers.xavier_initializer(seed = 0))
W2 = tf.get_variable('W2',
[2, 2, 8, 16],
initializer=tf.contrib.layers.xavier_initializer(seed = 0))
### END CODE HERE ###
parameters = {"W1": W1,
"W2": W2}
return parameters
tf.reset_default_graph()
with tf.Session() as sess_test:
parameters = initialize_parameters()
init = tf.global_variables_initializer()
sess_test.run(init)
print("W1 = " + str(parameters["W1"].eval()[1,1,1]))
print("W2 = " + str(parameters["W2"].eval()[1,1,1]))
Explanation: Expected Output
<table>
<tr>
<td>
X = Tensor("Placeholder:0", shape=(?, 64, 64, 3), dtype=float32)
</td>
</tr>
<tr>
<td>
Y = Tensor("Placeholder_1:0", shape=(?, 6), dtype=float32)
</td>
</tr>
</table>
1.2 - Initialize parameters
You will initialize weights/filters $W1$ and $W2$ using tf.contrib.layers.xavier_initializer(seed = 0). You don't need to worry about bias variables as you will soon see that TensorFlow functions take care of the bias. Note also that you will only initialize the weights/filters for the conv2d functions. TensorFlow initializes the layers for the fully connected part automatically. We will talk more about that later in this assignment.
Exercise: Implement initialize_parameters(). The dimensions for each group of filters are provided below. Reminder - to initialize a parameter $W$ of shape [1,2,3,4] in Tensorflow, use:
python
W = tf.get_variable("W", [1,2,3,4], initializer = ...)
More Info.
End of explanation
# GRADED FUNCTION: forward_propagation
def forward_propagation(X, parameters):
Implements the forward propagation for the model:
CONV2D -> RELU -> MAXPOOL -> CONV2D -> RELU -> MAXPOOL -> FLATTEN -> FULLYCONNECTED
Arguments:
X -- input dataset placeholder, of shape (input size, number of examples)
parameters -- python dictionary containing your parameters "W1", "W2"
the shapes are given in initialize_parameters
Returns:
Z3 -- the output of the last LINEAR unit
# Retrieve the parameters from the dictionary "parameters"
W1 = parameters['W1']
W2 = parameters['W2']
### START CODE HERE ###
# CONV2D: stride of 1, padding 'SAME'
Z1 = tf.nn.conv2d(X, W1, strides=[1, 1, 1, 1], padding='SAME')
# RELU
A1 = tf.nn.relu(Z1)
# MAXPOOL: window 8x8, sride 8, padding 'SAME'
P1 = tf.nn.max_pool(A1,
ksize=[1, 8, 8, 1],
strides=[1, 8, 8, 1],
padding='SAME')
# CONV2D: filters W2, stride 1, padding 'SAME'
Z2 = tf.nn.conv2d(P1, W2, strides=[1, 1, 1, 1], padding='SAME')
# RELU
A2 = tf.nn.relu(Z2)
# MAXPOOL: window 4x4, stride 4, padding 'SAME'
P2 = tf.nn.max_pool(A2,
ksize=[1, 4, 4, 1],
strides=[1, 4, 4, 1],
padding='SAME')
# FLATTEN
P2 = tf.contrib.layers.flatten(P2)
    # FULLY-CONNECTED without non-linear activation function (do not call softmax).
# 6 neurons in output layer. Hint: one of the arguments should be "activation_fn=None"
Z3 = tf.contrib.layers.fully_connected(P2, 6, activation_fn=None)
### END CODE HERE ###
return Z3
tf.reset_default_graph()
with tf.Session() as sess:
np.random.seed(1)
X, Y = create_placeholders(64, 64, 3, 6)
parameters = initialize_parameters()
Z3 = forward_propagation(X, parameters)
init = tf.global_variables_initializer()
sess.run(init)
a = sess.run(Z3, {X: np.random.randn(2,64,64,3), Y: np.random.randn(2,6)})
print("Z3 = " + str(a))
Explanation: Expected Output:
<table>
<tr>
<td>
W1 =
</td>
<td>
[ 0.00131723 0.14176141 -0.04434952 0.09197326 0.14984085 -0.03514394 <br>
-0.06847463 0.05245192]
</td>
</tr>
<tr>
<td>
W2 =
</td>
<td>
[-0.08566415 0.17750949 0.11974221 0.16773748 -0.0830943 -0.08058 <br>
-0.00577033 -0.14643836 0.24162132 -0.05857408 -0.19055021 0.1345228 <br>
-0.22779644 -0.1601823 -0.16117483 -0.10286498]
</td>
</tr>
</table>
1.2 - Forward propagation
In TensorFlow, there are built-in functions that carry out the convolution steps for you.
tf.nn.conv2d(X,W1, strides = [1,s,s,1], padding = 'SAME'): given an input $X$ and a group of filters $W1$, this function convolves $W1$'s filters on X. The third input ([1,f,f,1]) represents the strides for each dimension of the input (m, n_H_prev, n_W_prev, n_C_prev). You can read the full documentation here
tf.nn.max_pool(A, ksize = [1,f,f,1], strides = [1,s,s,1], padding = 'SAME'): given an input A, this function uses a window of size (f, f) and strides of size (s, s) to carry out max pooling over each window. You can read the full documentation here
tf.nn.relu(Z1): computes the elementwise ReLU of Z1 (which can be any shape). You can read the full documentation here.
tf.contrib.layers.flatten(P): given an input P, this function flattens each example into a 1D vector it while maintaining the batch-size. It returns a flattened tensor with shape [batch_size, k]. You can read the full documentation here.
tf.contrib.layers.fully_connected(F, num_outputs): given a the flattened input F, it returns the output computed using a fully connected layer. You can read the full documentation here.
In the last function above (tf.contrib.layers.fully_connected), the fully connected layer automatically initializes weights in the graph and keeps on training them as you train the model. Hence, you did not need to initialize those weights when initializing the parameters.
Exercise:
Implement the forward_propagation function below to build the following model: CONV2D -> RELU -> MAXPOOL -> CONV2D -> RELU -> MAXPOOL -> FLATTEN -> FULLYCONNECTED. You should use the functions above.
In detail, we will use the following parameters for all the steps:
- Conv2D: stride 1, padding is "SAME"
- ReLU
- Max pool: Use an 8 by 8 filter size and an 8 by 8 stride, padding is "SAME"
- Conv2D: stride 1, padding is "SAME"
- ReLU
- Max pool: Use a 4 by 4 filter size and a 4 by 4 stride, padding is "SAME"
- Flatten the previous output.
- FULLYCONNECTED (FC) layer: Apply a fully connected layer without a non-linear activation function. Do not call the softmax here. This will result in 6 neurons in the output layer, which then get passed later to a softmax. In TensorFlow, the softmax and cost function are lumped together into a single function, which you'll call in a different function when computing the cost.
End of explanation
# GRADED FUNCTION: compute_cost
def compute_cost(Z3, Y):
Computes the cost
Arguments:
Z3 -- output of forward propagation (output of the last LINEAR unit), of shape (6, number of examples)
Y -- "true" labels vector placeholder, same shape as Z3
Returns:
cost - Tensor of the cost function
### START CODE HERE ### (1 line of code)
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=Z3,
labels=Y))
### END CODE HERE ###
return cost
tf.reset_default_graph()
with tf.Session() as sess:
np.random.seed(1)
X, Y = create_placeholders(64, 64, 3, 6)
parameters = initialize_parameters()
Z3 = forward_propagation(X, parameters)
cost = compute_cost(Z3, Y)
init = tf.global_variables_initializer()
sess.run(init)
a = sess.run(cost, {X: np.random.randn(4,64,64,3), Y: np.random.randn(4,6)})
print("cost = " + str(a))
Explanation: Expected Output:
<table>
<td>
Z3 =
</td>
<td>
[[-0.44670227 -1.57208765 -1.53049231 -2.31013036 -1.29104376 0.46852064] <br>
[-0.17601591 -1.57972014 -1.4737016 -2.61672091 -1.00810647 0.5747785 ]]
</td>
</table>
1.3 - Compute cost
Implement the compute cost function below. You might find these two functions helpful:
tf.nn.softmax_cross_entropy_with_logits(logits = Z3, labels = Y): computes the softmax entropy loss. This function both computes the softmax activation function as well as the resulting loss. You can check the full documentation here.
tf.reduce_mean: computes the mean of elements across dimensions of a tensor. Use this to sum the losses over all the examples to get the overall cost. You can check the full documentation here.
Exercise: Compute the cost below using the function above.
End of explanation
# GRADED FUNCTION: model
def model(X_train, Y_train, X_test, Y_test, learning_rate = 0.009,
num_epochs = 100, minibatch_size = 64, print_cost = True):
Implements a three-layer ConvNet in Tensorflow:
CONV2D -> RELU -> MAXPOOL -> CONV2D -> RELU -> MAXPOOL -> FLATTEN -> FULLYCONNECTED
Arguments:
X_train -- training set, of shape (None, 64, 64, 3)
Y_train -- test set, of shape (None, n_y = 6)
X_test -- training set, of shape (None, 64, 64, 3)
Y_test -- test set, of shape (None, n_y = 6)
learning_rate -- learning rate of the optimization
num_epochs -- number of epochs of the optimization loop
minibatch_size -- size of a minibatch
print_cost -- True to print the cost every 100 epochs
Returns:
train_accuracy -- real number, accuracy on the train set (X_train)
test_accuracy -- real number, testing accuracy on the test set (X_test)
parameters -- parameters learnt by the model. They can then be used to predict.
ops.reset_default_graph() # to be able to rerun the model without overwriting tf variables
tf.set_random_seed(1) # to keep results consistent (tensorflow seed)
seed = 3 # to keep results consistent (numpy seed)
(m, n_H0, n_W0, n_C0) = X_train.shape
n_y = Y_train.shape[1]
costs = [] # To keep track of the cost
# Create Placeholders of the correct shape
### START CODE HERE ### (1 line)
X, Y = create_placeholders(n_H0, n_W0, n_C0, n_y)
### END CODE HERE ###
# Initialize parameters
### START CODE HERE ### (1 line)
parameters = initialize_parameters()
### END CODE HERE ###
# Forward propagation: Build the forward propagation in the tensorflow graph
### START CODE HERE ### (1 line)
Z3 = forward_propagation(X, parameters)
### END CODE HERE ###
# Cost function: Add cost function to tensorflow graph
### START CODE HERE ### (1 line)
cost = compute_cost(Z3, Y)
### END CODE HERE ###
# Backpropagation: Define the tensorflow optimizer. Use an AdamOptimizer that minimizes the cost.
### START CODE HERE ### (1 line)
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)
### END CODE HERE ###
# Initialize all the variables globally
init = tf.global_variables_initializer()
# Start the session to compute the tensorflow graph
with tf.Session() as sess:
# Run the initialization
sess.run(init)
# Do the training loop
for epoch in range(num_epochs):
minibatch_cost = 0.
num_minibatches = int(m / minibatch_size) # number of minibatches of size minibatch_size in the train set
seed = seed + 1
minibatches = random_mini_batches(X_train, Y_train, minibatch_size, seed)
for minibatch in minibatches:
# Select a minibatch
(minibatch_X, minibatch_Y) = minibatch
# IMPORTANT: The line that runs the graph on a minibatch.
# Run the session to execute the optimizer and the cost, the feedict should contain a minibatch for (X,Y).
### START CODE HERE ### (1 line)
_ , temp_cost = sess.run([optimizer, cost],
feed_dict={X: minibatch_X,
Y: minibatch_Y})
### END CODE HERE ###
minibatch_cost += temp_cost / num_minibatches
# Print the cost every epoch
if print_cost == True and epoch % 5 == 0:
print ("Cost after epoch %i: %f" % (epoch, minibatch_cost))
if print_cost == True and epoch % 1 == 0:
costs.append(minibatch_cost)
# plot the cost
plt.plot(np.squeeze(costs))
plt.ylabel('cost')
plt.xlabel('iterations (per tens)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
# Calculate the correct predictions
predict_op = tf.argmax(Z3, 1)
correct_prediction = tf.equal(predict_op, tf.argmax(Y, 1))
# Calculate accuracy on the test set
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
print(accuracy)
train_accuracy = accuracy.eval({X: X_train, Y: Y_train})
test_accuracy = accuracy.eval({X: X_test, Y: Y_test})
print("Train Accuracy:", train_accuracy)
print("Test Accuracy:", test_accuracy)
return train_accuracy, test_accuracy, parameters
Explanation: Expected Output:
<table>
<td>
cost =
</td>
<td>
2.91034
</td>
</table>
1.4 Model
Finally you will merge the helper functions you implemented above to build a model. You will train it on the SIGNS dataset.
You have implemented random_mini_batches() in the Optimization programming assignment of course 2. Remember that this function returns a list of mini-batches.
Exercise: Complete the function below.
The model below should:
create placeholders
initialize parameters
forward propagate
compute the cost
create an optimizer
Finally you will create a session and run a for loop for num_epochs, get the mini-batches, and then for each mini-batch you will optimize the function. Hint for initializing the variables
End of explanation
_, _, parameters = model(X_train, Y_train, X_test, Y_test)
Explanation: Run the following cell to train your model for 100 epochs. Check if your cost after epoch 0 and 5 matches our output. If not, stop the cell and go back to your code!
End of explanation
fname = "images/thumbs_up.jpg"
image = np.array(ndimage.imread(fname, flatten=False))
my_image = scipy.misc.imresize(image, size=(64,64))
plt.imshow(my_image)
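# (Sketch) To feed this image to the trained ConvNet it must match the training input format:
# shape (1, 64, 64, 3) and scaled to [0, 1] like X_train above. Actually running a prediction
# would also require keeping the TensorFlow session open or saving/restoring the trained
# weights, which this notebook does not do, so this is preprocessing only.
my_image_input = my_image.reshape(1, 64, 64, 3) / 255.
print("my_image_input shape:", my_image_input.shape)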
Explanation: Expected output: although it may not match perfectly, your expected output should be close to ours and your cost value should decrease.
<table>
<tr>
<td>
**Cost after epoch 0 =**
</td>
<td>
1.917929
</td>
</tr>
<tr>
<td>
**Cost after epoch 5 =**
</td>
<td>
1.506757
</td>
</tr>
<tr>
<td>
**Train Accuracy =**
</td>
<td>
0.940741
</td>
</tr>
<tr>
<td>
**Test Accuracy =**
</td>
<td>
0.783333
</td>
</tr>
</table>
Congratulations! You have finished the assignment and built a model that recognizes SIGN language with almost 80% accuracy on the test set. If you wish, feel free to play around with this dataset further. You can actually improve its accuracy by spending more time tuning the hyperparameters, or using regularization (as this model clearly has a high variance).
Once again, here's a thumbs up for your work!
End of explanation
<END_TASK> |
15,966 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Softmax exercise
Adapt from the Stanford CS231n assignment1, find the original version on the course website.
In this exercise we will
Step5: Load CIFAR-10 data
Load the data and split into training / validation / testing datasets. See notebook 00 for more details.
Step7: Softmax Classifier
Step8: Sanity Check
In a randomly initiated weight, the loss should be close to $-\log(\frac{1}{10}) = -\log(0.1) \approx 2.303$
Step10: Vectorized loss function
Step12: Stochastic Gradient Descent
We now have vectorized and efficient expressions for the loss, the gradient and our gradient matches the numerical gradient. We are therefore ready to do SGD to minimize the loss. | <ASSISTANT_TASK:>
Python Code:
import random
import time
import numpy as np
import matplotlib.pyplot as plt
from cs231n.data_utils import load_CIFAR10
from cs231n.gradient_check import grad_check_sparse
# plotting setting
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
Explanation: Softmax exercise
Adapted from the Stanford CS231n assignment1; find the original version on the course website.
In this exercise we will:
implement a fully-vectorized loss function for the SVM
implement the fully-vectorized expression for its analytic gradient
check your implementation using numerical gradient
use a validation set to tune the learning rate and regularization strength
optimize the loss function with SGD
visualize the final learned weights
End of explanation
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
Load the CIFAR-10 dataset from disk and perform preprocessing to prepare
it for the linear classifier. These are the same steps as we used for the
SVM, but condensed to a single function.
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# subsample the data
mask = range(num_training, num_training + num_validation)
X_val = X_train[mask]
y_val = y_train[mask]
mask = range(num_training)
X_train = X_train[mask]
y_train = y_train[mask]
mask = range(num_test)
X_test = X_test[mask]
y_test = y_test[mask]
# Preprocessing: reshape the image data into rows
X_train = np.reshape(X_train, (X_train.shape[0], -1))
X_val = np.reshape(X_val, (X_val.shape[0], -1))
X_test = np.reshape(X_test, (X_test.shape[0], -1))
# Normalize the data: subtract the mean image
mean_image = np.mean(X_train, axis = 0)
X_train -= mean_image
X_val -= mean_image
X_test -= mean_image
# add bias dimension and transform into columns
X_train = np.hstack([X_train, np.ones((X_train.shape[0], 1))]).T
X_val = np.hstack([X_val, np.ones((X_val.shape[0], 1))]).T
X_test = np.hstack([X_test, np.ones((X_test.shape[0], 1))]).T
return X_train, y_train, X_val, y_val, X_test, y_test
# Invoke the above function to get our data.
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
print('Train data shape:', X_train.shape)
print('Train labels shape:', y_train.shape)
print('Validation data shape:', X_val.shape)
print('Validation labels shape:', y_val.shape)
print('Test data shape:', X_test.shape)
print('Test labels shape:', y_test.shape)
class LinearClassifier:
def __init__(self):
self.W = None
def train(
self, X, y,
learning_rate=1e-3, reg=1e-5, num_iters=100, batch_size=200,
verbose=False, seed=None
):
Train this linear classifier using stochastic gradient descent.
Inputs:
- X: D x N array of training data. Each training point is a
D-dimensional column.
- y: 1-dimensional array of length N with labels 0...K-1 for K classes.
- learning_rate: (float) learning rate for optimization.
- reg: (float) regularization strength.
- num_iters: (integer) number of steps to take when optimizing.
- batch_size: (integer) number of training examples to use
at each step.
- verbose: (boolean) If true, print progress during optimization.
Outputs:
A list containing the value of the loss function at each training
iteration.
dim, num_train = X.shape
# assume y takes values 0...K-1 where K is number of classes
num_classes = np.max(y) + 1
if self.W is None:
# lazily initialize W
self.W = np.random.randn(num_classes, dim) * 0.001
batch_rs = np.random.RandomState(seed)
# Run stochastic gradient descent to optimize W
loss_history = []
for it in range(num_iters):
batch_ix = batch_rs.choice(
np.arange(num_train),
size=batch_size, replace=True
)
X_batch = X[:, batch_ix]
y_batch = y[batch_ix]
# evaluate loss and gradient, internally use self.W
loss, grad = self.loss(X_batch, y_batch, reg)
loss_history.append(loss)
# perform parameter update
self.W -= grad * learning_rate
if verbose and it % 100 == 0:
print('iteration %d / %d: loss %f' % (it, num_iters, loss))
return loss_history
def predict(self, X):
Use the trained weights of this linear classifier to predict labels for
data points.
Inputs:
- X: D x N array of training data. Each column is a D-dimensional point.
Returns:
- y_pred: Predicted labels for the data in X. y_pred is a 1-dimensional
array of length N, and each element is an integer giving the predicted
class.
score_pred = self.W.dot(X) # shape: C x N
y_pred = score_pred.argmax(axis=0)
return y_pred
def loss(self, X_batch, y_batch, reg):
Compute the loss function and its derivative.
Subclasses will override this.
Inputs:
- X_batch: D x N array of data; each column is a data point.
- y_batch: 1-dimensional array of length N with labels 0...K-1, for K classes.
- reg: (float) regularization strength.
Returns: A tuple containing:
- loss as a single float
- gradient with respect to self.W; an array of the same shape as W
raise NotImplementedError
Explanation: Load CIFAR-10 data
Load the data and split into training / validation / testing datasets. See notebook 00 for more details.
End of explanation
def softmax_loss_naive(W, X, y, reg):
Softmax loss function, naive implementation (with loops)
Inputs:
- W: C x D array of weights
- X: D x N array of data. Data are D-dimensional columns
- y: 1-dimensional array of length N with labels 0...K-1, for K classes
- reg: (float) regularization strength
Returns:
a tuple of:
- loss as single float
- gradient with respect to weights W, an array of same size as W
# Initialize the loss and gradient to zero.
loss = 0.0
dW = np.zeros_like(W)
num_train = X.shape[1]
f = W.dot(X) # shape: C x N
p = np.zeros(num_train, dtype=np.float)
for i in range(num_train):
f_i = f[:, i].copy() # shape C x 1
f_i -= np.max(f_i) # improve numerical stability
f_i = np.exp(f_i)
x_i = X[:, i]
all_class_p_i = f_i / np.sum(f_i)
p[i] = all_class_p_i[y[i]]
# Update gradient
# all_class_p_i no used later, don't copy
dw_x_weight_i = all_class_p_i
dw_x_weight_i[y[i]] -= 1
dW -= dw_x_weight_i[:, np.newaxis] * x_i[np.newaxis, :]
loss += np.mean(-np.log(p))
# Add regularization
loss += 0.5 * reg * np.sum(W * W)
# Gradient
# ref: http://ufldl.stanford.edu/wiki/index.php/Softmax_Regression
dW /= -num_train
dW += reg * W
return loss, dW
Explanation: Softmax Classifier
End of explanation
%%time
# Generate a random softmax weight matrix and use it to compute the loss.
W = np.random.randn(10, 3073) * 0.0001
loss, grad = softmax_loss_naive(W, X_train, y_train, 0.0)
# As a rough sanity check, our loss should be something close to -log(0.1).
print('loss: %f' % loss)
print('sanity check: %f' % (-np.log(0.1)))
%%time
# Complete the implementation of softmax_loss_naive and implement a (naive)
# version of the gradient that uses nested loops.
loss, grad = softmax_loss_naive(W, X_train, y_train, 0.0)
# As we did for the SVM, use numeric gradient checking as a debugging tool.
# The numeric gradient should be close to the analytic gradient.
f = lambda w: softmax_loss_naive(w, X_train, y_train, 0.0)[0]
grad_numerical = grad_check_sparse(f, W, grad, 5) # increase 5 here to check for more times
Explanation: Sanity Check
In a randomly initiated weight, the loss should be close to $-\log(\frac{1}{10}) = -\log(0.1) \approx 2.303$
End of explanation
def softmax_loss_vectorized(W, X, y, reg):
Softmax loss function, vectorized version.
Inputs and outputs are the same as softmax_loss_naive.
# Initialize the loss and gradient to zero.
loss = 0.0
dW = np.zeros_like(W)
num_train = X.shape[1]
_train_ix = np.arange(num_train) # for sample coord 0...N-1
f = W.dot(X) # shape: C x N
f -= np.max(f, axis=0) # improve numerical stability
f = np.exp(f)
p = f / np.sum(f, axis=0) # shape: C x N
# loss function
loss += np.mean(-np.log(p[y, _train_ix]))
loss += 0.5 * reg * np.sum(W * W)
# gradient
# ref: http://ufldl.stanford.edu/wiki/index.php/Softmax_Regression
dW_x_weight = p # no use p later, don't copy
dW_x_weight[y, _train_ix] -= 1
# CxD -= CxN dot NxD
dW -= dW_x_weight.dot(X.T)
dW /= -num_train
dW += reg * W
return loss, dW
# Now that we have a naive implementation of the softmax loss function and its gradient,
# implement a vectorized version in softmax_loss_vectorized.
# The two versions should compute the same results, but the vectorized version should be
# much faster.
tic = time.time()
loss_naive, grad_naive = softmax_loss_naive(W, X_train, y_train, 0.00001)
toc = time.time()
print('naive loss: %e computed in %fs' % (loss_naive, toc - tic))
tic = time.time()
loss_vectorized, grad_vectorized = softmax_loss_vectorized(W, X_train, y_train, 0.00001)
toc = time.time()
print('vectorized loss: %e computed in %fs' % (loss_vectorized, toc - tic))
# The losses should match but your vectorized implementation should be much faster.
# The loss is a single number, so it is easy to compare the values computed
# by the two implementations. The gradient on the other hand is a matrix, so
# we use the Frobenius norm to compare them.
grad_difference = np.linalg.norm(grad_naive - grad_vectorized, ord='fro')
print('Loss difference: %f' % np.abs(loss_naive - loss_vectorized))
print('Gradient difference: %f' % grad_difference)
Explanation: Vectorized loss function
End of explanation
class Softmax(LinearClassifier):
A subclass that uses the Softmax + Cross-entropy loss function
def loss(self, X_batch, y_batch, reg):
return softmax_loss_vectorized(self.W, X_batch, y_batch, reg)
# Now implement SGD in LinearSVM.train() function and run it with the code below
softmax = Softmax()
tic = time.time()
loss_hist = softmax.train(
X_train, y_train,
learning_rate = 5e-8, reg=1e3,
# better params:
# learning_rate=5e-7, reg=5e4,
num_iters=1500, seed=9527, verbose=True
)
toc = time.time()
print('That took %fs' % (toc - tic))
# A useful debugging strategy is to plot the loss as a function of
# iteration number:
plt.plot(loss_hist)
plt.xlabel('Iteration number')
plt.ylabel('Loss value')
plt.show()
y_train_pred = softmax.predict(X_train)
print('training accuracy: {:.3%}'.format(np.mean(y_train == y_train_pred)))
y_val_pred = softmax.predict(X_val)
print('validation accuracy: {:.3%}'.format(np.mean(y_val == y_val_pred)))
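# (Sketch) The intro mentions tuning the learning rate and regularization strength on the
# validation set; the grid values below are illustrative assumptions, not prescribed ones.
results = {}
best_val = -1
best_softmax = None
for lr in [1e-7, 5e-7]:
    for reg in [2.5e4, 5e4]:
        clf = Softmax()
        clf.train(X_train, y_train, learning_rate=lr, reg=reg, num_iters=1500, seed=9527)
        val_acc = np.mean(y_val == clf.predict(X_val))
        results[(lr, reg)] = val_acc
        if val_acc > best_val:
            best_val, best_softmax = val_acc, clf
print('best validation accuracy during the sweep: %f' % best_val)
# (the cells below keep using the original 'softmax' model)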
# evaluate on test set
# Evaluate the best svm on test set
y_test_pred = softmax.predict(X_test)
test_accuracy = np.mean(y_test == y_test_pred)
print('softmax on raw pixels final test set accuracy: %f' % (test_accuracy, ))
# Visualize the learned weights for each class
w = softmax.W[:,:-1] # strip out the bias
w = w.reshape(10, 32, 32, 3)
w_min, w_max = np.min(w), np.max(w)
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for i in range(10):
plt.subplot(2, 5, i + 1)
# Rescale the weights to be between 0 and 255
wimg = 255.0 * (w[i].squeeze() - w_min) / (w_max - w_min)
plt.imshow(wimg.astype('uint8'))
plt.axis('off')
plt.title(classes[i])
plt.show()
Explanation: Stochastic Gradient Descent
We now have vectorized and efficient expressions for the loss, the gradient and our gradient matches the numerical gradient. We are therefore ready to do SGD to minimize the loss.
End of explanation
<END_TASK> |
15,967 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Opinion Game - One Topic in the network
These codes were developed as a part of the our opinion game model.
Import Python built-in functions we need to run and plot the game
Step1: Set up inline matplotlib
This tells Python to plot the figure right in here, in the notebook.
Step2: Import Game Modules From a Given Path
User have to edit the path and put the correct one on his/her machine.
Step3: Setting Up Game Parameters
Step4: seed PRNG
Step5: Set up the state of the system
State of the system includes
Step6: User Defined States Can go in the following cell
Step7: Run The Game
Step8: Reshape the history
Since here we have just one topic, the matrix can be converted to a 2D array, rather than 3.
This makes the plotting job easier
Step9: Convert to Panda DataFrame and then plot.
Step10: Plot the numpy array | <ASSISTANT_TASK:>
Python Code:
import numpy as np
from numpy.random import randn
import pandas as pd
from pandas import Series, DataFrame
import matplotlib.pyplot as plt
import matplotlib.animation as animation
import matplotlib.image as mpimg
from matplotlib import rcParams
import seaborn as sb
Explanation: Opinion Game - One Topic in the network
This code was developed as part of our opinion game model.
Import the Python libraries we need to run and plot the game
End of explanation
%matplotlib inline
#rcParams['figure.figsize'] = 5, 4
sb.set_style('whitegrid')
Explanation: Set up inline matplotlib
This tells Python to plot the figure right in here, in the notebook.
End of explanation
import sys
# search path for modules
sys.path.append('/Users/hn/Documents/GitHub/PyOpinionGame/')
import opiniongame.config as og_cfg
import opiniongame.IO as og_io
import opiniongame.coupling as og_coupling
import opiniongame.state as og_state
import opiniongame.adjacency as og_adj
import opiniongame.selection as og_select
import opiniongame.potentials as og_pot
import opiniongame.core as og_core
import opiniongame.stopping as og_stop
import opiniongame.opinions as og_opinions
Explanation: Import Game Modules From a Given Path
User have to edit the path and put the correct one on his/her machine.
End of explanation
config = og_cfg.staticParameters()
path = '/Users/hn/Documents/GitHub/PyOpinionGame/' # path to the 'staticParameters.cfg'
staticParameters = path + 'staticParameters.cfg'
config.readFromFile(staticParameters)
config.threshold = 0.0001
config.Kthreshold = 0.00001
config.startingseed = 20
config.learning_rate = 0.1
tau = 0.62 #tip of the tent potential function
config.printOut()
Explanation: Setting Up Game Parameters
End of explanation
print("SEEDING PRNG: "+str(config.startingseed))
np.random.seed(config.startingseed)
Explanation: seed PRNG: must do this before any random numbers are ever sampled during default generation
End of explanation
# These are the default matrices for the state of the system:
# If you want to change them, you can generate a new one in the following cell
default_weights = og_coupling.weights_no_coupling(config.popSize, config.ntopics)
default_initialOpinions = og_opinions.initialize_opinions(config.popSize, config.ntopics)
default_adj = og_adj.make_adj(config.popSize, 'full')
Explanation: Set up the state of the system
State of the system includes:
Weight Matrix (Matrix of the coupling wieghts between topic)
Initial Opinions of agents
Adjacency matrix of the network
End of explanation
# user_weights =
# user_initialOpinions =
# user_adj =
state = og_state.WorldState(adj=default_adj,
couplingWeights=default_weights,
initialOpinions=default_initialOpinions,
initialHistorySize=100,
historyGrowthScale=2)
state.validate()
ufuncs = og_cfg.UserFunctions(og_select.FastPairSelection,
og_stop.iterationStop,
og_pot.createTent(tau))
Explanation: User Defined States Can go in the following cell:
End of explanation
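# (Sketch) A user-defined initial-opinion matrix could be built directly with NumPy.
# The shape (config.popSize, config.ntopics) is inferred from the defaults above, and the
# uniform draw is an illustrative assumption, not part of the package; it would be passed to
# og_state.WorldState(..., initialOpinions=user_initialOpinions, ...) in place of the default.
user_initialOpinions = np.random.uniform(low=0.0, high=1.0,
                                         size=(config.popSize, config.ntopics))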
#
# run the game
#
state = og_core.run_until_convergence(config, state, ufuncs)
# Extract and save the history of the game
rdict = {}
rdict['history'] = state.history
og_io.saveMatrix('output.mat', rdict)
Explanation: Run The Game
End of explanation
time, population_size, no_of_topics = rdict['history'].shape
evolution = rdict['history'].reshape(time, population_size)
Explanation: Reshape the history
Since here we have just one topic, the history matrix can be converted to a 2D array rather than a 3D one.
This makes the plotting job easier:
End of explanation
df = pd.DataFrame(data=evolution, index=None, columns=None)
fig = plt.figure()
ax = df.plot(legend=False)
ax.set_xlabel("Time")
ax.set_ylabel("Opinions")
ax.set_title('Evolution of Opinions')
plt.show()
Explanation: Convert to Panda DataFrame and then plot.
End of explanation
fig1 = plt.figure()
plt.plot(evolution)
plt.xlabel('Time')
plt.ylabel('Opinions')
plt.title('Evolution of Opinions')
plt.show()
Explanation: Plot the numpy array
End of explanation
<END_TASK> |
15,968 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
LogGabor user guide
Table of content
What is the LogGabor package?
Installing
Importing the library
Properties of log-Gabor filters
Testing filter generation
Testing on a sample image
Building a pyramid
An example of fitting images with log-Gabor filters
Importing the library
Step1: To install the dependencies related to running this notebook, see Installing notebook dependencies.
Back to top
Step2: Perspectives
Step3: Back to top
performing a fit
Step4: With periodic boundaries, check that the filter "re-enters" the image from the other border
Step5: Back to top
TODO | <ASSISTANT_TASK:>
Python Code:
%load_ext autoreload
%autoreload 2
from LogGabor import LogGabor
parameterfile = 'https://raw.githubusercontent.com/bicv/LogGabor/master/default_param.py'
lg = LogGabor(parameterfile)
lg.set_size((32, 32))
Explanation: LogGabor user guide
Table of content
What is the LogGabor package?
Installing
Importing the library
Properties of log-Gabor filters
Testing filter generation
Testing on a sample image
Building a pyramid
An example of fitting images with log-Gabor filters
Importing the library
End of explanation
import os
import numpy as np
np.set_printoptions(formatter={'float': '{: 0.3f}'.format})
%matplotlib inline
import matplotlib.pyplot as plt
fig_width = 12
figsize=(fig_width, .618*fig_width)
Explanation: To install the dependencies related to running this notebook, see Installing notebook dependencies.
Back to top
End of explanation
def twoD_Gaussian(xy, x_pos, y_pos, theta, sf_0):
FT_lg = lg.loggabor(x_pos, y_pos, sf_0=np.absolute(sf_0), B_sf=lg.pe.B_sf, theta=theta, B_theta=lg.pe.B_theta)
return lg.invert(FT_lg).ravel()
# Create x and y indices
x = np.arange(lg.pe.N_X)
y = np.arange(lg.pe.N_Y)
x, y = xy = np.meshgrid(x, y)
#create data
x_pos, y_pos, theta, sf_0 = 14.6, 8.5, 12 * np.pi / 180., .1
data = twoD_Gaussian(xy, x_pos, y_pos, theta=theta, sf_0=sf_0)
# plot twoD_Gaussian data generated above
#plt.figure()
#plt.imshow(data.reshape(lg.pe.N_X, lg.pe.N_Y))
#plt.colorbar()
# add some noise to the data and try to fit the data generated beforehand
data /= np.abs(data).max()
data_noisy = data + .25*np.random.normal(size=data.shape)
# getting best match
C = lg.linear_pyramid(data_noisy.reshape(lg.pe.N_X, lg.pe.N_Y))
idx = lg.argmax(C)
initial_guess = [idx[0], idx[1], lg.theta[idx[2]], lg.sf_0[idx[3]]]
print ('initial_guess :', initial_guess, ', idx :', idx)
import scipy.optimize as opt
popt, pcov = opt.curve_fit(twoD_Gaussian, xy, data_noisy, p0=initial_guess)
data_fitted = twoD_Gaussian(xy, *popt)
extent = (0, lg.pe.N_X, 0, lg.pe.N_Y)
print ('popt :', popt, ', true : ', x_pos, y_pos, theta, sf_0)
fig, axs = plt.subplots(1, 3, figsize=(15, 5))
_ = axs[0].contourf(data.reshape(lg.pe.N_X, lg.pe.N_Y), 8, extent=extent, cmap=plt.cm.viridis, origin='upper')
_ = axs[1].imshow(data_noisy.reshape(lg.pe.N_X, lg.pe.N_Y), cmap=plt.cm.viridis, extent=extent)
_ = axs[2].contourf(data_fitted.reshape(lg.pe.N_X, lg.pe.N_Y), 8, extent=extent, cmap=plt.cm.viridis, origin='upper')
for ax in axs: ax.axis('equal')
Explanation: Perspectives: Better fits of the filters
Basically, it is possible to infer the best possible log-Gabor function, even if its parameters do not fall on the grid
Defining a reference log-gabor (look in the corners!)
End of explanation
from LogGabor import LogGaborFit
lg = LogGaborFit(parameterfile)
lg.set_size((32, 32))
x_pos, y_pos, theta, sf_0 = 14.6, 8.5, 12 * np.pi / 180., .1
data = lg.invert(lg.loggabor(x_pos, y_pos, sf_0=np.absolute(sf_0), B_sf=lg.pe.B_sf, theta=theta, B_theta=lg.pe.B_theta))
data /= np.abs(data).max()
data_noisy = data + .25*np.random.normal(size=data.shape)
data_fitted, params = lg.LogGaborFit(data_noisy.reshape(lg.pe.N_X, lg.pe.N_Y))
data_fitted.shape
params.pretty_print()
extent = (0, lg.pe.N_X, 0, lg.pe.N_Y)
print ('params :', params, ', true : ', x_pos, y_pos, theta, sf_0)
fig, axs = plt.subplots(1, 3, figsize=(15, 5))
_ = axs[0].contourf(data.reshape(lg.pe.N_X, lg.pe.N_Y), 8, extent=extent, cmap=plt.cm.viridis, origin='upper')
_ = axs[1].imshow(data_noisy.reshape(lg.pe.N_X, lg.pe.N_Y), cmap=plt.cm.viridis, extent=extent)
_ = axs[2].contourf(data_fitted.reshape(lg.pe.N_X, lg.pe.N_Y), 8, extent=extent, cmap=plt.cm.viridis, origin='upper')
for ax in axs: ax.axis('equal')
Explanation: Back to top
performing a fit
End of explanation
data_fitted, params = lg.LogGaborFit(data_noisy.reshape(lg.pe.N_X, lg.pe.N_Y), do_border=False)
extent = (0, lg.pe.N_X, 0, lg.pe.N_Y)
print ('params :', params, ', true : ', x_pos, y_pos, theta, sf_0)
fig, axs = plt.subplots(1, 3, figsize=(15, 5))
_ = axs[0].contourf(data.reshape(lg.pe.N_X, lg.pe.N_Y), 8, extent=extent, cmap=plt.cm.viridis, origin='upper')
_ = axs[1].imshow(data_noisy.reshape(lg.pe.N_X, lg.pe.N_Y), cmap=plt.cm.viridis, extent=extent)
_ = axs[2].contourf(data_fitted.reshape(lg.pe.N_X, lg.pe.N_Y), 8, extent=extent, cmap=plt.cm.viridis, origin='upper')
for ax in axs: ax.axis('equal')
Explanation: With periodic boundaries, check that the filter "re-enters" the image from the other border:
End of explanation
%load_ext watermark
%watermark -i -h -m -v -p numpy,matplotlib,scipy,imageio,SLIP,LogGabor -r -g -b
Explanation: Back to top
TODO: validation of fits
Back to top
more book keeping
End of explanation
<END_TASK> |
15,969 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Porting Bike-Sharing project-1 to RNN
Step1: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
Step2: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower over all ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.
Step3: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
Step4: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
Step5: Splitting the data into training, testing, and validation sets
We'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
Step6: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
Step7: Lets build the get_batches
the input of each time step will be one row of data ( one hour prediction array of 53 features)
Step8: Lets test the batches above
Step9: Build the network
Lets build a RNN with tensorflow
Inputs
Step10: Lstm
Step11: Output
Step12: Validation Accuracy
Step13: Trainning
Step14: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
Step15: Create a graph to compare the data and predictions | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import sys
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
Explanation: Porting Bike-Sharing project-1 to RNN
End of explanation
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head()
Explanation: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
End of explanation
rides[:24*10].plot(x='dteday', y='cnt')
Explanation: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower over all ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.
End of explanation
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
Explanation: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
End of explanation
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean)/std
Explanation: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
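As a minimal sketch of how the saved factors are used later (assuming the scaled_features dictionary built above), a scaled value can be mapped back to the original units:
# Sketch: undo the standardization for the 'cnt' target
mean, std = scaled_features['cnt']
original_cnt = data['cnt'] * std + mean   # back to raw ride counts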
End of explanation
# Save data for approximately the last 21 days
test_data = data[-21*24:]
# Now remove the test data from the data set
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
Explanation: Splitting the data into training, testing, and validation sets
We'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
End of explanation
# Hold out the last 60 days or so of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
train_targets.head()
Explanation: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
End of explanation
# each element of x is an array with 53 features and each element of y is an array with 3 targets
# each x is one hour features
def get_batches(x, y, n_seqs, n_steps):
'''Create a generator that returns batches of size
n_seqs x n_steps from arr.
Arguments
---------
array x and array y: Array you want to make batches from
n_seqs: Batch size, the number of sequences per batch
n_steps: Number of sequence steps per batch
'''
# Get the number of hours per batch and number of batches we can make
hours_per_batch = n_seqs * n_steps
n_batches = len(x)//hours_per_batch
# convert from Pandas to np remove the index column
x = x.reset_index().values[:,1:]
y = y.reset_index().values[:,1:]
# make only full batches
x, y = x[:n_batches*hours_per_batch], y[:n_batches*hours_per_batch]
    # TODO: this needs to be optimized
    # x_temp will be (n rows x n_steps wide) where each element is an array of 53 features
    # this first loop splits x into n rows that are n_steps wide
x_temp = []
y_temp = []
for st in range(0, n_batches*hours_per_batch, n_steps ):
x_temp.append( x[st:st+n_steps] )
y_temp.append( y[st:st+n_steps] )
x = np.asarray(x_temp )
y = np.asarray(y_temp )
# this splits x in n_seqs rows so the return is a batch of n_seqs rows with n_steps wide
# where each element is an array of 53 features (one hour from our data)
for sq in range(0,(n_batches*hours_per_batch)//n_steps, n_seqs ):
yield x[sq:sq+n_seqs,:,:], y[sq:sq+n_seqs,:,:]
Explanation: Let's build the get_batches generator
The input at each time step will be one row of data (one hour of observations, an array of 53 features).
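As a rough, illustrative sanity check of the arithmetic inside the generator (the batch size and step count here are example values only):
# Illustrative arithmetic for the generator above
n_seqs, n_steps = 20, 96
hours_per_batch = n_seqs * n_steps               # rows consumed per batch
n_batches = len(train_features) // hours_per_batch
print(n_batches, 'full batches of', n_seqs, 'sequences x', n_steps, 'hours')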
End of explanation
print(train_features.tail())
batches = get_batches(train_features, train_targets, 20, 96)
x, y = next(batches)
print(x.shape)
# x, y = next(batches)
# print(x.shape)
Explanation: Let's test the batches above
End of explanation
import tensorflow as tf
num_features = 56
num_targets = 3
batch_size = 10
# one step for each hour that we want the sequence to remember
num_steps = 50
lstm_size = 256
num_layers = 2
learning_rate = 0.0005
keep_prob_val = 0.75
# Declare placeholders we'll feed into the graph
inputs = tf.placeholder(tf.float32, [batch_size, None, num_features], name='inputs')
targets = tf.placeholder(tf.float32, [batch_size, None, num_targets], name='targets')
# Keep probability placeholder for drop out layers
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
learningRate = tf.placeholder(tf.float32, name='learningRate')
Explanation: Build the network
Let's build an RNN with TensorFlow
Inputs
End of explanation
# # Use a basic LSTM cell
# lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
# # Add dropout to the cell
# drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
# # Stack up multiple LSTM layers, for deep learning
# #cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers)
# initial_state = cell.zero_state(batch_size, tf.float32)
#Replaced the code above because TF with GPU was complaining
def lstm_cell():
cell = tf.contrib.rnn.BasicLSTMCell(lstm_size)
return tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob)
cell = tf.contrib.rnn.MultiRNNCell([lstm_cell() for _ in range(num_layers)], state_is_tuple = True)
initial_state = cell.zero_state(batch_size, tf.float32)
Explanation: Lstm
End of explanation
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
# One subtlety: activation_fn=None is needed so that the
# fully connected layer applies no activation function to the regression outputs
predictions = tf.contrib.layers.fully_connected(outputs, 3, activation_fn=None)
cost = tf.losses.mean_squared_error(targets, predictions)
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)
Explanation: Output
End of explanation
correct_pred = tf.equal(tf.cast(tf.round(predictions), tf.int32), tf.cast(tf.round(targets), tf.int32))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
Explanation: Validation Accuracy
End of explanation
epochs = 100
saver = tf.train.Saver()
#validation accuracy to plot
val_accuracy=[]
training_loss=[]
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
iteration = 1
for e in range(epochs):
state = sess.run(initial_state)
for ii, (x, y) in enumerate(get_batches(train_features, train_targets, batch_size, num_steps), 1):
feed = {inputs: x,
targets: y,
keep_prob: keep_prob_val,
initial_state: state}
loss, state, _ = sess.run([cost, final_state, optimizer], feed_dict=feed)
if iteration%5==0:
print("Epoch: {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Train loss: {:.3f}".format(loss))
training_loss.append(loss)
if iteration%25==0:
val_acc = []
val_state = sess.run(cell.zero_state(batch_size, tf.float32))
for x, y in get_batches(val_features, val_targets, batch_size, num_steps):
feed = {inputs: x,
targets: y,
keep_prob: 1,
initial_state: val_state}
batch_acc, val_state = sess.run([accuracy, final_state], feed_dict=feed)
val_acc.append(batch_acc)
val_accuracy.append( np.mean(val_acc) )
print("Val acc: {:.3f}".format(np.mean(val_acc)))
iteration +=1
saver.save(sess, "checkpoints/bike-sharing.ckpt")
plt.plot(val_accuracy, label='Accuracy')
plt.legend()
_ = plt.ylim()
plt.plot(training_loss, label='Loss')
plt.legend()
_ = plt.ylim()
Explanation: Training
End of explanation
test_acc = []
#with tf.Session(graph=graph) as sess:
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
test_state = sess.run(cell.zero_state(batch_size, tf.float32))
for ii, (x, y) in enumerate(get_batches(test_features, test_targets, batch_size, num_steps), 1):
feed = {inputs: x,
targets: y,
keep_prob: 1,
initial_state: test_state}
batch_acc, test_state = sess.run([accuracy, final_state], feed_dict=feed)
test_acc.append(batch_acc)
print("Test accuracy: {:.3f}".format(np.mean(test_acc)))
Explanation: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
End of explanation
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
test_state = sess.run(cell.zero_state(batch_size, tf.float32))
fig, ax = plt.subplots(figsize=(8,4))
mean, std = scaled_features['cnt']
batch = get_batches(test_features, test_targets, batch_size, num_steps)
x,y = next(batch)
feed = {inputs: x,
targets: y,
keep_prob: 1,
initial_state: test_state}
pred = sess.run([predictions], feed_dict=feed)
pred = pred[0].reshape(500,-1)
pred[:,0] *= std
pred[:,0] += mean
lf = pred[:,0]
# predictions = network.run(test_features).T*std + mean
ax.plot(lf, label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(lf))
ax.legend()
dates = pd.to_datetime(rides.loc[test_data.index]['dteday'])  # .loc replaces the deprecated .ix
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
Explanation: Create a graph to compare the data and predictions
End of explanation
<END_TASK> |
15,970 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
CAD (Central Administrative District of Moscow)
Step1: Collect the links to all apartments from the first thirty pages of search results
Step2: A standard block in which we fetch the page behind a link and parse it into a convenient format
Step3: Prices
A function that extracts the apartment price from a page, taken from the lecture
Step4: A function that collects all prices from the site over a chosen range of links ( l < r )
Step5: Now we use these functions to build the column with all the prices
Step6: Dist
We extract the coordinates in the same way as demonstrated in the lecture
Step7: Define a function that extracts the coordinates from the page text
Step8: And a function that collects all coordinates over a chosen range of links (this two-function pattern will be repeated everywhere)
Step9: Now we use the obtained coordinates to compute the distance to the city center. For this we use the formula for the distance between two points on a sphere and write it as a separate function
Step10: Rooms
We move on to the number of rooms, again done by analogy with the lecture notebook
Step11: Pages for apartments with many rooms (> 5) do not state the number of rooms explicitly; such apartments are listed as "multi-room". So the corresponding objects get the value 'mult' in the Rooms column. That is more truthful than writing, say, 6.
Step12: Metrdist and Walking
Now we start dealing with the distance to the metro and, along the way, with how to get there (on foot or by car)
Step13: Here we run into a difference from what we had before
Step14: Walking
We do the same for the walking column
Step15: Now let's move on to the data that we will extract from the table. These are the future columns
Step16: The information for Brick and New sits between the following two mandatory fields
Step17: The situation is as follows
Step18: New
Now let's move on to New: here it is either one of two values (1 for a new building, 0 for resale housing) or nan.
Now we move on
Floor, Nfloors
Step19: The information for Floor and Nfloors also sits between the mandatory fields "Этаж" (floor) and "Тип продажи" (sale type).
Floor is always specified and lies in the second cell; Nfloors may be missing, but if present it is in the third cell.
Floor
Step20: Nfloors
Step21: Moving on
Totsp, Livesp
Step22: Any of the fields to the right of "Общая площадь" (total area) may be missing, so we bound our search region only on the left.
Step23: Totsp and Livesp are always present. Here we run into a problem
Step24: Totsp
Step25: Livesp
Step26: Moving on to the optional fields
Kitsp, Bal, Tel
Step27: We can again bound the sub-block of information we need only on the left, by the field name "Общая площадь" (total area)
Step28: Kitsp
If the kitchen area is not specified, it appears in the table as a dash. It can also be a float with a comma as the decimal separator, like the previous areas. We account for this
Step29: Bal
There may be no balcony, there may be one, and there may even be more than one. The balcony information may also be missing.
So we are ready for values > 1. A dash means the information is missing; the word "нет" (no) means there is explicit information that there is no balcony.
Step30: Tel
A telephone may be present, may be absent, and the information about it may be missing. The word "да" (yes) tells us it is present, the word "нет" (no) that it is absent.
Step31: Final touches before assembling the DataFrame
N
Step32: This notebook contains only apartments from the Central Administrative District; I record that
District
Step33: Putting it all together
DataFrame | <ASSISTANT_TASK:>
Python Code:
district = 'http://www.cian.ru/cat.php?deal_type=sale&district%5B0%5D=13&district%5B1%5D=14&district%5B2%5D=15&district%5B3%5D=16&district%5B4%5D=17&district%5B5%5D=18&district%5B6%5D=19&district%5B7%5D=20&district%5B8%5D=21&district%5B9%5D=22&engine_version=2&offer_type=flat&p={}&room1=1&room2=1&room3=1&room4=1&room5=1&room6=1'
Explanation: CAD (Central Administrative District of Moscow)
End of explanation
links = []
for page in range(1, 30):
page_url = district.format(page)
search_page = requests.get(page_url)
search_page = search_page.content
search_page = BeautifulSoup(search_page, 'lxml')
flat_urls = search_page.findAll('div', attrs = {'ng-class':"{'serp-item_removed': offer.remove.state, 'serp-item_popup-opened': isPopupOpen}"})
flat_urls = re.split('http://www.cian.ru/sale/flat/|/" ng-class="', str(flat_urls))
for link in flat_urls:
if link.isdigit():
links.append(link)
Explanation: Collect the links to all apartments from the first thirty pages of search results
End of explanation
flat_url = 'http://www.cian.ru/sale/flat/' + str(links[0]) + '/'
#flat_url = 'http://www.cian.ru/sale/flat/150531912/'
flat_page = requests.get(flat_url)
flat_page = flat_page.content
flat_page = BeautifulSoup(flat_page, 'lxml')
Explanation: A standard block in which we fetch the page behind a link and parse it into a convenient format
End of explanation
def getPrice(flat_page):
price = flat_page.find('div', attrs={'class':'object_descr_price'})
price = re.split('<div>|руб|\W', str(price))
price = "".join([i for i in price if i.isdigit()][-4:])
return int(price)
Explanation: Prices
A function that extracts the apartment price from a page, taken from the lecture
End of explanation
def getAllPrices(l, r):
prices = []
for i in range(l, r):
flat_url = 'http://www.cian.ru/sale/flat/' + str(links[i]) + '/'
flat_page = requests.get(flat_url)
flat_page = flat_page.content
flat_page = BeautifulSoup(flat_page, 'lxml')
prices.append(getPrice(flat_page))
return prices
Explanation: A function that collects all prices from the site over a chosen range of links ( l < r )
End of explanation
prices = getAllPrices(0, len(links))
Explanation: Now we use these functions to build the column with all the prices
End of explanation
flat_url = 'http://www.cian.ru/sale/flat/' + str(links[0]) + '/'
#flat_url = 'http://www.cian.ru/sale/flat/150531912/'
flat_page = requests.get(flat_url)
flat_page = flat_page.content
flat_page = BeautifulSoup(flat_page, 'lxml')
coords = flat_page.find('div', attrs={'class':'map_info_button_extend'}).contents[1]
coords = re.split('&|center=|%2C', str(coords))
coords
coords_list = []
for item in coords:
if item[0].isdigit():
coords_list.append(item)
lat = float(coords_list[0])
lon = float(coords_list[1])
lat
lon
Explanation: Dist
We extract the coordinates in the same way as demonstrated in the lecture
End of explanation
def getCoords(flat_page):
coords = flat_page.find('div', attrs={'class':'map_info_button_extend'}).contents[1]
coords = re.split('&|center=|%2C', str(coords))
coords_list = []
for item in coords:
if item[0].isdigit():
coords_list.append(item)
lat = float(coords_list[0])
lon = float(coords_list[1])
return lat, lon
Explanation: Define a function that extracts the coordinates from the page text
End of explanation
def getAllCoordinates(l, r):
coordinates = []
for i in range(l, r):
flat_url = 'http://www.cian.ru/sale/flat/' + str(links[i]) + '/'
flat_page = requests.get(flat_url)
flat_page = flat_page.content
flat_page = BeautifulSoup(flat_page, 'lxml')
coordinates.append(getCoords(flat_page))
return coordinates
coordinates = getAllCoordinates(0, len(links))
Explanation: And a function that collects all coordinates over a chosen range of links (this two-function pattern is repeated everywhere)
End of explanation
from math import radians, cos, sin, asin, sqrt
AVG_EARTH_RADIUS = 6371
def haversine(point1, point2):
# extract latitude and longitude
lat1, lng1 = point1
lat2, lng2 = point2
# convert all of these values to radians
lat1, lng1, lat2, lng2 = map(radians, (lat1, lng1, lat2, lng2))
# compute the distance by the haversine formula
lat = lat2 - lat1
lng = lng2 - lng1
d = sin(lat * 0.5) ** 2 + cos(lat1) * cos(lat2) * sin(lng * 0.5) ** 2
h = 2 * AVG_EARTH_RADIUS * asin(sqrt(d))
return h
MSC_POINT_ZERO = (55.755831, 37.617673)
distance = []
for i in range(0, len(coordinates)):
distance.append(haversine(MSC_POINT_ZERO, coordinates[i]))
Explanation: Now we use the obtained coordinates to compute the distance to the city center, applying the formula for the distance between two points on a sphere (the haversine formula), written as a separate function
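A quick illustrative check of the function defined above (the second point is an arbitrary example near the center):
# Example: distance in kilometres between two (lat, lon) points
center = (55.755831, 37.617673)
nearby = (55.76, 37.64)
print(haversine(center, nearby))   # roughly 1.5 km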
End of explanation
flat_url = 'http://www.cian.ru/sale/flat/' + str(links[2]) + '/'
#flat_url = 'http://www.cian.ru/sale/flat/150844464/'
flat_page = requests.get(flat_url)
flat_page = flat_page.content
flat_page = BeautifulSoup(flat_page, 'lxml')
rooms_n = flat_page.find('div', attrs={'class':'object_descr_title'})
rooms_n = html_stripper(rooms_n)
rooms_n
re.split('-|\n', rooms_n)
Explanation: Rooms
We move on to the number of rooms, again done by analogy with the lecture notebook
End of explanation
def getRoom(flat_page):
rooms_n = flat_page.find('div', attrs={'class':'object_descr_title'})
rooms_n = html_stripper(rooms_n)
room_number = ''
flag = 0
for i in re.split('-|\n', rooms_n):
if 'много' in i:
flag = 1
break
elif 'комн' in i:
break
else:
room_number += i
if (flag):
room_number = 'mult'
room_number = "".join(room_number.split())
return room_number
def getAllRooms(l, r):
rooms = []
for i in range(l, r):
flat_url = 'http://www.cian.ru/sale/flat/' + str(links[i]) + '/'
flat_page = requests.get(flat_url)
flat_page = flat_page.content
flat_page = BeautifulSoup(flat_page, 'lxml')
rooms.append(getRoom(flat_page))
return rooms
rooms = getAllRooms(0, len(links))
Explanation: Pages for apartments with many rooms (> 5) do not state the number of rooms explicitly; such apartments are listed as "multi-room", so those objects get the value 'mult' in the Rooms column. That is more truthful than writing, say, 6.
End of explanation
#flat_url = 'http://www.cian.ru/sale/flat/' + str(links[2]) + '/'
flat_url = 'http://www.cian.ru/sale/flat/150387502/'
flat_page = requests.get(flat_url)
flat_page = flat_page.content
flat_page = BeautifulSoup(flat_page, 'lxml')
metro = flat_page.find('div', attrs={'class':'object_descr_metro'})
metro = re.split('metro_name|мин', str(metro))
metro
re.split('metro_name|мин', str(metro))
Explanation: Metrdist and Walking
Now we start dealing with the distance to the metro and, along the way, with how to get there (on foot or by car)
End of explanation
def getMetroDistance(flat_page):
metro = flat_page.find('div', attrs={'class':'object_descr_metro'})
metro = re.split('metro_name|мин', str(metro))
if (len(metro) > 2): # если оба поля не были заполнены, то предыдущий сплит даст размерность 2
metro_dist = 0
power = 0
# кусок metro[1] после сплита будет содержать в конце кучу хлама, потом количество минут (если есть)
flag = 0
for i in range(0, len(metro[1])):
if metro[1][-i-1].isdigit():
flag = 1
metro_dist += int(metro[1][-i-1]) * 10 ** power
power += 1
elif (flag == 1):
break
else:
metro_dist = np.nan
return metro_dist
def getAllMetroDistances(l, r):
metro_distance = []
for i in range(l, r):
flat_url = 'http://www.cian.ru/sale/flat/' + str(links[i]) + '/'
flat_page = requests.get(flat_url)
flat_page = flat_page.content
flat_page = BeautifulSoup(flat_page, 'lxml')
metro_distance.append(getMetroDistance(flat_page))
return metro_distance
metro_distances = getAllMetroDistances(0, len(links))
Explanation: Here we run into a difference from what we had before: some fields on CIAN are mandatory and these are not. Putting a default 0 into the data for an apartment with no such information would be wrong, because the seller may have skipped "3 minutes on foot" just as easily as "30 minutes by car". Treating those as equal (let alone filling everything with zeros) would hurt the model later, so such cells get nan.
Metrdist
A function that extracts the distance to the metro in minutes
End of explanation
def getMetroWalking(flat_page):
metro = flat_page.find('div', attrs={'class':'object_descr_metro'})
metro = re.split('metro_name|мин', str(metro))
if (len(metro) > 2): # если оба поля не были заполнены, то предыдущий сплит даст размерность 2
if 'пешк' in metro[2]:
walking = 1
elif 'машин' in metro[2]:
walking = 0
else:
walking = np.nan # да, проверка на то, отсутствовали ли оба поля была. мне лично не попадались ситуации, где бы не
# было заполнено только значение поля "пешком/на машине", но вдруг они есть? на такой случай проверка
else:
walking = np.nan
return walking
def getAllMetroWalking(l, r):
metro_walking = []
for i in range(l, r):
flat_url = 'http://www.cian.ru/sale/flat/' + str(links[i]) + '/'
flat_page = requests.get(flat_url)
flat_page = flat_page.content
flat_page = BeautifulSoup(flat_page, 'lxml')
metro_walking.append(getMetroWalking(flat_page))
return metro_walking
walking = getAllMetroWalking(0, len(links))
Explanation: Walking
We do the same for the walking column
End of explanation
#flat_url = 'http://www.cian.ru/sale/flat/' + str(links[2]) + '/'
flat_url = 'http://www.cian.ru/sale/flat/150387502/'
flat_page = requests.get(flat_url)
flat_page = flat_page.content
flat_page = BeautifulSoup(flat_page, 'lxml')
table = flat_page.find('table', attrs = {'class':'object_descr_props'})
table = html_stripper(table)
table
Explanation: Now let's move on to the data that we will extract from the table. These are the future columns:
Bricks - whether the building is brick or monolithic (CIAN has no reinforced-concrete option) versus anything else
New - new construction versus resale
Totsp, Livsp, Kitsp - total area, living area and kitchen area respectively
Tel - presence of a telephone
Bal - presence of a balcony
Floor - the floor
Nfloors - total number of floors in the building.
Of the fields above, only Totsp, Livsp and Floor are mandatory; everywhere else nan values are possible.
Bricks
End of explanation
building_block = re.split('Этаж|Тип продажи', table)[1]
building_block
Explanation: The information for Brick and New sits between the next two mandatory fields, "Этаж" (floor) and "Тип продажи" (sale type), so we can rely on these words being present here
End of explanation
def getBrick(flat_page):
table = flat_page.find('table', attrs = {'class':'object_descr_props'})
table = html_stripper(table)
brick = np.nan
building_block = re.split('Этаж|Тип продажи', table)[1]
if 'Тип дом' in building_block:
if (('кирпич' in building_block) | ('монолит' in building_block)):
brick = 1
elif (('панельн' in building_block) | ('деревян' in building_block) | ('сталин' in building_block) |
('блочн' in building_block)):
brick = 0
return brick
def getAllBricks(l, r):
bricks = []
for i in range(l, r):
flat_url = 'http://www.cian.ru/sale/flat/' + str(links[i]) + '/'
flat_page = requests.get(flat_url)
flat_page = flat_page.content
flat_page = BeautifulSoup(flat_page, 'lxml')
bricks.append(getBrick(flat_page))
return bricks
bricks = getAllBricks(0, len(links))
Explanation: The situation is as follows: the field "Тип дома" (building type) may be present or absent (if absent, brick = nan). If present, it may or may not carry information for the Brick field. Brick takes one of two values: 1 if the building is brick, monolithic or brick-monolithic, and 0 otherwise (the site lists the other possible options in advance: panel, block, wooden and Stalin-era). We write these considerations into a function
End of explanation
def getNew(flat_page):
table = flat_page.find('table', attrs = {'class':'object_descr_props'})
table = html_stripper(table)
new = np.nan
building_block = re.split('Этаж|Тип продажи', table)[1]
if 'Тип дом' in building_block:
if 'новостр' in building_block:
new = 1
elif 'втор' in building_block:
new = 0
return new
def getAllNew(l, r):
new = []
for i in range(l, r):
flat_url = 'http://www.cian.ru/sale/flat/' + str(links[i]) + '/'
flat_page = requests.get(flat_url)
flat_page = flat_page.content
flat_page = BeautifulSoup(flat_page, 'lxml')
new.append(getNew(flat_page))
return new
new = getAllNew(0, len(links))
Explanation: New
Now let's move on to New: here it is either one of two values (1 for a new building, 0 for resale housing) or nan.
End of explanation
#flat_url = 'http://www.cian.ru/sale/flat/' + str(links[2]) + '/'
flat_url = 'http://www.cian.ru/sale/flat/150387502/'
flat_page = requests.get(flat_url)
flat_page = flat_page.content
flat_page = BeautifulSoup(flat_page, 'lxml')
table = flat_page.find('table', attrs = {'class':'object_descr_props'})
table = html_stripper(table)
table
Explanation: Now we move on
Floor, Nfloors
End of explanation
building_block = re.split('Этаж|Тип продажи', table)[1]
building_block
floor_block = re.split('\xa0/\xa0|\n|\xa0', building_block)
floor_block
Explanation: The information for Floor and Nfloors also sits between the mandatory fields "Этаж" (floor) and "Тип продажи" (sale type).
End of explanation
def getFloor(flat_page):
table = flat_page.find('table', attrs = {'class':'object_descr_props'})
table = html_stripper(table)
floor_is = 0
building_block = re.split('Этаж|Тип продажи', table)[1]
floor_block = re.split('\xa0/\xa0|\n|\xa0', building_block)
for i in range(1, len(floor_block[2]) + 1):
if(floor_block[2][-i].isdigit()):
floor_is += int(floor_block[2][-i]) * 10**(i - 1)
return floor_is
def getAllFloors(l, r):
floors = []
for i in range(l, r):
flat_url = 'http://www.cian.ru/sale/flat/' + str(links[i]) + '/'
flat_page = requests.get(flat_url)
flat_page = flat_page.content
flat_page = BeautifulSoup(flat_page, 'lxml')
floors.append(getFloor(flat_page))
return floors
floors = getAllFloors(0, len(links))
Explanation: Floor is always specified and lies in the second cell; Nfloors may be missing, but if present it is in the third cell.
Floor
End of explanation
def getNFloor(flat_page):
table = flat_page.find('table', attrs = {'class':'object_descr_props'})
table = html_stripper(table)
floors_count = np.nan
building_block = re.split('Этаж|Тип продажи', table)[1]
floor_block = re.split('\xa0/\xa0|\n|\xa0', building_block)
if floor_block[3].isdigit():
floors_count = int(floor_block[3])
return floors_count
def getAllNFloors(l, r):
nfloors = []
for i in range(l, r):
flat_url = 'http://www.cian.ru/sale/flat/' + str(links[i]) + '/'
flat_page = requests.get(flat_url)
flat_page = flat_page.content
flat_page = BeautifulSoup(flat_page, 'lxml')
nfloors.append(getNFloor(flat_page))
return nfloors
nfloors = getAllNFloors(0, len(links))  # use the full range of links, like the other columns
Explanation: Nfloors
End of explanation
#flat_url = 'http://www.cian.ru/sale/flat/' + str(links[2]) + '/'
flat_url = 'http://www.cian.ru/sale/flat/150387502/'
flat_page = requests.get(flat_url)
flat_page = flat_page.content
flat_page = BeautifulSoup(flat_page, 'lxml')
table = flat_page.find('table', attrs = {'class':'object_descr_props'})
table = html_stripper(table)
table
Explanation: Moving on
Totsp, Livesp
End of explanation
space_block = re.split('Общая площадь', table)[1]
space_block
Explanation: Any of the fields to the right of "Общая площадь" (total area) may be missing, so we bound our search region only on the left.
End of explanation
def myStrToFloat(string):
delimiter = 0
value = 0
for i in range(0, len(string)):
if string[i] == ',':
delimiter = i
for i in range(0, delimiter):
value += int(string[delimiter - i - 1]) * 10 ** i
for i in range(1, len(string) - delimiter):
value += (int(string[delimiter + i]) * (10 ** (i - 2)))
return value
Explanation: Totsp and Livesp are always present. Here we run into a problem: CIAN writes floats with a comma as the decimal separator while Python uses a dot, so the default float(str) will not work. We sketch our own converter.
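A quick illustration of the converter defined above (the second line shows a simpler equivalent for this particular case):
print(myStrToFloat('45,5'))               # 45.5
print(float('45,5'.replace(',', '.')))    # also 45.5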
End of explanation
def getTotsp(flat_page):
table = flat_page.find('table', attrs = {'class':'object_descr_props'})
table = html_stripper(table)
space_block = re.split('Общая площадь', table)[1]
total = re.split('Площадь комнат', space_block)[0]
total_space = re.split('\n|\xa0', total)[2]
if total_space.isdigit():
total_space = int(total_space)
else:
total_space = myStrToFloat(total_space)
return total_space
def getAllTotsp(l, r):
totsp = []
for i in range(l, r):
flat_url = 'http://www.cian.ru/sale/flat/' + str(links[i]) + '/'
flat_page = requests.get(flat_url)
flat_page = flat_page.content
flat_page = BeautifulSoup(flat_page, 'lxml')
totsp.append(getTotsp(flat_page))
return totsp
totsp = getAllTotsp(0, len(links))
Explanation: Totsp
End of explanation
def getLivesp(flat_page):
table = flat_page.find('table', attrs = {'class':'object_descr_props'})
table = html_stripper(table)
space_block = re.split('Общая площадь', table)[1]
living = re.split('Жилая площадь', space_block)[1]
living_space = re.split('\n|\xa0', living)[2]
if living_space.isdigit():
living_space = int(living_space)
else:
living_space = myStrToFloat(living_space)
return living_space
def getAllLivesp(l, r):
livesp = []
for i in range(l, r):
flat_url = 'http://www.cian.ru/sale/flat/' + str(links[i]) + '/'
flat_page = requests.get(flat_url)
flat_page = flat_page.content
flat_page = BeautifulSoup(flat_page, 'lxml')
livesp.append(getLivesp(flat_page))
return livesp
livesp = getAllLivesp(0, len(links))
Explanation: Livesp
End of explanation
#flat_url = 'http://www.cian.ru/sale/flat/' + str(links[2]) + '/'
flat_url = 'http://www.cian.ru/sale/flat/150387502/'
flat_page = requests.get(flat_url)
flat_page = flat_page.content
flat_page = BeautifulSoup(flat_page, 'lxml')
table = flat_page.find('table', attrs = {'class':'object_descr_props'})
table = html_stripper(table)
table
Explanation: Moving on to the optional fields
Kitsp, Bal, Tel
End of explanation
space_block = re.split('Общая площадь', table)[1]
space_block
optional_block = re.split('Жилая площадь', space_block)[1]
optional_block
Explanation: We can again bound the sub-block of information we need only on the left, by the field name "Общая площадь" (total area)
End of explanation
def getKitsp(flat_page):
table = flat_page.find('table', attrs = {'class':'object_descr_props'})
table = html_stripper(table)
space_block = re.split('Общая площадь', table)[1]
optional_block = re.split('Жилая площадь', space_block)[1]
kitchen_space = np.nan
if 'Площадь кухни' in optional_block:
kitchen_block = re.split('Площадь кухни', optional_block)[1]
if re.split('\n|\xa0', kitchen_block)[2] != '–':
if re.split('\n|\xa0', kitchen_block)[2].isdigit():
kitchen_space = int(re.split('\n|\xa0', kitchen_block)[2])
else:
kitchen_space = myStrToFloat(re.split('\n|\xa0', kitchen_block)[2])
return kitchen_space
def getAllKitsp(l, r):
kitsp = []
for i in range(l, r):
flat_url = 'http://www.cian.ru/sale/flat/' + str(links[i]) + '/'
flat_page = requests.get(flat_url)
flat_page = flat_page.content
flat_page = BeautifulSoup(flat_page, 'lxml')
kitsp.append(getKitsp(flat_page))
return kitsp
kitsp = getAllKitsp(0, len(links))
Explanation: Kitsp
If the kitchen area is not specified, it appears in the table as a dash. It can also be a float with a comma as the decimal separator, like the previous areas. We account for this
End of explanation
def getBal(flat_page):
table = flat_page.find('table', attrs = {'class':'object_descr_props'})
table = html_stripper(table)
space_block = re.split('Общая площадь', table)[1]
optional_block = re.split('Жилая площадь', space_block)[1]
balcony = np.nan
if 'Балкон' in optional_block:
balcony_block = re.split('Балкон', optional_block)[1]
if re.split('\n', balcony_block)[1] != 'нет':
if re.split('\n', balcony_block)[1] != '–':
balcony = int(re.split('\n', balcony_block)[1][0])
else:
balcony = 0
return balcony
def getAllBal(l, r):
bal = []
for i in range(l, r):
flat_url = 'http://www.cian.ru/sale/flat/' + str(links[i]) + '/'
flat_page = requests.get(flat_url)
flat_page = flat_page.content
flat_page = BeautifulSoup(flat_page, 'lxml')
bal.append(getBal(flat_page))
return bal
bal = getAllBal(0, len(links))
Explanation: Bal
There may be no balcony, there may be one, and there may even be more than one. The balcony information may also be missing.
So we are ready for values > 1. A dash means the information is missing; the word "нет" (no) means there is explicit information that there is no balcony.
End of explanation
def getTel(flat_page):
table = flat_page.find('table', attrs = {'class':'object_descr_props'})
table = html_stripper(table)
space_block = re.split('Общая площадь', table)[1]
optional_block = re.split('Жилая площадь', space_block)[1]
telephone = np.nan
if 'Телефон' in optional_block:
telephone_block = re.split('Телефон', optional_block)[1]
if re.split('\n', telephone_block)[1] == 'да':
telephone = 1
elif re.split('\n', telephone_block)[1] == 'нет':
telephone = 0
return telephone
def getAllTel(l, r):
tel = []
for i in range(l, r):
flat_url = 'http://www.cian.ru/sale/flat/' + str(links[i]) + '/'
flat_page = requests.get(flat_url)
flat_page = flat_page.content
flat_page = BeautifulSoup(flat_page, 'lxml')
tel.append(getTel(flat_page))
return tel
tel = getAllTel(0, len(links))
Explanation: Tel
A telephone may be present, may be absent, and the information about it may be missing. The word "да" (yes) tells us it is present, the word "нет" (no) that it is absent.
End of explanation
N = []
for i in range(0, len(links)):
N.append(i)
Explanation: Final touches before assembling the DataFrame
N
End of explanation
district = []
for i in range(0, len(links)):
district.append('CAD')
Explanation: This notebook contains only apartments from the Central Administrative District, so I record that
District
End of explanation
data = dict([('New', new), ('Bal', bal), ('Tel', tel), ('Walk', walking), ('Metrdist', metro_distances), ('Nfloors', nfloors), ('Floor', floors), ('Totsp', totsp), ('Livesp', livesp), ('Kitsp', kitsp), ('N', N), ('Price', prices), ('Rooms', rooms), ('Distance', distance), ('Brick', bricks), ('District', district)])
df = pd.DataFrame(data)
df.T
df.to_csv('cian.csv', index=False)
Explanation: Putting it all together
DataFrame
End of explanation
<END_TASK> |
15,971 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Table of Contents
<p><div class="lev1"><a href="#Data-preparation"><span class="toc-item-num">1 - </span>Data preparation</a></div><div class="lev1"><a href="#Data-pipeline"><span class="toc-item-num">2 - </span>Data pipeline</a></div><div class="lev2"><a href="#Selecting-appliances"><span class="toc-item-num">2.1 - </span>Selecting appliances</a></div><div class="lev2"><a href="#Selecting-time-sequences"><span class="toc-item-num">2.2 - </span>Selecting time sequences</a></div><div class="lev2"><a href="#Selecting-houses"><span class="toc-item-num">2.3 - </span>Selecting houses</a></div><div class="lev2"><a href="#Dealing-with-unbalanced-data
Step1: Data preparation
We have power data for 6 houses from main meters that measure the aggregate power consumption of the house and from each appliance separately. For example here we show the aggregated data (sum of all appliances in the house) for house 2.
Step2: One contribution to this aggregated data is the cycles of the fridge plotted below.
Step3: Of course there are many other appliances that contribute to the aggregated data observed above. The list of appliances in this house is shown here
Step4: The sum of all these appliances is highly correlated to the aggregated data.
Step5: We can visualize this looking at the main meters for one day and the sum of all the appliances for one day.
Step6: The correlation is high because the 9 appliances listed above consume a significant proportion of the energy and represent a range of different power 'signatures'. Here is one signature from the fridge and one signature from the microwave.
Step7: Obviously the microwave signature is a peak of short duration while the fridge cycle is a bit longer. In this study we try to retrieve from the aggregated data the times an appliance switches on, the times it switches off and the average power consumption in each cycle. We call this the rectangle method. Essentially, we want to retrieve from the aggregated data this type of representation
Step8: Data pipeline
Selecting appliances
We train each neural network per appliance. This is different from the CO and FHMM methods.
Selecting time sequences
We down-sample the main meters and the submeters to 6 samples per second so that the aggregated and submeter sequences are properly aligned. We throw away any activation shorter than some threshold duration to avoid spurious spikes.
Selecting houses
We choose to train the algorithm on houses 1, 2, 3 and 6 and test on house 5.
Dealing with unbalanced data
Step9: Standardisation of the input data (aggregated data)
A typical step in the data pipeline of a neural network is standardization of the data. For each sequence of 512 samples (= 85 seconds) we subtract the mean to center the sequence. Furthermore, every input sequence is divided by the standard deviation of a random sample in the training set. In this case we cannot divide each sequence by its own standard deviation because that would delete information about the scale of the signal.
This is done via
Step11: Output data (start time, end time and average power)
The output of the neural network is 3 neurons
Step14: Implementation strategy for real time data augmentation
While the neural network runs an NVIDIA GeForce GT 750M (GPU) we maintain the CPU busy doing the data augmentation in real time (load aggregated data, create the synthetic data, preprocess the mini-batch to be fed to the neural network). For this we create a python generator that creates a queue of 50 mini-batch and feed them successively to the GPU for training.
The pipeline class can be found in neuralnilm.data.datapiline at https
Step15: Network architecture
We use a convolutional neural network (ConvNet) to take advantage of translation invariance: we want the ConvNet to recognize a target appliance activation anywhere in the sequence. For this project we have tried multiple architectures that are reported later on. These architectures all have a first convolutional layer of filter size 3 and stride 1. We have played with both the filter size and the number of output filters of the first layer and found that 16 filters is a reasonable number; increasing the number of filters in the first layer did not significantly improve the scores.
The best neural network we found consists of
Input layer
Step16: We found by experimenting that the best optimizer is Adam (http
Step17: where dw is the gradient of the loss function with respect to the considered weight and w is the considered weight. The hyperparameters for this optimizer are beta1, beta2 and eps. We set beta_1=0.9, beta_2=0.999, epsilon=1e-08 and the learning rate = 1e-3.
Experimenting with ConvNet architectures
For the implementation of the ConvNet we use Keras (http
Step18: One convolutional layer
Here we present the results from the ConvNet detailed in the previous cell. We first load the ConvNet architecture, the weights and the history of the training and validation losses (the network has been trained previously using GPU).
Step19: This is a very good F1 score, much better than with any other method tested before. Here we show the evolution of the training loss and the validation loss.
Step20: The training and validation losses decrease together during training. There is no overfitting. We stopped the training after 20 epochs, but the model was still improving. 20 epochs on GPU took about 1.5 hours.
Two convolutional layers
Since we are not overfitting, we did not add any pooling layer or dropout to the previous network. However, we experimented with another network with an additional convolutional layer and trained the model over 30 epochs. The model is the following
Step21: There are 2 convolutional layers with 16 filters and 2 dense layers and the output layer. In this case the results are fairly similar to the network presented above.
Step22: Results for the microwave
Here we run the ConvNet with one layer on the microwave. We train using houses 1 and 2 and test on house 5.
Python Code:
# for plotting purposes
%matplotlib inline
from matplotlib.pylab import plt
from matplotlib import rcParams
dark_colors = ["#A51C30", "#808080",
(0.8509803921568627, 0.37254901960784315, 0.00784313725490196),
(0.4588235294117647, 0.4392156862745098, 0.7019607843137254),
(0.9058823529411765, 0.1607843137254902, 0.5411764705882353),
(0.4, 0.6509803921568628, 0.11764705882352941),
(0.9019607843137255, 0.6705882352941176, 0.00784313725490196),
(0.6509803921568628, 0.4627450980392157, 0.11372549019607843),
(0.4, 0.4, 0.4)]
# dark_colors = ['#A51C30', '#808080']
rcParams['figure.figsize'] = (13, 6)
rcParams['figure.dpi'] = 150
rcParams['axes.color_cycle'] = dark_colors
rcParams['lines.linewidth'] = 2
rcParams['axes.facecolor'] = "white"
rcParams['axes.titlesize'] = 20
rcParams['axes.labelsize'] = 17.5
rcParams['xtick.labelsize'] = 15
rcParams['ytick.labelsize'] = 15
rcParams['legend.fontsize'] = 17.5
rcParams['patch.edgecolor'] = 'none'
rcParams['grid.color']="gray"
rcParams['grid.linestyle']="-"
rcParams['grid.linewidth'] = 0.3
rcParams['grid.alpha']=1
rcParams['text.color'] = "444444"
rcParams['axes.labelcolor'] = "444444"
rcParams['ytick.color'] = "444444"
rcParams['xtick.color'] = "444444"
Explanation: Table of Contents
<p><div class="lev1"><a href="#Data-preparation"><span class="toc-item-num">1 - </span>Data preparation</a></div><div class="lev1"><a href="#Data-pipeline"><span class="toc-item-num">2 - </span>Data pipeline</a></div><div class="lev2"><a href="#Selecting-appliances"><span class="toc-item-num">2.1 - </span>Selecting appliances</a></div><div class="lev2"><a href="#Selecting-time-sequences"><span class="toc-item-num">2.2 - </span>Selecting time sequences</a></div><div class="lev2"><a href="#Selecting-houses"><span class="toc-item-num">2.3 - </span>Selecting houses</a></div><div class="lev2"><a href="#Dealing-with-unbalanced-data:-selecting-aggregated-data-windows"><span class="toc-item-num">2.4 - </span>Dealing with unbalanced data: selecting aggregated data windows</a></div><div class="lev2"><a href="#Synthetic-aggregated-data"><span class="toc-item-num">2.5 - </span>Synthetic aggregated data</a></div><div class="lev1"><a href="#Standardisation-of-the-input-data-(aggregated-data)"><span class="toc-item-num">3 - </span>Standardisation of the input data (aggregated data)</a></div><div class="lev1"><a href="#Output-data-(start-time,-end-time-and-average-power)"><span class="toc-item-num">4 - </span>Output data (start time, end time and average power)</a></div><div class="lev1"><a href="#Scores---performance-evaluation-of-the-neural-network"><span class="toc-item-num">5 - </span>Scores - performance evaluation of the neural network</a></div><div class="lev1"><a href="#Implementation-strategy-for-real-time-data-augmentation"><span class="toc-item-num">6 - </span>Implementation strategy for real time data augmentation</a></div><div class="lev1"><a href="#Network-architecture"><span class="toc-item-num">7 - </span>Network architecture</a></div><div class="lev1"><a href="#Loss-function-and-optimizer"><span class="toc-item-num">8 - </span>Loss function and optimizer</a></div><div class="lev2"><a href="#Loss-function"><span class="toc-item-num">8.1 - </span>Loss function</a></div><div class="lev2"><a href="#Optimizer"><span class="toc-item-num">8.2 - </span>Optimizer</a></div><div class="lev1"><a href="#Experimenting-with-ConvNet-architectures"><span class="toc-item-num">9 - </span>Experimenting with ConvNet architectures</a></div><div class="lev2"><a href="#One-convolutional-layer"><span class="toc-item-num">9.1 - </span>One convolutional layer</a></div><div class="lev1"><a href="#Two-convolutional-layers"><span class="toc-item-num">10 - </span>Two convolutional layers</a></div><div class="lev2"><a href="#Results-for-the-microwave"><span class="toc-item-num">10.1 - </span>Results for the microwave</a></div>
This notebook is the report on the convolutional neural network implementation applied to energy disaggregation. All the codes are in the repository: https://github.com/tperol/neuralnilm. We did not transfer all the codes to this final repository in order to keep this clean. The data can also be found in https://github.com/tperol/neuralnilm. Some of the codes (mostly preprocessing of the data, although it was done for an other dataset) has been forked from Jack Kelly github Repository Neuralnilm (https://github.com/JackKelly/neuralnilm). We have implemented here all the neural network architectures tested as well as the preprocessing and the score metrics. The data preprocessing has been slightly edited from Jack Kelly's code to be fed to the Neural Network through a python generator that does the data augmentation on CPU while the GPU is training the network (see details later).
End of explanation
from __future__ import print_function
import nilmtk
from nilmtk.utils import print_dict
from nilmtk import DataSet
# import dataset
dataset = DataSet('./redd_data/redd.h5')
dataset.set_window("2011-04-01", "2011-05-01")
# look at house 2
BUILDING = 2
elec = dataset.buildings[BUILDING].elec
# plot mains
elec.mains().plot(kind = 'sum');
Explanation: Data preparation
We have power data for 6 houses from main meters that measure the aggregate power consumption of the house and from each appliance separately. For example here we show the aggregated data (sum of all appliances in the house) for house 2.
End of explanation
elec['fridge'].plot();
Explanation: One contribution to this aggregated data is the cycles of the fridge plotted below.
End of explanation
elec.submeters().appliances
Explanation: Of course there are many other appliances that contribute to the aggregated data observed above. The list of appliances in this house is shown here:
End of explanation
corr = elec.correlation_of_sum_of_submeters_with_mains()
print
print('------------------------------------------------------------------------------')
print('the correlation between the sum of the submeters and the main meter is:', corr)
print('------------------------------------------------------------------------------')
Explanation: The sum of all these appliances is highly correlated to the aggregated data.
End of explanation
dataset = DataSet('./redd_data/redd.h5')
dataset.set_window("2011-04-19", "2011-04-20")
dataset.buildings[2].elec.mains().plot(kind= 'sum');
plt.figure(figsize = (16,8))
dataset.buildings[2].elec.submeters().plot(kind = 'sum');
Explanation: We can visualize this looking at the main meters for one day and the sum of all the appliances for one day.
End of explanation
elec['fridge'].get_activations()[0].plot();
elec['microwave'].get_activations()[0].plot();
Explanation: The correlation is high because the 9 appliances listed above consume a significant proportion of the energy and represent a range of different power 'signatures'. Here is one signature from the fridge and one signature from the microwave.
End of explanation
dataset.buildings[2].elec.plot_when_on()
Explanation: Obviously the microwave signature is a peak of short duration while the fridge cycle is a bit longer. In this study we try to retrieve from the aggregated data the times an appliance switches on, the times it switches off and the average power consumption in each cycle. We call this the rectangle method. Essentially, we want to retrieve from the aggregated data this type of representation:
End of explanation
# here are the code that perform the preprocessing of the mini batches
import nilmtk
from nilmtk.utils import print_dict
from nilmtk import DataSet
from neuralnilm.data.loadactivations import load_nilmtk_activations
from neuralnilm.data.syntheticaggregatesource import SyntheticAggregateSource
from neuralnilm.data.realaggregatesource import RealAggregateSource
from neuralnilm.data.stridesource import StrideSource
from neuralnilm.data.datapipeline import DataPipeline
from neuralnilm.data.processing import DivideBy, IndependentlyCenter
# ------------
# create dictionary with train, unseen_house, unseen_appliance
# ------------
def select_windows(train_buildings, unseen_buildings):
windows = {fold: {} for fold in DATA_FOLD_NAMES}
def copy_window(fold, i):
windows[fold][i] = WINDOWS[fold][i]
for i in train_buildings:
copy_window('train', i)
copy_window('unseen_activations_of_seen_appliances', i)
for i in unseen_buildings:
copy_window('unseen_appliances', i)
return windows
def filter_activations(windows, activations):
new_activations = {
fold: {appliance: {} for appliance in APPLIANCES}
for fold in DATA_FOLD_NAMES}
for fold, appliances in activations.iteritems():
for appliance, buildings in appliances.iteritems():
required_building_ids = windows[fold].keys()
required_building_names = [
'UK-DALE_building_{}'.format(i) for i in required_building_ids]
for building_name in required_building_names:
try:
new_activations[fold][appliance][building_name] = (
activations[fold][appliance][building_name])
except KeyError:
pass
return activations
NILMTK_FILENAME = './redd_data/redd.h5'
SAMPLE_PERIOD = 6
STRIDE = None
APPLIANCES = ['fridge']
WINDOWS = {
'train': {
1: ("2011-04-19", "2011-05-21"),
2: ("2011-04-19", "2013-05-01"),
3: ("2011-04-19", "2013-05-26"),
6: ("2011-05-22", "2011-06-14"),
},
'unseen_activations_of_seen_appliances': {
1: ("2011-04-19", None),
2: ("2011-04-19", None),
3: ("2011-04-19", None),
6: ("2011-05-22", None),
},
'unseen_appliances': {
5: ("2011-04-19", None)
}
}
# get the dictionary of activations for each appliance
activations = load_nilmtk_activations(
appliances=APPLIANCES,
filename=NILMTK_FILENAME,
sample_period=SAMPLE_PERIOD,
windows=WINDOWS
)
# ------------
# get pipeline for the fridge example
# ------------
num_seq_per_batch = 16
target_appliance = 'fridge'
seq_length = 512
train_buildings = [1, 2, 3, 6]
unseen_buildings = [5]
DATA_FOLD_NAMES = (
'train', 'unseen_appliances', 'unseen_activations_of_seen_appliances')
filtered_windows = select_windows(train_buildings, unseen_buildings)
filtered_activations = filter_activations(filtered_windows, activations)
synthetic_agg_source = SyntheticAggregateSource(
activations=filtered_activations,
target_appliance=target_appliance,
seq_length=seq_length,
sample_period=SAMPLE_PERIOD
)
real_agg_source = RealAggregateSource(
activations=filtered_activations,
target_appliance=target_appliance,
seq_length=seq_length,
filename=NILMTK_FILENAME,
windows=filtered_windows,
sample_period=SAMPLE_PERIOD
)
# ------------
# needed to rescale the input aggregated data
# rescaling is done using the a first batch of num_seq_per_batch sequences
sample = real_agg_source.get_batch(num_seq_per_batch=1024).next()
sample = sample.before_processing
input_std = sample.input.flatten().std()
target_std = sample.target.flatten().std()
# ------------
pipeline = DataPipeline(
[synthetic_agg_source, real_agg_source],
num_seq_per_batch=num_seq_per_batch,
input_processing=[DivideBy(input_std), IndependentlyCenter()],
target_processing=[DivideBy(target_std)]
)
# ------------
# create the validation set
# ------------
num_test_seq = 101
X_valid = np.empty((num_test_seq*num_seq_per_batch, seq_length))
Y_valid = np.empty((num_test_seq*num_seq_per_batch, 3))
for i in range(num_test_seq):
(x_valid,y_valid) = pipeline.train_generator(fold = 'unseen_appliances', source_id = 1).next()
X_valid[i*num_seq_per_batch: (i+1)*num_seq_per_batch,:] = x_valid[:,:,0]
Y_valid[i*num_seq_per_batch: (i+1)*num_seq_per_batch,:] = y_valid
X_valid = np.reshape(X_valid, [X_valid.shape[0],X_valid.shape[1],1])
Explanation: Data pipeline
Selecting appliances
We train each neural network per appliance. This is different from the CO and FHMM methods.
Selecting time sequences
We down-sample the main meters and the submeters to 6 samples per second so that the aggregated and submeter sequences are properly aligned. We throw away any activation shorter than some threshold duration to avoid spurious spikes.
Selecting houses
We choose to train the algorithm on houses 1, 2, 3 and 6 and test on house 5.
Dealing with unbalanced data: selecting aggregated data windows
We first extract, using the NILMTK library, the appliance (here the fridge) activations in the time series. We concatenate the time series from houses 1, 2, 3 and 6 for the training set and will test on house 5. We feed our neural network algorithm (detailed later) balanced mini-batches of aggregated-data sequences in which the fridge is activated and sequences in which it is not. This is a way to deal with unbalanced data -- there are more sequences where the fridge is not activated than sequences where it is. Most of the data pipeline used is borrowed from https://github.com/JackKelly/neuralnilm.
Synthetic aggregated data
We use the method from Jack Kelly to create synthetic data. To create a single sequence of synthetic data, we start with two vectors of zeros: one vector will become the input to the net; the other will become the target. The length of each vector defines the ‘window width’ of data that the network sees. We go through five appliance classes and decide whether or not to add an activation of that class to the training sequence. There is a 50% chance that the target appliance will appear in the sequence and a 25% chance for each other ‘distractor’ appliance. For each selected appliance class, we randomly select an appliance activation and then randomly pick where to add that activation on the input vector. Distractor appliances can appear anywhere in the sequence (even if this means that only part of the activation will be included in the sequence). The target appliance activation must be completely contained within the sequence (unless it is too large to fit).
We ran neural networks with and without synthetic aggregated data and found that synthetic data acts as a regularizer: it improves the scores on the unseen house.
All the codes that perform the operation described in this section 2 are in https://github.com/tperol/neuralnilm.
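A toy sketch of the appliance-selection logic described above (the probabilities are as stated in the text; the function itself is schematic and not the library code):
import numpy as np
def pick_appliances(target, distractors, rng=np.random):
    chosen = []
    if rng.rand() < 0.5:            # 50% chance the target appliance appears
        chosen.append(target)
    for d in distractors:           # 25% chance for each distractor appliance
        if rng.rand() < 0.25:
            chosen.append(d)
    return chosen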
End of explanation
# needed to rescale the input aggregated data
# rescaling is done using the a first batch of num_seq_per_batch sequences
sample = real_agg_source.get_batch(num_seq_per_batch=1024).next()
sample = sample.before_processing
input_std = sample.input.flatten().std()
target_std = sample.target.flatten().std()
# ------------
Explanation: Standardisation of the input data (aggregated data)
A typical step in the data pipeline of a neural network is standardization of the data. For each sequence of 512 samples (= 85 seconds) we subtract the mean to center the sequence. Furthermore, every input sequence is divided by the standard deviation of a random sample in the training set. In this case we cannot divide each sequence by its own standard deviation because that would delete information about the scale of the signal.
This is done via:
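A minimal standalone sketch of the two processing steps used above, DivideBy and IndependentlyCenter (plain NumPy, not the library classes themselves):
import numpy as np
def standardize(seq, input_std):
    seq = seq / input_std       # shared scale, estimated once from a large random sample
    return seq - seq.mean()     # each sequence is centered independently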
End of explanation
def scores(Y_pred, Y_test, activation_threshold = 0.1 ,plot_results= True, print_results = False):
a function that computes the classification scores with various metrics
return: dictionary with the various scores
# post process the data
np.putmask(Y_pred[:,0], Y_pred[:,0] <=0, 0)
np.putmask(Y_pred[:,1], Y_pred[:,1] >=1, 1)
np.putmask(Y_pred[:,0],Y_pred[:,1] < Y_pred[:,0],0)
np.putmask(Y_pred[:,1],Y_pred[:,1] < Y_pred[:,0],0)
np.putmask(Y_pred[:,1],Y_pred[:,2] < activation_threshold,0)
np.putmask(Y_pred[:,0],Y_pred[:,2] < activation_threshold,0)
# find negative in prediction
pred_negatives = (Y_pred[:,0] ==0) &(Y_pred[:,1] ==0)
pred_positives = ~pred_negatives
obs_negatives = (Y_test[:,0] ==0) &(Y_test[:,1] ==0)
obs_positives = ~obs_negatives
TP = obs_positives[pred_positives].sum()
FN = obs_positives[pred_negatives].sum()
TN = obs_negatives[pred_negatives].sum()
FP = obs_negatives[pred_positives].sum()
recall = TP / float(TP + FN)
precision = TP / float(TP+ FP)
f1 = 2* precision*recall / (precision + recall)
accuracy = (TP + TN)/ float(obs_negatives.sum() +obs_positives.sum() )
if print_results:
print('number of Predicted negatives:',pred_negatives.sum() )
print('number of Predicted positives:',pred_positives.sum() )
print('number of Observed negatives:', obs_negatives.sum() )
print('number of Observed positives:', obs_positives.sum() )
print('f1:', f1)
print('precision :' ,precision)
print('recall : ', recall)
print('accuracy:', accuracy)
results = {
'accuracy': accuracy,
'f1_score': f1,
'precision': precision,
'recall_score': recall}
if plot_results:
pd_results = pd.DataFrame.from_dict(results, orient = 'index')
pd_results = pd_results.transpose()
sns.barplot(data = pd_results)
return results
Explanation: Output data (start time, end time and average power)
The output of the neural network consists of 3 neurons: start time, end time and average power. We rescale time to the interval [0, 1]. Therefore, if the fridge starts in the middle of the input sequence, the output of the first neuron is 0.5; if it stops after the end of the input window, the output of the second neuron is set to 1. The third neuron is the average power during the activation period, which is of course set to 0 when the appliance is not activated during the input sequence. We also post-process the predictions by setting any start time lower than 0 to 0 and any end time higher than 1 to 1. We use an average-power threshold of 0.1 to decide whether the appliance was active (under the threshold the appliance is considered off, above it is considered on).
Here we show as an example the input data and the ouput calculated by a trained network. We compare this with the real appliance activation. <img src='./figures/output_example.png'>
As we can see, the network does a very good job of detecting the activation of the fridge. The red line is the aggregated data. In the flat region it would be impossible to detect the activation of the fridge by eye; we would tend to place an activation in the step region instead. The network nevertheless predicts the fridge activation very accurately!
Scores - performance evaluation of the neural network
Because of the form of this output we use classification metrics. When the start time and the end time are both 0 we call the prediction a negative; we also call it a negative if the average power is below the threshold. Otherwise it is a positive (the appliance is activated). We write TP for true positives, TN for true negatives, FP for false positives and FN for false negatives. The metrics/scores used in this study are
$$ \text{recall} = \frac{TP}{TP + FN} $$
$$ \text{precision} = \frac{TP}{TP + FP} $$
$$ F_1 = 2 \cdot \frac{\text{precision} \cdot \text{recall}}{\text{precision} + \text{recall}} $$
$$ \text{accuracy} = \frac{TP + TN}{P + N} $$
where P is the number of positives and N the number of negatives.
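As a purely illustrative worked example of these formulas (the counts are made up, not taken from our experiments): with TP = 80, FN = 20, FP = 10 and TN = 90 we get recall = 80/100 = 0.80, precision = 80/90 ≈ 0.89, F1 ≈ 0.84 and accuracy = (80 + 90)/200 = 0.85.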
The code that computes these scores can be found in neuralnilm/scores and is partially reproduced here.
End of explanation
def train_generator(self, fold='train', enable_all_appliances=False,
source_id=None, reset_iterator=False,
validation=False ):
class method that yield generator for training the neural network
parameters:
----------
fold: 'train' for training or 'unseen_appliance' for testing/validation
source_id: if None we use both real and synthetic data with 50:50 ratio
if 1 , only use the real data
if 0, only use the synthetic data
output:
---------
generator (X_train,Y_train): a tuple with X_train being the mini-batch for training
and Y_train being the mini-batch of labels
while 1:
batch_iter = self.get_batch(fold, enable_all_appliances, source_id, reset_iterator,validation)
X_train = batch_iter.input
input_dim = X_train.shape[1]
Y_train = self._get_output_neurons(batch_iter)
yield (np.reshape(X_train, [self.num_seq_per_batch, input_dim, 1]), Y_train.astype(np.float32))
def _get_output_neurons(self, new_batch):
class method that generates the indices needed for the generator
Once the network has seen all the sample, the algorithm feed again the mini-batch
by reinitialization of the generator (there is no StopIteration error raised)
Suffling of the mini-batch is the default strategy implemented
batch_size = new_batch.target.shape[0]
neural_net_output = np.empty((batch_size, 3))
for b in range(batch_size):
seq = new_batch.target[b]
# case 1 and 2: if the signal start at 0
if seq[0] > 0:
start = 0
stop_array = np.where(seq > 0)[0]
# case 2: signal stops after 1
# set stop to the last element
if len(stop_array) == 0:
stop = seq[-1]
# case 1: signal stops before 1
else:
stop = stop_array[-1]
# calculate avg power
avg_power = np.mean(seq[start:stop + 1])
# case 3: signal starts after 0 and before 1
else:
start_array = np.where(seq > 0)[0]
if len(start_array) == 0:
# case 5: there is no signal in the window
start = 0
stop = 0
avg_power = 0
else:
start = start_array[0]
# find stop
stop_array = np.where(seq > 0)[0]
# case 4: signal stops after 1
# set to the last element
if len(stop_array) == 0:
stop = seq[-1]
else:
stop = stop_array[-1]
avg_power = np.mean(seq[start:stop + 1])
start = start / float(new_batch.target.shape[1] - 1)
stop = stop / float(new_batch.target.shape[1] - 1)
if stop < start:
raise ValueError("start must be before stop in sequence {}".format(b))
neural_net_output[b, :] = np.array([start, stop, avg_power])
return neural_net_output
Explanation: Implementation strategy for real time data augmentation
While the neural network trains on an NVIDIA GeForce GT 750M (GPU), we keep the CPU busy doing the data augmentation in real time (loading aggregated data, creating the synthetic data, and preprocessing the mini-batches to be fed to the neural network). For this we create a Python generator that maintains a queue of 50 mini-batches and feeds them successively to the GPU for training.
The pipeline class can be found in neuralnilm.data.datapipeline at https://github.com/tperol/neuralnilm and is partially reproduced here. We do the same to generate the validation and test sets.
End of explanation
w_ahead = w + mu * v
# evaluate dw_ahead
v = mu * v - learning_rate * dw_ahead
w += v
Explanation: Network architecture
We use a convolutional neural network (ConvNet) to take advantage of translation invariance: we want the ConvNet to recognize the target appliance activation anywhere in the sequence. For this project we tried multiple architectures, reported later on, which all have a first convolutional layer with filter size 3 and stride 1. We experimented with both the filter size and the number of output filters in the first layer and found that 16 filters is a reasonable number -- increasing the number of filters in the first layer did not significantly improve the scores.
The best neural network we found consists of:
Input layer: one channel and length of 512 samples
1D convolutional layer (filter size = 3, stride = 1 , number of filters = 16, activation function = relu, border mode = valid, weight initialization = normal distribution)
Fully connected layer (N = 1024, activation function = relu, weight initialization = normal distribution)
Fully connected layer (N = 512, activation function = relu, weight initialization = normal distribution)
Fully connected layer (N= 3, activation function = relu)
The output layer has 3 neurons with a relu activation function, since the output cannot be negative. We tried other networks, reported later in this notebook; however, this is the layout of the best one we found.
Loss function and optimizer
Loss function
Since the output neurons span the real axis, we use an L2 norm for the loss function: (predicted start time - true start time)$^2$ + (predicted end time - true end time)$^2$ + (predicted average power - true average power)$^2$. The total loss is the sum of this quantity over all the samples in a mini-batch.
Optimizer
We tried various optimizers to find the best one. First, we used classical Stochastic Gradient Descent (SGD) to update the weights: we feed one randomly chosen mini-batch to the neural network and then update each weight as
$$w_j = w_j - \eta \frac{\partial L}{\partial w_j} $$
where $L$ is the loss function evaluated on the given mini-batch. The gradient of the loss function is calculated using the backpropagation algorithm (not detailed here for simplicity). At each epoch we decrease the learning rate $\eta$ to allow the algorithm to converge towards a local minimum.
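In the same pseudo-code style used for the other optimizers below, one plain SGD step with a per-epoch learning-rate decay can be sketched as follows (the multiplicative decay schedule is an assumption for illustration):
# one SGD step; dw is the gradient of the loss on the current mini-batch
w += - learning_rate * dw
# after each epoch, shrink the learning rate
learning_rate *= decay_factor  # e.g. 0.95, an illustrative value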
We also tried a variation of SGD using the momentum method. This method has a physical interpretation in which $\mu$ plays the role of a friction coefficient. In this case the weights are updated using
$$w_j = w_j + \mu v - \eta \frac{\partial L}{\partial w_j} $$
where $v$ is the velocity. Another implementation we tested is Nesterov momentum, in which, at a given position in the weight landscape, we look one step ahead along the momentum direction and evaluate the gradient there to compute the new value of the weight. A pseudo-code for this would be
End of explanation
m = beta1*m + (1-beta1)*dw
v = beta2*v + (1-beta2)*(dw**2)
w += - learning_rate * m / (np.sqrt(v) + eps)
Explanation: We found by experimenting that the best optimizer is Adam (http://arxiv.org/pdf/1412.6980v8.pdf). A pseudo code for Adam is
End of explanation
# import Keras related libraries
from keras.layers import Input, Dense, Flatten, MaxPooling1D, AveragePooling1D, Convolution1D
from keras.models import Model
import keras.callbacks
from keras.callbacks import ModelCheckpoint
import time
from keras.models import model_from_json
import pickle
# ------------
exp_number = 13
output_architecture = './tmpdata/convnet_architecture_exp' + str(exp_number) + '.json'
best_weights_during_run = './tmpdata/weights_exp' + str(exp_number) + '.h5'
final_weights = './tmpdata/weights_exp' + str(exp_number) + '_final.h5'
loss_history = './tmpdata/history_exp' + str(exp_number) + '.pickle'
# ------------
# ------------
# a class used to record the training and validation loss
# at the end of each epoch
class LossHistory(keras.callbacks.Callback):
def on_train_begin(self, logs={}):
self.train_losses = []
self.valid_losses = []
def on_epoch_end(self, epoch, logs = {}):
self.train_losses.append(logs.get('loss'))
self.valid_losses.append(logs.get('val_loss'))
# ------------
# input sequence
input_seq = Input(shape = (seq_length, 1))
# first convolutional layer
conv1_layer = Convolution1D(nb_filter = 16, filter_length = 3, border_mode='valid',
init = 'normal', activation = 'relu')
conv1 = conv1_layer(input_seq)
# flatten the weights
flat = Flatten()(conv1)
# first dense layer
dense1 = Dense(1024, activation = 'relu')(flat)
# second dense layer
dense2 = Dense(512, activation = 'relu', init= 'normal')(dense1)
# output layer
predictions = Dense(3, activation = 'linear')(dense2)
# create the model
model = Model(input=input_seq, output=predictions)
# compile the model -- define the loss and the optimizer
model.compile(loss='mean_squared_error', optimizer='Adam')
# record the loss history
history = LossHistory()
# save the weights only when the validation loss decreases
checkpointer = ModelCheckpoint(filepath=best_weights_during_run, save_best_only=True, verbose =1 )
# fit the network using the generator of mini-batches.
model.fit_generator(pipeline.train_generator(fold = 'train'), \
samples_per_epoch = 30000, \
nb_epoch = 20, verbose = 1, callbacks=[history, checkpointer],
validation_data = (x_valid,y_valid), max_q_size = 50)
losses_dic = {'train_loss': history.train_losses, 'valid_loss':history.valid_losses}
# save history
losses_dic = {'train_loss': history.train_losses, 'valid_loss':history.valid_losses}
with open(loss_history, 'wb') as handle:
pickle.dump(losses_dic, handle)
print('\n saving the architecture of the model \n')
json_string = model.to_json()
open(output_architecture, 'w').write(json_string)
print('\n saving the final weights ... \n')
model.save_weights(final_weights, overwrite = True)
print('done saving the weights')
print('\n saving the training and validation losses')
print('This was the model trained')
print(model.summary())
Explanation: where dw is the gradient of the loss function with respect to the considered weight and w is that weight. The hyperparameters for this optimizer are beta1, beta2 and eps. We set beta_1=0.9, beta_2=0.999, epsilon=1e-08 and the learning rate to 1e-3.
Experimenting with ConvNet architectures
For the implementation of the ConvNet we use Keras (http://keras.io), a library built on top of Theano and TensorFlow (in this case we use Theano to take advantage of the GPU, since GPU training was not yet available on Mac OS with TensorFlow). Here we show the code, run on the GPU with Keras, that trains our best ConvNet.
End of explanation
from keras.models import model_from_json
# load experiments number
exp_number = 13
# load the model architecture
output_architecture = './tmpdata/convnet_architecture_exp' + str(exp_number) + '.json'
# load the weights for the lowest validation loss during training
best_weights_during_run = './tmpdata/weights_exp' + str(exp_number) + '.h5'
# load the final weights at the end of the 20 epochs
final_weights = './tmpdata/weights_exp' + str(exp_number) + '_final.h5'
model = model_from_json(open(output_architecture).read())
# load intermediate or final weights
model.load_weights(best_weights_during_run)
model.compile(loss='mean_squared_error', optimizer='sgd')
# print the summary of the architecture
model.summary()
# load the loss summary (training and validation losses)
import pickle
losses = pickle.load( open('./tmpdata/history_exp' + str(exp_number) + '.pickle' , 'rb'))
# load the test set
test_set = pickle.load( open('./tmpdata/TestSet.pickle', 'rb'))
X_test = test_set['X_test']
Y_test = test_set["Y_test"]
# Here we predict the output from the neural network and show the scores
import neuralnilm.scores
Y_pred = model.predict(X_test)
scores(Y_pred, Y_test)
Explanation: One convolutional layer
Here we present the results from the ConvNet detailed in the previous cell. We first load the ConvNet architecture, the weights and the history of the training and validation losses (the network has been trained previously using GPU).
End of explanation
from neuralnilm.scores import plot_loss
plot_loss(losses)
Explanation: This is a very good F1 score, much better than with any other method tested before. Here we show the evolution of the training loss and the validation loss.
End of explanation
exp_number = 14
output_architecture = './tmpdata/convnet_architecture_exp' + str(exp_number) + '.json'
best_weights_during_run = './tmpdata/weights_exp' + str(exp_number) + '.h5'
final_weights = './tmpdata/weights_exp' + str(exp_number) + '_final.h5'
model = model_from_json(open(output_architecture).read())
# load intermediate or final weights
model.load_weights(best_weights_during_run)
model.compile(loss='mean_squared_error', optimizer='sgd')
model.summary()
# load the loss summary
import pickle
losses = pickle.load( open('./tmpdata/history_exp' + str(exp_number) + '.pickle' , 'rb'))
# load the test set
test_set = pickle.load( open('./tmpdata/TestSet.pickle', 'rb'))
X_test = test_set['X_test']
Y_test = test_set["Y_test"]
Explanation: The training and validation losses decrease simultaneously during training, so there is no overfitting. We stopped after 20 epochs even though the model was still improving; 20 epochs on the GPU took about 1.5 hours.
Two convolutional layers
Since we are not overfitting, we did not add any pooling layers or dropout to the previous network. However, we experimented with another network that has an additional convolutional layer and trained this model for 30 epochs. The model is the following:
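The architecture itself is only loaded back from a saved JSON file in the code above. As a rough sketch of how such a model would be built with the same Keras 1 functional API used earlier (the second layer's filter length and the dense-layer widths are assumptions based on the description, not the exact saved architecture):
input_seq = Input(shape=(seq_length, 1))
conv1 = Convolution1D(nb_filter=16, filter_length=3, border_mode='valid',
                      init='normal', activation='relu')(input_seq)
conv2 = Convolution1D(nb_filter=16, filter_length=3, border_mode='valid',
                      init='normal', activation='relu')(conv1)
flat = Flatten()(conv2)
dense1 = Dense(1024, activation='relu')(flat)
dense2 = Dense(512, activation='relu', init='normal')(dense1)
predictions = Dense(3, activation='linear')(dense2)
model = Model(input=input_seq, output=predictions)
model.compile(loss='mean_squared_error', optimizer='Adam')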
End of explanation
import neuralnilm.scores
Y_pred = model.predict(X_test)
scores(Y_pred, Y_test)
from neuralnilm.scores import plot_loss
plot_loss(losses)
Explanation: There are 2 convolutional layers with 16 filters each, followed by 2 dense layers and the output layer. In this case the results are fairly similar to those of the network presented above.
End of explanation
from keras.models import model_from_json
exp_number = 13
output_architecture = './tmpdata/convnet_architecture_exp' + str(exp_number) + '.json'
best_weights_during_run = './tmpdata/weights_exp' + str(exp_number) + '.h5'
final_weights = './tmpdata/weights_exp' + str(exp_number) + '_final.h5'
model = model_from_json(open(output_architecture).read())
# load intermediate or final weights
model.load_weights(best_weights_during_run)
model.compile(loss='mean_squared_error', optimizer='sgd')
model.summary()
# load the loss summary
import pickle
losses = pickle.load( open('./tmpdata/history_exp' + str(exp_number) + '.pickle' , 'rb'))
# load the test set
test_set = pickle.load( open('./tmpdata/TestSet_microwave.pickle', 'rb'))
X_test_microwave = test_set['X_test']
Y_test_microwave = test_set["Y_test"]
from neuralnilm.scores import scores
Y_pred_microwave = model.predict(X_test_microwave)
scores(Y_pred_microwave, Y_test_microwave)
Explanation: Results for the microwave
Here we run the ConvNet with one convolutional layer on the microwave. We train using houses 1 and 2 and test on house 5.
End of explanation
<END_TASK> |
15,972 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Could not find kernel error?
If you ever see a "could not find a kernel for this notebook" error message, the notebook will offer you a pull-down menu to pick a fitting kernel.
Remember, kernels are the notebook's way to find the correct interpreter for the code you write into notebook cells. These days this can be R, Julia, Python and several other languages (find the list of available kernels here).
Review on your own time
Step1: Q. What will be printed?
Q. What went wrong?
You can force variables to be certain types
Step2: Q. What will this produce?
Step3: Review continued
Step4: It's using the exact same library twice, you just told Python 2 different ways to get to it.
And there's even a way to prove it
Step5: As you can see it's the same memory address (but this number is not necessarily the same on your computer), meaning the Python interpreter uses the exact same object twice, you just gave it 2 different names.
Another syntax is available to import modules
Step6: or specific functions within a module
Step7: Today
Step8: Without a while loop
Step9: Recall the Gravitational Force Equation
$$F(r) = G \frac{m_1 m_2}{r^2}$$
Step10: Q. What will this loop do ("trace" it)?
The increment could have been done in shorthand
Step11: Q. What about this one? Can you predict any problems it may cause?
Infinite loops
Step12: Q. How does this work?
INTERLUDE
Step13: Q. What is the value of this?
Step14: Q. What is the value of this?
See how readable Python is?
Boolean expressions can be combined with "and", "or" and "not" to form compound conditional expressions.
Step15: Q. How about this?
Step16: Back to while loops
Example - User Input
Step17: Q. What happens if you enter a letter instead of a number?
Lists (Section 2.2)
Step18: Q. What will this print?
Step19: Lesson learned
Step20: Q. What will the next line produce?
Step21: List operations
Step22: Q. What will this produce?
Step23: Q. What will this give us?
Creating lists with while loops
Step24: Q. What is massRatio? | <ASSISTANT_TASK:>
Python Code:
tempF = 212.0
tempC = (5 / 9) * (tempF - 32.0)
tempC
Explanation: Could not find kernel error?
If you ever see a "could not find a kernel for this notebook" error message, the notebook will offer you a pull-down menu to pick a fitting kernel.
Remember, kernels are the notebook's way to find the correct interpreter for the code you write into notebook cells. These days this can be R, Julia, Python and several other languages (find the list of available kernels here).
Review on your own time: A few "last" things about types
$T_C = \frac{5}{9} \left(T_F - 32 \right)$
End of explanation
x = 45
type(x) # Gives (returns) the type of variable
x = float(x)
print(type(x))
x
Explanation: Q. What will be printed?
Q. What went wrong?
You can force variables to be certain types
End of explanation
x = 26.9
int(x)
Explanation: Q. What will this produce?
End of explanation
from math import *
import math
print(sqrt(2))
math.sqrt(2)
Explanation: Review continued: One "last" note on modules
End of explanation
id(sqrt)
id(math.sqrt)
Explanation: It's using the exact same library twice, you just told Python 2 different ways to get to it.
And there's even a way to prove it: With the id() function:
End of explanation
# Import math module and give it a new name
import math as m # Note the use of "as", a reserved word
m.sqrt(2)
Explanation: As you can see it's the same memory address (but this number is not necessarily the same on your computer), meaning the Python interpreter uses the exact same object twice, you just gave it 2 different names.
Another syntax is available to import modules:
End of explanation
# Import sqrt from math and give it a new name
from math import sqrt as sq
from math import pi as PIE
sq(2)
Explanation: or specific functions within a module:
End of explanation
x = 0 # Initialize the variable x to 0
while(x != 3): # While (as long as) x is not equal to 3
print("The value of x is", x) # Print this to the screen
x += 1 # Increment x by 1 (add 1 to x)
# REPEAT!!!
print(x)
# What is the value of x?
Explanation: Today: Loops & Lists
The point of loops is to compactly code repetitive tasks.
For example, computing the gravitational force for multiple planetary masses.
Loops are an essential programming tool (this is why we program!).
Python supports two types of loops:
while loops
for loops
While Loops (Section 2.1.2 in the book)
Basic While Loop
<Talk about how Python knows what's in the loop>
End of explanation
x = 0 # Initialize the variable x to 0
print("The value of x is", x) # Print this to the screen
x += 1 # Increment x by 1 (add 1 to x)
print("The value of x is", x) # Print this to the screen
x += 1 # Increment x by 1 (add 1 to x)
print("The value of x is", x) # Print this to the screen
x += 1 # Increment x by 1 (add 1 to x)
Explanation: Without a while loop
End of explanation
print('# Table of Gravitational Forces for Multiple Planet Masses\n')
# Initialize variables - use meters and kilograms for units
G = 6.67e-11 # Gravitational constant
mass_earth = 5.97e24 # Earth mass
mass_person = 70 # Person mass
radius_earth = 6.37e6 # Earth radius
# Begin calculation
mass1 = mass_earth
# Print a header
print('# mass1/mass_earth Force')
# The loop ends when conditional mass1 <= (10.0 * massEarth) is no longer true
while(mass1 <= (10.0 * mass_earth)): # Note the colon!
force = G * mass1 * mass_person / radius_earth**2 # All lines in the loop must be indented by
# the same amount (iPython does it automatically)
# print(str(mass1 / mass_earth) + " " + str(force))
print("{mass_ratio}\t{force:7.2f}".format(mass_ratio=mass1 / mass_earth,
force=force))
mass1 = mass1 + mass_earth # Increment by Earth's mass
# No indent! This line is executed after the loop is done
print('# Done')
Explanation: Recall the Gravitational Force Equation
$$F(r) = G \frac{m_1 m_2}{r^2}$$
End of explanation
# Note that I have to reset mass1 here!!
mass1 = mass_earth
print('# mass1/mass_earth Force')
while(mass1 <= (10.0 * mass_earth)):
force = G * mass1 * mass_person / radius_earth**2
print("{:18.1f} {:7.2f}".format(mass1 / mass_earth, force))
# mass1 = mass1 + mass_earth
mass1 += mass_earth # Shorthand version of the line above.
'# Done'
Explanation: Q. What will this loop do ("trace" it)?
The increment could have been done in shorthand
End of explanation
# How to prevent an infinite loop
maxCount = 10 # A number that is more than your loop should ever do
count = 0 # The current number your loop is on
# Adding "and < maxCount" to the end of your conditional prevents infinite loops
while(True and count < maxCount):
print("Loop count: " + str(count))
count += 1 # Increment your current loop count
Explanation: Q. What about this one? Can you predict any problems it may cause?
Infinite loops
End of explanation
5 <= 10
Explanation: Q. How does this work?
INTERLUDE: Boolean (logic) expressions (Section 2.1.3)
Boolean expressions are conditional statements. There are only
two possible values: True or False
I've capitalized True and False because these are reserved words in Python.
Q. What is the value of this?
End of explanation
5 >= 10
not 5 >= 10
Explanation: Q. What is the value of this?
End of explanation
5 <= 10 and 5 >= 10
Explanation: Q. What is the value of this?
See how readable Python is?
Boolean expressions can be combined with "and", "or" and "not" to form compound conditional expressions.
End of explanation
5 <= 10 or 5 >= 10
Explanation: Q. How about this?
End of explanation
import random
minNumber = 1
maxNumber = 10
# Get a random number between 1 and 10
randomNumber = random.randint(minNumber, maxNumber)
userGuess = -1
while(userGuess != randomNumber):
userPrompt = "Guess a number between " + str(minNumber) + " and " + str(maxNumber) + ": "
userGuess = input(userPrompt) # Prompt the user
userGuess = int(userGuess)
print("You have guessed the correct number! " + str(userGuess))
Explanation: Back to while loops
Example - User Input
End of explanation
massRatio = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0]
massRatio
massRatio[3]
Explanation: Q. What happens if you enter a letter instead of a number?
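(If a letter is entered, int() raises a ValueError and the program stops with a traceback. One way to guard against this -- shown purely as an illustration, reusing the randomNumber and userPrompt variables from the guessing-game cell above -- is to catch the exception:)
userGuess = None
while userGuess != randomNumber:
    response = input(userPrompt)       # always a string in Python 3
    try:
        userGuess = int(response)      # raises ValueError for letters
    except ValueError:
        print("That was not a whole number -- try again!")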
Lists (Section 2.2)
End of explanation
type(massRatio[3])
Explanation: Q. What will this print?
End of explanation
massRatio.append(11.0)
massRatio
# This inserts 4.5 into index 4 of the list:
massRatio.insert(4, 4.5)
massRatio
del massRatio[4]
Explanation: Lesson learned: Python is zero-index based
Modifying lists
End of explanation
massRatio
Explanation: Q. What will the next line produce?
End of explanation
# We can find out its length with len(object)
len(massRatio)
# Python uses [] to access elements and () to perform a function on an object.
massRatio = massRatio + [12.0, 13.0, 14.0]
massRatio
massRatio.extend([15.0, 16.0, 17.0])
print("Extend", massRatio)
massRatio.append([18.0, 19.0, 20.0])
print("Append", massRatio)
print(massRatio[17][1])
Explanation: List operations
End of explanation
massRatio.index(12.0)
# And, this fails
massRatio.index(20.0)
# We can check if there is an element in a list. The result of the check
# is boolean: True or False.
14.0 in massRatio
99.0 in massRatio
massRatio
# Negative indices start counting from the right (the end) of a list:
massRatio[-4]
Explanation: Q. What will this produce?
End of explanation
# Initializations first
massRatio = [] # Creates an empty list
massRatioValue = 1.0 # For the conditional
massRatioMax = 5.0 # Also for the conditional
userInput = "BIG NOPE"
# And the while loop
while(userInput != "N" and massRatioValue <= massRatioMax): # Remember the colon!
# Remember to indent!
massRatio.append(massRatioValue)
massRatioValue += 1.0
userInput = input("Add another mass ratio value? ")
userInput = userInput.upper()
print("Finished creating the list massRatio!")
Explanation: Q. What will this give us?
Creating lists with while loops
End of explanation
massRatio
Explanation: Q. What is massRatio?
End of explanation
<END_TASK> |
15,973 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Transform EEG data using current source density (CSD)
This script shows an example of how to use CSD [1] [2] [3]_.
CSD takes the spatial Laplacian of the sensor signal (derivative in both
x and y). It does what a planar gradiometer does in MEG. Computing these
spatial derivatives reduces point spread. CSD transformed data have a sharper
or more distinct topography, reducing the negative impact of volume conduction.
Step1: Load sample subject data
Step2: Plot the raw data and CSD-transformed raw data
Step3: Also look at the power spectral densities
Step4: CSD can also be computed on Evoked (averaged) data.
Here we epoch and average the data so we can demonstrate that.
Step5: First let's look at how CSD affects scalp topography
Step6: CSD has parameters stiffness and lambda2 affecting smoothing and
spline flexibility, respectively. Let's see how they affect the solution | <ASSISTANT_TASK:>
Python Code:
# Authors: Alex Rockhill <aprockhill206@gmail.com>
#
# License: BSD (3-clause)
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
Explanation: Transform EEG data using current source density (CSD)
This script shows an example of how to use CSD [1] [2] [3]_.
CSD takes the spatial Laplacian of the sensor signal (derivative in both
x and y). It does what a planar gradiometer does in MEG. Computing these
spatial derivatives reduces point spread. CSD transformed data have a sharper
or more distinct topography, reducing the negative impact of volume conduction.
End of explanation
raw = mne.io.read_raw_fif(data_path + '/MEG/sample/sample_audvis_raw.fif',
preload=True)
events = mne.find_events(raw)
raw = raw.pick_types(meg=False, eeg=True, eog=True, ecg=True, stim=False,
exclude=raw.info['bads'])
raw.set_eeg_reference(projection=True).apply_proj()
Explanation: Load sample subject data
End of explanation
raw_csd = mne.preprocessing.compute_current_source_density(raw)
raw.plot()
raw_csd.plot()
Explanation: Plot the raw data and CSD-transformed raw data:
End of explanation
raw.plot_psd()
raw_csd.plot_psd()
Explanation: Also look at the power spectral densities:
End of explanation
event_id = {'auditory/left': 1, 'auditory/right': 2, 'visual/left': 3,
'visual/right': 4, 'smiley': 5, 'button': 32}
epochs = mne.Epochs(raw, events, event_id=event_id, tmin=-0.2, tmax=.5,
preload=True)
evoked = epochs['auditory'].average()
Explanation: CSD can also be computed on Evoked (averaged) data.
Here we epoch and average the data so we can demonstrate that.
End of explanation
times = np.array([-0.1, 0., 0.05, 0.1, 0.15])
evoked_csd = mne.preprocessing.compute_current_source_density(evoked)
evoked.plot_joint(title='Average Reference', show=False)
evoked_csd.plot_joint(title='Current Source Density')
Explanation: First let's look at how CSD affects scalp topography:
End of explanation
fig, ax = plt.subplots(4, 4)
fig.subplots_adjust(hspace=0.5)
fig.set_size_inches(10, 10)
for i, lambda2 in enumerate([0, 1e-7, 1e-5, 1e-3]):
for j, m in enumerate([5, 4, 3, 2]):
this_evoked_csd = mne.preprocessing.compute_current_source_density(
evoked, stiffness=m, lambda2=lambda2)
this_evoked_csd.plot_topomap(
0.1, axes=ax[i, j], outlines='skirt', contours=4, time_unit='s',
colorbar=False, show=False)
ax[i, j].set_title('stiffness=%i\nλ²=%s' % (m, lambda2))
Explanation: CSD has parameters stiffness and lambda2 affecting smoothing and
spline flexibility, respectively. Let's see how they affect the solution:
End of explanation
<END_TASK> |
15,974 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Unsupervised learning
Step1: First, we start with some exploratory clustering, visualizing the clustering dendrogram using SciPy's linkage and dendrogram functions
Step2: Next, let's use the AgglomerativeClustering estimator from scikit-learn and divide the dataset into 3 clusters. Can you guess which 3 clusters from the dendrogram it will reproduce?
Step3: Density-based Clustering - DBSCAN
Another useful approach to clustering is Density-based Spatial Clustering of Applications with Noise (DBSCAN). In essence, we can think of DBSCAN as an algorithm that divides the dataset into subgroups based on dense regions of points.
In DBSCAN, we distinguish between 3 different "points"
Step4: Exercise
<div class="alert alert-success">
<b>EXERCISE</b> | <ASSISTANT_TASK:>
Python Code:
from sklearn.datasets import load_iris
iris = load_iris()
X = iris.data[:, [2, 3]]
y = iris.target
n_samples, n_features = X.shape
plt.scatter(X[:, 0], X[:, 1], c=y);
Explanation: Unsupervised learning: Hierarchical and density-based clustering algorithms
In a previous notebook, "08 Unsupervised Learning - Clustering.ipynb", we introduced one of the essential and widely used clustering algorithms, K-means. One of the advantages of K-means is that it is extremely easy to implement, and it is also computationally very efficient compared to other clustering algorithms. However, we've seen that one of the weaknesses of K-Means is that it only works well if the data can be grouped into a globular or spherical shape. Also, we have to assign the number of clusters, k, a priori -- this can be a problem if we have no prior knowledge about how many clusters we expect to find.
In this notebook, we will take a look at 2 alternative approaches to clustering, hierarchical clustering and density-based clustering.
Hierarchical Clustering
One nice feature of hierachical clustering is that we can visualize the results as a dendrogram, a hierachical tree. Using the visualization, we can then decide how "deep" we want to cluster the dataset by setting a "depth" threshold. Or in other words, we don't need to make a decision about the number of clusters upfront.
Agglomerative and divisive hierarchical clustering
Furthermore, we can distinguish between 2 main approaches to hierarchical clustering: Divisive clustering and agglomerative clustering. In agglomerative clustering, we start with a single sample from our dataset and iteratively merge it with other samples to form clusters -- we can see it as a bottom-up approach for building the clustering dendrogram.
In divisive clustering, however, we start with the whole dataset as one cluster, and we iteratively split it into smaller subclusters -- a top-down approach.
In this notebook, we will use agglomerative clustering.
Single and complete linkage
Now, the next question is how we measure the similarity between samples. One approach is the familiar Euclidean distance metric that we already used via the K-Means algorithm. As a refresher, the distance between 2 m-dimensional vectors $\mathbf{p}$ and $\mathbf{q}$ can be computed as:
\begin{align} \mathrm{d}(\mathbf{q},\mathbf{p}) & = \sqrt{(q_1-p_1)^2 + (q_2-p_2)^2 + \cdots + (q_m-p_m)^2} \[8pt]
& = \sqrt{\sum_{j=1}^m (q_j-p_j)^2}.\end{align}
However, that's the distance between 2 samples. Now, how do we compute the similarity between subclusters of samples in order to decide which clusters to merge when constructing the dendrogram? I.e., our goal is to iteratively merge the most similar pairs of clusters until only one big cluster remains. There are many different approaches to this, for example single and complete linkage.
In single linkage, we take the pair of the most similar samples (based on the Euclidean distance, for example) in each cluster, and merge the two clusters which have the most similar 2 members into one new, bigger cluster.
In complete linkage, we compare the pairs of the two most dissimilar members of each cluster with each other, and we merge the 2 clusters where the distance between its 2 most dissimilar members is smallest.
To see the agglomerative, hierarchical clustering approach in action, let us load the familiar Iris dataset -- pretending we don't know the true class labels and want to find out how many different follow species it consists of:
End of explanation
from scipy.cluster.hierarchy import linkage
from scipy.cluster.hierarchy import dendrogram
clusters = linkage(X,
metric='euclidean',
method='complete')
dendr = dendrogram(clusters)
plt.ylabel('Euclidean Distance');
Explanation: First, we start with some exploratory clustering, visualizing the clustering dendrogram using SciPy's linkage and dendrogram functions:
End of explanation
from sklearn.cluster import AgglomerativeClustering
ac = AgglomerativeClustering(n_clusters=3,
affinity='euclidean',
linkage='complete')
prediction = ac.fit_predict(X)
print('Cluster labels: %s\n' % prediction)
plt.scatter(X[:, 0], X[:, 1], c=prediction);
Explanation: Next, let's use the AgglomerativeClustering estimator from scikit-learn and divide the dataset into 3 clusters. Can you guess which 3 clusters from the dendrogram it will reproduce?
End of explanation
from sklearn.datasets import make_moons
X, y = make_moons(n_samples=400,
noise=0.1,
random_state=1)
plt.scatter(X[:,0], X[:,1])
plt.show()
from sklearn.cluster import DBSCAN
db = DBSCAN(eps=0.2,
min_samples=10,
metric='euclidean')
prediction = db.fit_predict(X)
print("Predicted labels:\n", prediction)
plt.scatter(X[:, 0], X[:, 1], c=prediction);
Explanation: Density-based Clustering - DBSCAN
Another useful approach to clustering is Density-based Spatial Clustering of Applications with Noise (DBSCAN). In essence, we can think of DBSCAN as an algorithm that divides the dataset into subgroups based on dense regions of points.
In DBSCAN, we distinguish between 3 different "points":
Core points: A core point is a point that has at least a minimum number of other points (MinPts) in its radius epsilon.
Border points: A border point is a point that is not a core point, since it doesn't have enough MinPts in its neighborhood, but lies within the radius epsilon of a core point.
Noise points: All other points that are neither core points nor border points.
A nice feature about DBSCAN is that we don't have to specify a number of clusters upfront. However, it requires the setting of additional hyperparameters such as the value for MinPts and the radius epsilon.
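As a small illustration (the attribute names come from scikit-learn's DBSCAN estimator fitted above), the three kinds of points can be recovered after fitting:
import numpy as np
core_mask = np.zeros_like(prediction, dtype=bool)
core_mask[db.core_sample_indices_] = True     # core points
noise_mask = (prediction == -1)               # noise points are labeled -1
border_mask = ~core_mask & ~noise_mask        # remaining clustered points are border points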
End of explanation
from sklearn.datasets import make_circles
X, y = make_circles(n_samples=1500,
factor=.4,
noise=.05)
plt.scatter(X[:, 0], X[:, 1], c=y);
# %load solutions/20_clustering_comparison.py
Explanation: Exercise
<div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>
Using the following toy dataset, two concentric circles, experiment with the three different clustering algorithms that we used so far: `KMeans`, `AgglomerativeClustering`, and `DBSCAN`.
Which clustering algorithms reproduces or discovers the hidden structure (pretending we don't know `y`) best?
Can you explain why this particular algorithm is a good choice while the other 2 "fail"?
</li>
</ul>
</div>
End of explanation
<END_TASK> |
15,975 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Estimating At-Work Subtour Frequency
This notebook illustrates how to re-estimate a single model component for ActivitySim. This process
includes running ActivitySim in estimation mode to read household travel survey files and write out
the estimation data bundles used in this notebook. To review how to do so, please visit the other
notebooks in this directory.
Load libraries
Step1: We'll work in our test directory, where ActivitySim has saved the estimation data bundles.
Step2: Load data and prep model for estimation
Step3: Review data loaded from the EDB
The next step is to read the EDB, including the coefficients, model settings, utilities specification, and chooser and alternative data.
Coefficients
Step4: Utility specification
Step5: Chooser data
Step6: Estimate
With the model setup for estimation, the next step is to estimate the model coefficients. Make sure to use a sufficiently large household sample and set of zones to avoid an over-specified model, which does not have a numerically stable likelihood-maximizing solution. Larch has built-in estimation methods including BHHH, and also offers access to more advanced general-purpose non-linear optimizers in the scipy package, including SLSQP, which allows for bounds and constraints on parameters. BHHH is the default and typically runs faster, but does not follow constraints on parameters.
Step7: Estimated coefficients
Step8: Output Estimation Results
Step9: Write the model estimation report, including coefficient t-statistic and log likelihood
Step10: Next Steps
The final step is to either manually or automatically copy the *_coefficients_revised.csv file to the configs folder, rename it to *_coefficients.csv, and run ActivitySim in simulation mode. | <ASSISTANT_TASK:>
Python Code:
import os
import larch # !conda install larch -c conda-forge # for estimation
import pandas as pd
Explanation: Estimating At-Work Subtour Frequency
This notebook illustrates how to re-estimate a single model component for ActivitySim. This process
includes running ActivitySim in estimation mode to read household travel survey files and write out
the estimation data bundles used in this notebook. To review how to do so, please visit the other
notebooks in this directory.
Load libraries
End of explanation
os.chdir('test')
Explanation: We'll work in our test directory, where ActivitySim has saved the estimation data bundles.
End of explanation
modelname = "atwork_subtour_frequency"
from activitysim.estimation.larch import component_model
model, data = component_model(modelname, return_data=True)
Explanation: Load data and prep model for estimation
End of explanation
data.coefficients
Explanation: Review data loaded from the EDB
The next step is to read the EDB, including the coefficients, model settings, utilities specification, and chooser and alternative data.
Coefficients
End of explanation
data.spec
Explanation: Utility specification
End of explanation
data.chooser_data
Explanation: Chooser data
End of explanation
model.estimate(method='SLSQP')
Explanation: Estimate
With the model setup for estimation, the next step is to estimate the model coefficients. Make sure to use a sufficiently large household sample and set of zones to avoid an over-specified model, which does not have a numerically stable likelihood-maximizing solution. Larch has built-in estimation methods including BHHH, and also offers access to more advanced general-purpose non-linear optimizers in the scipy package, including SLSQP, which allows for bounds and constraints on parameters. BHHH is the default and typically runs faster, but does not follow constraints on parameters.
End of explanation
model.parameter_summary()
Explanation: Estimated coefficients
End of explanation
from activitysim.estimation.larch import update_coefficients
result_dir = data.edb_directory/"estimated"
update_coefficients(
model, data, result_dir,
output_file=f"{modelname}_coefficients_revised.csv",
);
Explanation: Output Estimation Results
End of explanation
model.to_xlsx(
result_dir/f"{modelname}_model_estimation.xlsx",
data_statistics=False,
)
Explanation: Write the model estimation report, including coefficient t-statistic and log likelihood
End of explanation
pd.read_csv(result_dir/f"{modelname}_coefficients_revised.csv")
Explanation: Next Steps
The final step is to either manually or automatically copy the *_coefficients_revised.csv file to the configs folder, rename it to *_coefficients.csv, and run ActivitySim in simulation mode.
End of explanation
<END_TASK> |